doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.08691 | 46 | [Figure 7, panels (a)–(d): bar charts of attention forward + backward speed (TFLOPs/s) on an H100 80GB SXM5 GPU, comparing PyTorch, FlashAttention, and FlashAttention-2 across sequence lengths 512–16k. Panels: (a) without causal mask, head dimension 64; (b) without causal mask, head dimension 128; (c) with causal mask, head dimension 64; (d) with causal mask, head dimension 128.]
Figure 7: Attention forward + backward speed on H100 GPU
# Acknowledgments | 2307.08691#46 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
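The Figure 7 chunk above reports attention forward + backward speed in TFLOPs/s across sequence lengths. As a rough illustration of how such numbers are typically produced, here is a minimal timing harness; it is our own sketch, not the paper's benchmark code. The FLOP-counting convention (forward ≈ 4·batch·heads·seqlen²·headdim, backward ≈ 2.5× forward, causal masking halving the count) follows the convention stated in the FlashAttention papers, and `scaled_dot_product_attention` may dispatch to different kernels depending on the PyTorch build.

```python
import time
import torch
import torch.nn.functional as F

def attention_tflops(batch, heads, seqlen, headdim, causal=False, iters=20):
    # Random half-precision inputs, as in typical attention benchmarks.
    q, k, v = (torch.randn(batch, heads, seqlen, headdim, device="cuda",
                           dtype=torch.float16, requires_grad=True)
               for _ in range(3))
    dout = torch.randn_like(q)
    for _ in range(5):  # warm-up: exclude kernel selection/compilation
        F.scaled_dot_product_attention(q, k, v, is_causal=causal).backward(dout)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        F.scaled_dot_product_attention(q, k, v, is_causal=causal).backward(dout)
    torch.cuda.synchronize()
    seconds = (time.time() - start) / iters
    # Forward: two s x s x d matmuls -> 4*b*h*s^2*d FLOPs; backward ~2.5x that.
    fwd = 4 * batch * heads * seqlen**2 * headdim * (0.5 if causal else 1.0)
    return 3.5 * fwd / seconds / 1e12

for s in (512, 1024, 2048, 4096, 8192, 16384):
    tf = attention_tflops(batch=32768 // s, heads=16, seqlen=s, headdim=64)
    print(f"seqlen={s}: {tf:.0f} TFLOPs/s")
```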
2307.08701 | 46 | Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct, 2023.
Mohammad Motamedi, Nikolay Sakharnykh, and Tim Kaldewey. A data-centric approach for training deep neural networks with less data. arXiv preprint arXiv:2110.03613, 2021.
OpenAI. ChatGPT. https://openai.com/blog/chatgpt, 2023a.
OpenAI. Gpt-4 technical report. arXiv, 2023b.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744, 2022a. | 2307.08701#46 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
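The AlpaGasus summary in the row above describes rating each instruction/response pair with a strong LLM and keeping only the high-scoring ones. A minimal sketch of that selection loop follows; the prompt wording, the 4.5 threshold, and the `chat` callable (a stand-in for any chat-completion client) are our assumptions, not the paper's exact setup.

```python
import re

def llm_score(chat, instruction, response):
    """Ask a strong LLM to grade one (instruction, response) pair on 0-5;
    `chat` is a stand-in for any chat-completion client."""
    prompt = ("Rate the quality of the Response to the Instruction on a 0-5 "
              "scale and reply with only the number.\n\n"
              f"Instruction: {instruction}\nResponse: {response}")
    match = re.search(r"\d+(?:\.\d+)?", chat(prompt))
    return float(match.group()) if match else 0.0

def select_high_quality(chat, examples, threshold=4.5):
    """Keep only examples whose LLM-assigned score clears the threshold,
    e.g. distilling 52k Alpaca pairs down to a small high-quality subset."""
    return [ex for ex in examples
            if llm_score(chat, ex["instruction"], ex["output"]) >= threshold]
```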
Bowman, S. R., Hyun, J., Perez, E., Chen, E., Pettit, C., Heiner, S., Lukošiūtė, K., Askell, A., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., Olah, C., Amodei, D., Amodei, D., Drain, D., Li, D., Tran-Johnson, E., Kernion, J., Kerr, J., Mueller, J., Ladish, J., Landau, J., Ndousse, K., Lovitt, L., Elhage, N., Schiefer, N., Joseph, N., Mercado, N., DasSarma, N., Larson, R., McCandlish, S., Kundu, S., Johnston, S., Kravec, S., El Showk, S., Fort, S., Telleen-Lawton, T., Brown, T., Henighan, T., Hume, T., Bai, Y., Hatfield-Dodds, Z., Mann, B., and Kaplan, J. Measuring progress on scalable oversight for large language models. arXiv preprint 2211.03540, 2022. | 2307.13702#46 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
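The summary in the row above probes CoT faithfulness by intervening on the chain of thought, e.g. truncating it and checking whether the final answer changes. Below is a minimal sketch of such a truncation probe; `get_answer` is a hypothetical wrapper around the model's sampling interface, not an API from the paper.

```python
def truncation_probe(get_answer, question, cot_sentences):
    """Early-answering probe: feed the model growing prefixes of its own CoT
    and record whether the answer already matches the full-CoT answer. If it
    matches even for very short prefixes, the later reasoning steps may be
    post-hoc rather than load-bearing. `get_answer(question, steps)` is a
    hypothetical function returning the model's final answer."""
    final = get_answer(question, cot_sentences)
    return [(i / max(len(cot_sentences), 1),
             get_answer(question, cot_sentences[:i]) == final)
            for i in range(len(cot_sentences) + 1)]
```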
2307.08303 | 47 | 5 EXPERIMENTAL RESULTS 5.1 SPTAR vs Baseline Models (RQ1) As presented in Table 6, our SPTAR approach obtains the best results for all three retrievers, showcasing an average improvement of 7.3% to 18.04% in NDCG@10 compared with BM25. InPars [2] is next: it improves ColBERT and BM25CE in NDCG@10 by 3.08% and 7.3% on average, respectively, compared with BM25, while for DPR it is 11.15% lower than BM25. W/O Aug exhibits the worst performance and cannot beat BM25. These outcomes serve as compelling evidence for the effectiveness of our proposed SPTAR model. By harnessing the benefits of soft prompt tuning and LLMs, our model generates high-quality weak queries that
[Figure 3, panels (a)–(c): t-SNE visualizations of the soft prompt's virtual-token embeddings for (a) different datasets (MS MARCO, FiQA-2018), (b) different LLMs, and (c) different soft prompt lengths (including 40 and 80); axes are t-SNE dim 1 and t-SNE dim 2.] | 2307.08303#47 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
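The SPTAR summary in the row above centers on optimizing a task-specific soft prompt on limited ground-truth document/query pairs while the LLM stays frozen. A minimal sketch of soft prompt tuning with the Hugging Face `peft` library follows; the checkpoint name, the 50 virtual tokens, and the document→query formatting are illustrative assumptions, not the paper's configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base = "huggyllama/llama-7b"  # illustrative checkpoint; SPTAR reports LLaMA-7B/Vicuna-7B
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Freeze the LLM; train only the virtual-token embeddings of the soft prompt.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.RANDOM,
    num_virtual_tokens=50,              # soft prompt length (assumed value)
    tokenizer_name_or_path=base,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()      # only a tiny fraction of the 7B weights

def training_text(document, query):
    # One (document, ground-truth query) pair rendered as a causal-LM target,
    # so the soft prompt learns to make the LLM emit queries for documents.
    return f"Document: {document}\nRelevant query: {query}"
```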
2307.08621 | 47 | # A Hyperparameters
| Hyperparameters | 1.3B | 2.7B | 6.7B |
|---|---|---|---|
| Layers | 24 | 32 | 32 |
| Hidden size | 2048 | 2560 | 4096 |
| FFN size | 4096 | 5120 | 8192 |
| Heads | 8 | 10 | 16 |
| Learning rate | $6 \times 10^{-4}$ | $3 \times 10^{-4}$ | $3 \times 10^{-4}$ |
| LR scheduler | Polynomial decay | Polynomial decay | Polynomial decay |
| Warm-up steps | 375 | 375 | 375 |
| Tokens per batch | 4M | 4M | 4M |
| Adam β | (0.9, 0.98) | (0.9, 0.98) | (0.9, 0.98) |
| Training steps | 25,000 | 25,000 | 25,000 |
| Gradient clipping | 2.0 | 2.0 | 2.0 |
| Dropout | 0.1 | 0.1 | 0.1 |
| Weight decay | 0.01 | 0.01 | 0.01 |
Table 7: Hyperparameters used for the models in Section 3.
# B Grouped Results of Different Context Lengths
As shown in Table 8, we report language modeling results with different context lengths. In order to make the numbers comparable, we use 2048-token text chunks as evaluation data and only compute perplexity for the last 128 tokens. Experimental results show that RetNet outperforms Transformer across different context lengths. Besides, RetNet can utilize longer context for better results.
| Model | 512 | 1024 | 2048 |
|---|---|---|---|
| Transformer | 13.55 | 12.56 | 12.35 |
| RetNet | 13.09 | 12.14 | 11.98 |
Table 8: Language modeling perplexity of RetNet and Transformer with different context lengths. The results show that RetNet has a consistent advantage across sequence lengths.
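A minimal sketch of the evaluation protocol described above (perplexity over only the last 128 tokens of each fixed-length chunk), written against a generic Hugging Face-style causal LM; it is our illustration, not the paper's evaluation code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def tail_perplexity(model, chunks, tail=128):
    """Perplexity computed over only the last `tail` tokens of each chunk, so
    runs with different context lengths are scored on exactly the same tokens."""
    total_nll, total_tok = 0.0, 0
    for ids in chunks:                         # ids: 1-D LongTensor (e.g. length 2048)
        logits = model(ids.unsqueeze(0)).logits[0]
        # logits[t] predicts ids[t + 1]; score the final `tail` targets only.
        nll = F.cross_entropy(logits[-tail - 1:-1], ids[-tail:], reduction="sum")
        total_nll += nll.item()
        total_tok += tail
    return float(torch.exp(torch.tensor(total_nll / total_tok)))
```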
| 2307.08621#47 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
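The RetNet summary in the row above describes equivalent parallel and recurrent computation paradigms for retention. Below is a minimal single-head, real-valued sketch of that equivalence; it omits RetNet's xPos-style rotations, multi-scale decay, and group normalization, and is only meant to show the $O(1)$-state recurrence.

```python
import torch

def parallel_retention(q, k, v, gamma):
    """Parallel form: O = (Q K^T * D) V with decay D[n, m] = gamma^(n-m) for n >= m, else 0."""
    s = q.shape[0]
    n = torch.arange(s).view(-1, 1)
    m = torch.arange(s).view(1, -1)
    D = torch.where(n >= m, gamma ** (n - m).float(), torch.zeros(1))
    return (q @ k.T * D) @ v

def recurrent_retention(q, k, v, gamma):
    """Recurrent form: S_n = gamma * S_{n-1} + k_n^T v_n, o_n = q_n S_n.
    Only a d x d state is carried, giving O(1) memory per decoding step."""
    S = torch.zeros(k.shape[1], v.shape[1])
    out = []
    for t in range(q.shape[0]):
        S = gamma * S + torch.outer(k[t], v[t])
        out.append(q[t] @ S)
    return torch.stack(out)

# The two paradigms produce the same output.
q, k, v = torch.randn(3, 16, 8).unbind(0)
assert torch.allclose(parallel_retention(q, k, v, 0.9),
                      recurrent_retention(q, k, v, 0.9), atol=1e-5)
```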
2307.08691 | 47 | Figure 7: Attention forward + backward speed on H100 GPU
# Acknowledgments
We thank Phil Tillet and Daniel Haziza, who have implemented versions of FlashAttention in Triton [17] and the xformers library [10]. FlashAttention-2 was motivated by exchange of ideas between different ways that attention could be implemented. We are grateful to the Nvidia CUTLASS team (especially Vijay Thakkar, Cris Cecka, Haicheng Wu, and Andrew Kerr) for their CUTLASS library, in particular the CUTLASS 3.x release, which provides clean abstractions and powerful building blocks for the implementation of FlashAttention-2. We thank Driss Guessous for integrating FlashAttention into PyTorch. FlashAttention-2 has benefited from helpful discussions with Phil Wang, Markus Rabe, James Bradbury, Young-Jun Ko, Julien Launay, Daniel Hesslow, Michaël Benesty, Horace He, Ashish Vaswani, and Erich Elsen. Thanks to Stanford CRFM and Stanford NLP for the compute support. We thank Dan Fu and Christopher Ré for their collaboration, constructive feedback, and constant encouragement on this line of work of designing hardware-efficient algorithms. We thank Albert Gu and Beidi Chen for their helpful suggestions on early drafts of this technical report.
# References | 2307.08691#47 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
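The summary repeated in the row above attributes part of FlashAttention-2's speedup to cutting non-matmul FLOPs in the softmax rescaling. Below is a minimal numpy sketch of the underlying online-softmax/tiled-attention idea (the streaming-softmax technique of arXiv 1805.02867, which appears in this row's reference ids); it is our illustration of the principle, not the paper's kernel.

```python
import numpy as np

def tiled_attention(Q, K, V, block=32):
    """Online-softmax attention: stream over K/V blocks, maintaining a running
    row-max m and normalizer l so the full s x s score matrix never exists.
    Deferring the division by l to one final step (instead of renormalizing O
    every block) is the kind of non-matmul FLOP saving the summary refers to."""
    s, d = Q.shape
    O = np.zeros((s, V.shape[1]))
    m = np.full(s, -np.inf)   # running max of scores per query row
    l = np.zeros(s)           # running softmax denominator per query row
    for j in range(0, K.shape[0], block):
        S = Q @ K[j:j + block].T / np.sqrt(d)
        m_new = np.maximum(m, S.max(axis=1))
        alpha = np.exp(m - m_new)             # correction for the old max
        P = np.exp(S - m_new[:, None])
        l = alpha * l + P.sum(axis=1)
        O = alpha[:, None] * O + P @ V[j:j + block]
        m = m_new
    return O / l[:, None]                     # single normalization at the end

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 128, 64))
S = Q @ K.T / np.sqrt(64)
P = np.exp(S - S.max(axis=1, keepdims=True))
assert np.allclose(tiled_attention(Q, K, V), (P / P.sum(axis=1, keepdims=True)) @ V)
```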
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744, 2022b.
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.
ShareGPT. Sharegpt. 2023. URL sharegpt.com.
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, , and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. | 2307.08701#47 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
Branwen, G., 01 2023. URL https://www.lesswrong.com/posts/bwyKCQD7PFWKhELMr/by-default-gpts-think-in-plain-sight?commentId=zfzHshctWZYo8JkLe.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G.,
Du, Y., Li, S., Torralba, A., Tenenbaum, J. B., and Mordatch, I. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint 2305.14325, 2023. | 2307.13702#47 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 48 | (a) Different datasets. (b) Different LLMs. (c) Different lengths.
Figure 3: T-SNE embedding visualization of the soft prompt's virtual tokens: a) soft prompt's virtual tokens with different datasets; b) soft prompt's virtual tokens with different LLMs; c) virtual tokens of soft prompt with different lengths.
greatly enhance DR tasks. Moreover, the consistent improvements observed across all three retrievers substantiate the general applicability of our approach, extending beyond specific dense retrievers. Notably, among the three retrievers, BM25CE consistently achieves the best overall results across all three methods (W/O, InPars [2], and SPTAR), aligning with the findings of other studies such as [32]. It is worth noting that in the absence of augmentation data, all three dense retrievers perform worse than the unsupervised model BM25. This underscores the significant reliance of DR on domain-specific labeled data and highlights the limitations of directly training dense retrievers in scenarios with limited ground-truth data, where the expected performance may not be attainable. | 2307.08303#48 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
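The chunk above analyzes Figure 3 by projecting learned virtual-token embeddings with t-SNE. A minimal sketch of producing such a plot with scikit-learn, assuming `prompts` maps a label (dataset, LLM, or prompt length) to its learned virtual-token matrix; the perplexity setting is an arbitrary choice, not the paper's.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_virtual_tokens(prompts, perplexity=15):
    """prompts: dict mapping a label (dataset, LLM, or prompt length) to a
    (num_virtual_tokens, hidden_size) array of learned virtual-token vectors."""
    labels = list(prompts)
    X = np.concatenate([prompts[lab] for lab in labels])
    emb = TSNE(n_components=2, perplexity=perplexity, random_state=0).fit_transform(X)
    i = 0
    for lab in labels:
        n = len(prompts[lab])
        plt.scatter(emb[i:i + n, 0], emb[i:i + n, 1], s=12, label=lab)
        i += n
    plt.xlabel("t-SNE dim 1"); plt.ylabel("t-SNE dim 2"); plt.legend()
    plt.show()
```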
2307.08691 | 48 | # References
[1] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245, 2023.
[2] Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
[3] Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, and Christopher Ré. Scatterbrain: Unifying sparse and low-rank attention. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
[4] Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. In International Conference on Learning Representations (ICLR), 2020. | 2307.08691#48 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 48 | Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. | 2307.08701#48 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 49 | 5.2 Ablation Study In this section, we primarily aim to assess the individual contributions of different modules to the overall performance of our proposed SPTAR. We conducted experiments focusing on evaluating the perplexity and NDCG@10 metrics. The perplexity metric, derived from the $D_{eval}^{100}$ dataset, provided insights into the model's text generation quality. The default NDCG@10 scores in this section are obtained by evaluating the SPTAR-DPR model trained, evaluated, and tested on $D_{train}^{50}$ plus the weak data $W$, $D_{eval}^{100}$, and $D_{test}$, respectively. We didn't filter $W$ so that the NDCG@10 score can directly reflect the quality of the weak data. | 2307.08303#49 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
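The ablation in the row above reports NDCG@10 throughout. For reference, here is a minimal binary-relevance implementation of that metric (standard log2-discounted formulation); a generic sketch, not the paper's evaluation code.

```python
import math

def ndcg_at_k(ranked_ids, relevant_ids, k=10):
    """Binary-relevance NDCG@k: DCG of the ranking divided by the DCG of an
    ideal ranking that places all relevant documents first."""
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, doc in enumerate(ranked_ids[:k]) if doc in relevant_ids)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(min(len(relevant_ids), k)))
    return dcg / idcg if idcg > 0 else 0.0

# Example: the single relevant document sits at rank 2.
print(ndcg_at_k(["d3", "d1", "d7"], {"d1"}))  # 1/log2(3) ≈ 0.631
```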
[5] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022.
[6] Zhe Jia and Peter Van Sandt. Dissecting the Ampere GPU architecture via microbenchmarking. GPU Technology Conference, 2021.
[7] Zhe Jia, Marco Maggioni, Benjamin Staiger, and Daniele P Scarpazza. Dissecting the nvidia Volta GPU architecture via microbenchmarking. arXiv preprint arXiv:1804.06826, 2018.
[8] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pages 5156–5165. PMLR, 2020.
[9] Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In The International Conference on Machine Learning (ICML), 2020.
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017a. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017b.
| 2307.08701#49 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
Ganguli, D., Askell, A., Schiefer, N., Liao, T. I., Lukošiūtė, K., Chen, A., Goldie, A., Mirhoseini, A., Olsson, C., Hernandez, D., Drain, D., Li, D., Tran-Johnson, E., Perez, E., Kernion, J., Kerr, J., Mueller, J., Landau, J., Ndousse, K., Nguyen, K., Lovitt, L., Sellitto, M., Elhage, N., Mercado, N., DasSarma, N., Rausch, O., Lasenby, R., Larson, R., Ringer, S., Kundu, S., Kadavath, S., Johnston, S., Kravec, S., Showk, S. E., Lanham, T., Telleen-Lawton, T., Henighan, T., Hume, T., Bai, Y., Hatfield-Dodds, Z., Mann, B., Amodei, D., Joseph, N., McCandlish, S.,
| 2307.13702#49 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 50 | 5.2.1 The Impact of Soft Prompt Tuning Module (RQ2). To gain deeper insights into the learning process of the soft prompt's parameters, we employed the t-SNE algorithm [34] to visualize the prompt's virtual token vectors after the parameters converged with different datasets and LLMs.
datasets to the learned prompt token vectors. This accomplishment is particularly noteworthy in the scenario where ground-truth data are so limited that human-written prompts struggle to capture domain-specific information and incorporate it effectively into the prompt design. | 2307.08303#50 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
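The chunk above mentions decoding the learned virtual tokens into corresponding words. One common way to do this is a nearest-neighbor lookup against the LLM's input embedding matrix; the sketch below is our own illustration of that technique, and the commented peft attribute paths are assumptions that may vary by library version.

```python
import torch
import torch.nn.functional as F

def nearest_vocab_tokens(virtual_tokens, embedding_matrix, tokenizer, top_k=1):
    """Decode each learned virtual-token vector to its closest vocabulary
    token(s) by cosine similarity against the LLM's input embedding matrix."""
    vt = F.normalize(virtual_tokens, dim=-1)
    emb = F.normalize(embedding_matrix, dim=-1)
    ids = (vt @ emb.T).topk(top_k, dim=-1).indices    # (num_virtual, top_k)
    return [tokenizer.convert_ids_to_tokens(row.tolist()) for row in ids]

# Possible usage with a peft prompt-tuning model (attribute paths vary by version):
# vt = model.prompt_encoder["default"].embedding.weight.detach()
# emb = model.get_input_embeddings().weight.detach()
# print(nearest_vocab_tokens(vt, emb, tokenizer))
```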
2307.08691 | 50 | [10] Benjamin Lefaudeux, Francisco Massa, Diana Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang, Patrick Labatut, and Daniel Haziza. xformers: A modular and hackable transformer modelling library. https://github.com/facebookresearch/xformers, 2022.
[11] Maxim Milakov and Natalia Gimelshein. Online normalizer calculation for softmax. arXiv preprint arXiv:1805.02867, 2018.
[12] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023.
[13] Markus N Rabe and Charles Staats. Self-attention does not need $O(n^2)$ memory. arXiv preprint arXiv:2112.05682, 2021.
[14] Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9:53–68, 2021.
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 50 | Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926, 2023.
Xuanhui Wang, Nadav Golbandi, Michael Bendersky, Donald Metzler, and Marc Najork. Position bias estimation for unbiased learning to rank in personal search. In Proceedings of the eleventh ACM international conference on web search and data mining, pp. 610–618, 2018.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions, 2023. | 2307.08701#50 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 50 | Brown, T., Olah, C., Clark, J., Bowman, S. R., and Kaplan, J. The capacity for moral self-correction in large language models, 2023.
URL https://aclanthology.org/2022.acl-long.229.
Gao, L. Shapley value attribution in chain of thought. https://www.lesswrong.com/posts/FX5JmftqL2j6K8dn4/shapley-value-attribution-in-chain-of-thought, 04 2023.
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., and Yang, G.-Z. XAI—explainable artificial intelligence. Science Robotics, 4(37):eaay7120, 2019. doi: 10.1126/scirobotics.aay7120. URL https://www.science.org/doi/abs/10.1126/scirobotics.aay7120.
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 51 | In Figure 3b, the points of different colors represent distinct LLMs: GPT-2, LLaMA-7B, and Vicuna-7B. We kept all the hyperparameters the same except for the language model to analyze the influence of different language models on the learned prompt parameters. The dispersion of points with the same color indicates the extent of parameter updates during training. Figure 3b clearly illustrates that the red point cloud representing the GPT-2 model has less dispersion, with points tightly clustered together. In contrast, the blue point cloud representing LLaMA-7B and the green point cloud representing Vicuna-7B exhibit greater dispersion of virtual token vectors. This observation suggests that, when trained on the same dataset, the LLaMA-7B and Vicuna-7B models enable the soft prompt module to acquire more domain-specific knowledge, leading to an enhancement in the generation of synthesized queries. Moreover, similar findings were obtained when decoding the virtual tokens into corresponding words. For instance, after training the GPT-2 model, we observed that the resulting soft prompt merely replicates the prompt tokens used during initialization, essentially | 2307.08303#51 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
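The soft-prompt-tuning setup described in the chunk above is easy to reproduce in spirit with the HuggingFace PEFT library. This is a minimal sketch, not the authors' released code; the base model name, virtual-token count, and init text are illustrative placeholders.

```python
# Minimal soft prompt tuning: only the virtual-token embeddings are trainable,
# while the LLM's own parameters stay frozen.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base = "gpt2"  # swap in LLaMA-7B / Vicuna-7B as in the experiments
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    num_virtual_tokens=50,  # a soft prompt length studied in the paper
    prompt_tuning_init_text="Generate a relevant query for this document:",
    tokenizer_name_or_path=base,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the soft prompt trains
```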
2307.08691 | 51 | [15] Noam Shazeer. Fast transformer decoding: One write-head is all you need. arXiv preprint arXiv:1911.02150, 2019.
[16] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019.
[17] Philippe Tillet, Hsiang-Tsung Kung, and David Cox. Triton: an intermediate language and compiler for tiled neural network computations. In Proceedings of the 3rd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, pages 10-19, 2019.
[18] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[19] Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020. | 2307.08691#51 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
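Fused FlashAttention-style kernels are also reachable without writing custom CUDA: on supported GPUs and dtypes, PyTorch's `scaled_dot_product_attention` can dispatch to a flash-style backend. A small timing sketch (assumes a CUDA device; this is not the paper's benchmark code):

```python
# Time one fused attention call with CUDA events (milliseconds).
import torch
import torch.nn.functional as F

def time_attention(seqlen: int, heads: int = 16, dim: int = 64, batch: int = 8):
    q, k, v = (
        torch.randn(batch, heads, seqlen, dim, device="cuda", dtype=torch.float16)
        for _ in range(3)
    )
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    end.record()
    torch.cuda.synchronize()
    return out, start.elapsed_time(end)
```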
2307.08701 | 51 | Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, and Xia Hu. Data-centric artificial intelligence: A survey. arXiv preprint arXiv:2303.10158, 2023.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685, 2023.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023.
# Appendix
# Table of Contents | 2307.08701#51 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 51 | Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=d7KBjmI3GmQ.
Ling, W., Yogatama, D., Dyer, C., and Blunsom, P. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 158-167, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1015. URL https://aclanthology.org/P17-1015. | 2307.13702#51 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 52 | For instance, after training the GPT-2 model, we observed that the resulting soft prompt merely replicates the prompt tokens used during initialization, essentially duplicating the manual prompt without additional learning. In contrast, when decoding the virtual token vectors into words utilizing LLaMA-7B and Vicuna-7B, we discovered that these models not only retain the initial prompt tokens but also acquire additional symbols and representations associated with relevant text, such as "query", "rewrite", "argument", "enhance", and "adding", indicating that the parameters θ do learn task-specific knowledge. | 2307.08303#52 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
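The "decoding virtual tokens into words" analysis in the chunk above amounts to a nearest-neighbor lookup against the model's input embedding matrix. A generic sketch follows; how you extract the `soft_prompt` and `embedding` tensors depends on the tuning library and version, so both are passed in explicitly.

```python
# Map each learned virtual token to its closest vocabulary tokens.
import torch

def nearest_tokens(soft_prompt: torch.Tensor, embedding: torch.Tensor,
                   tokenizer, k: int = 3):
    """soft_prompt: (num_virtual_tokens, dim); embedding: (vocab, dim)."""
    sp = torch.nn.functional.normalize(soft_prompt, dim=-1)
    emb = torch.nn.functional.normalize(embedding, dim=-1)
    sims = sp @ emb.T                   # cosine similarity per virtual token
    top = sims.topk(k, dim=-1).indices  # nearest vocab ids
    return [[tokenizer.decode(i) for i in row] for row in top]
```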
2307.08701 | 52 | A Frequently Asked Questions
A.1 Is there any bias contained in the evaluation prompts?
A.2 Have you tried other LLM filter?
A.3 What about the results on other base models, e.g., LLaMA-2?
A.4 Can your LLM filter evaluate the stronger model's responses, e.g., filtering the responses given by GPT-4?
A.5 Results on other rating dimensions, e.g., helpfulness?
B Additional Results on Dolly Dataset
B.1 Score Distribution
B.2 Benchmark results
B.3 Dolly-13B Results | 2307.08701#52 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 52 | Liu, J., Cui, L., Liu, H., Huang, D., Wang, Y., and Zhang, Y. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. In Bessiere, C. (ed.), Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pp. 3622-3628. International Joint Conferences on Artificial Intelligence Organization, 7 2020. doi: 10.24963/ijcai.2020/501. URL https://doi.org/10.24963/ijcai.2020/501. Main track.
Holtzman, A., Buys, J., Du, L., Forbes, M., and Choi, Y. The curious case of neural text degeneration. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rygGQyrFvH.
Holzinger, A., Biemann, C., Pattichis, C. S., and Kell, D. B. What do we need to build explainable ai systems for the medical domain? arXiv preprint 1712.09923, 2017. | 2307.13702#52 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 53 | Figure 3a presents the distribution of virtual token vectors in a two-dimensional space. We utilized the LLaMA-7B language model with a virtual token length of 50 for this experiment. The red and blue points indicate the MS MARCO and FiQA datasets, respectively. The visual analysis clearly reveals that the virtual token vectors from the two datasets exhibit distinct distributions in the two-dimensional space, with minimal overlap. Notably, at the model initialization phase, both datasets share the same initial prompt, making the observed changes in vector distribution after convergence particularly significant. These findings highlight the remarkable capability of prompt tuning to distill domain-specific knowledge from
In Figure 3c, we analyze the influence of different soft prompt lengths on the soft prompt tuning module by examining the distribution of the learned soft prompt's virtual token vectors in vector space. This experiment was conducted on LLaMA-7B and the MS MARCO dataset, and all the hyperparameters are the same except for the soft prompt length. The three lengths 40, 50, and 80 are represented by the colors red, blue, and green, respectively. From the point distribution in Figure 3c, we observe partial overlap between the red and blue points, as well as some distinct points. As the virtual token length increases, the embedding distribution area of the longer soft
| 2307.08303#53 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
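The 2-D views behind Figure 3a/3c can be reproduced with an off-the-shelf projection. A sketch follows, with random arrays standing in for the learned soft prompts; the paper's exact projection method is not stated in this chunk, so using t-SNE here is an assumption.

```python
# Project virtual-token vectors from two tasks to 2-D and color by dataset.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

tokens_msmarco = np.random.randn(50, 4096)  # stand-ins for learned soft prompts
tokens_fiqa = np.random.randn(50, 4096)

points = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(
    np.vstack([tokens_msmarco, tokens_fiqa])
)
plt.scatter(points[:50, 0], points[:50, 1], c="red", label="MS MARCO")
plt.scatter(points[50:, 0], points[50:, 1], c="blue", label="FiQA")
plt.legend()
plt.show()
```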
2307.08701 | 53 | C Details of GPT-4 Evaluation Prompt
D Training Hyperparameter Details
D.1 Alpaca Dataset
D.2 Dolly Dataset
E Keywords set for detailed analysis
F Rated examples in Alpaca Dataset
G Rated examples in Dolly Dataset
H Analysis
H.1 Analysis on WizardLM Test Set
H.2 Analysis on Vicuna Test Set
I Detailed Analysis on the WizardLM testset
J Human Study
K Limitations | 2307.08701#53 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 53 | Jacovi, A. and Goldberg, Y. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4198-4205, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.386. URL https://aclanthology.org/2020.acl-main.386.
Lanham, T. Externalized reasoning oversight: a research direction for language model alignment, 08 2022. URL https://www.lesswrong.com/posts/FRRb6Gqem8k69ocbi/externalized-reasoning-oversight-a-research-direction-for.
Lyu, Q., Havaldar, S., Stein, A., Zhang, L., Rao, D., Wong, E., Apidianaki, M., and Callison-Burch, C. Faithful chain-of-thought reasoning. arXiv preprint 2301.13379, 2023.
Madaan, A. and Yazdanbakhsh, A. Text and patterns: For effective chain of thought, it takes two to tango. arXiv preprint 2209.07686, 2022. | 2307.13702#53 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 54 | prompt encompasses the regions corresponding to the shorter ones: 40 and 50. This outcome is consistent with our expectations: with different lengths of soft prompts, the embedding distributions of the soft prompts' virtual tokens are different. Nevertheless, the distributions of the three soft prompts with different lengths should demonstrate relative concentration with substantial overlapping regions.
For RQ2, we draw the following conclusions: a) we can distinguish the datasets from the learned soft prompts, demonstrating that soft prompt tuning does learn task-specific soft prompts; b) both the LLM and the length of the soft prompt influence the learned soft prompts.
LLM          count(θ)/count(Φ)   Best Epoch #
GPT-2        0.0308%             17
LLaMA-7B     0.0030%             5
Vicuna-7B    0.0030%             4
Table 7: Efficiency evaluation of SPTAR's soft prompt tuning module on MS MARCO with 50 training examples | 2307.08303#54 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.13702 | 54 | McKenzie, I. R., Lyzhov, A., Pieler, M., Parrish, A., Mueller, A., Prabhu, A., McLean, E., Kirtland, A., Ross, A., Liu, A., Gritsevskiy, A., Wurgaft, D., Kauffman, D., Recchia, G., Liu, J., Cavanagh, J., Weiss, M., Huang, S., Droid, T. F., Tseng, T., Korbak, T., Shen, X., Zhang, Y., Zhou, Z., Kim, N., Bowman, S. R., and Perez, E. Inverse scaling: When bigger isn't better, 2023.
Mihaylov, T., Clark, P., Khot, T., and Sabharwal, A. Can a suit of armor conduct electricity? A new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2381-2391, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1260. URL https://aclanthology.org/D18-1260. | 2307.13702#54 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 55 | Table 7: Efficiency evaluation of SPTAR's soft prompt tuning module on MS MARCO with 50 training examples
5.2.2 The Efficiency of Soft-Prompt Tuning (RQ3). Table 7 presents a comparison of the training parameters and convergence efficiency achieved through soft prompt tuning, utilizing different language models during the training process on the MS MARCO dataset. For the soft prompt tuning module in our proposed SPTAR, although the count of the LLM's original parameters Φ is extremely large, Φ remain frozen and do not require fine-tuning. The count of trainable parameters θ associated with the fine-tuning of the soft prompt is much smaller. The percentage values in the second column reveal that the fine-tuning of the soft prompt module necessitates a remarkably small number of parameters θ, roughly amounting to 0.003% of the count of Φ. Notably, the count of θ does not increase alongside the expansion of Φ. This characteristic significantly enhances the practicality and training efficiency of SPTAR, as we can fine-tune task-specific soft prompts with a minimal fraction of parameters for optimization. | 2307.08303#55 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
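The count(θ)/count(Φ) column in Table 7 is just the trainable-parameter fraction, which can be computed directly for any PyTorch model in which only the soft prompt has `requires_grad=True`:

```python
# Percentage of trainable parameters, as reported in Table 7.
def trainable_ratio(model) -> float:
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return 100.0 * trainable / total

# e.g. roughly 0.003% for LLaMA-7B / Vicuna-7B with a 50-token soft prompt
```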
2307.08701 | 55 | 14
Preprint
A FREQUENTLY ASKED QUESTIONS
IS THERE ANY BIAS CONTAINED IN THE EVALUATION PROMPTS?
We also explore alternate evaluation prompts such as the prompts provided by Zheng et al. (2023), which are shown in Table 3. We apply the same rules to calculate the "Win-Tie-Lose" and show the results in Fig. 13. Notably, ALPAGASUS consistently outperforms across all test sets.
Figure 13: The experimental results when using the evaluation prompt from Zheng et al. (2023) to judge the two responses. ALPAGASUS could still maintain its advantage. | 2307.08701#55 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 55 | Li, S., Chen, J., Shen, Y., Chen, Z., Zhang, X., Li, Z., Wang, H., Qian, J., Peng, B., Mao, Y., Chen, W., and Yan, X. Explanations from large language models make small reasoners better. arXiv preprint 2210.06726, 2022.
Lin, S., Hilton, J., and Evans, O. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3214-3252, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.229.
Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. Improving language understanding by generative pre-training, 2018. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf. | 2307.13702#55 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 56 | Furthermore, for a new task or dataset, SPTAR can swiftly complete the fine-tuning process of the soft prompt tuning module within a few epochs. As exemplified in the third column of the table, we examined the convergence speed of the soft prompt tuning model on the evaluation dataset of 100 examples (Section 3.1) by the best epoch number; the lower this number is, the faster the model converges. It becomes apparent that employing a more advanced language model expedites the convergence of the soft prompt tuning module, requiring a mere four or five epochs for convergence. Considering both the count of θ and the convergence speed, we can confidently conclude that the soft prompt tuning module leverages the advantages offered by LLMs while effectively mitigating the computational resource consumption associated with fine-tuning the whole LLM. In conclusion, the soft prompt tuning model only fine-tunes a small set of parameters θ, and the training converges quickly on LLMs.
5.2.3 The Impact of Soft Prompt Filter Module (RQ4). With the learned parameters θ in SPTAR's soft prompt tuning module, we find the example document-query pairs in SPTAR's soft prompt
| 2307.08303#56 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 56 | Figure 13: The experimental results when using the evaluation prompt from Zheng et al. (2023) to judge the two responses. ALPAGASUS could still maintain its advantage.
System Prompt: Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. You should choose the assistant that follows the user's instructions and answers the user's question better. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of their responses. Begin your evaluation by comparing the two responses and provide a short explanation. Avoid any positional biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: "[[A]]" if assistant A is better, "[[B]]" if assistant B is better, and "[[C]]" for a tie.
# Prompt Template
[User Question] {question} [The Start of Assistant A's Answer] {Answera} [The End of Assistant A's Answer] [The Start of Assistant B's Answer] {Answerb} [The End of Assistant B's Answer] | 2307.08701#56 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
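Driving the pairwise template above is mostly string formatting plus verdict parsing. A sketch follows, including the position swap the system prompt warns about; `judge` stands in for a GPT-4-style chat call returning text.

```python
# Pairwise comparison with order-swapping to cancel position bias.
import re

TEMPLATE = (
    "[User Question]\n{question}\n"
    "[The Start of Assistant A's Answer]\n{a}\n[The End of Assistant A's Answer]\n"
    "[The Start of Assistant B's Answer]\n{b}\n[The End of Assistant B's Answer]"
)

def parse_verdict(text: str) -> str:
    m = re.search(r"\[\[(A|B|C)\]\]", text)
    return m.group(1) if m else "C"

def compare(judge, question: str, resp1: str, resp2: str) -> str:
    v1 = parse_verdict(judge(TEMPLATE.format(question=question, a=resp1, b=resp2)))
    v2 = parse_verdict(judge(TEMPLATE.format(question=question, a=resp2, b=resp1)))
    v2 = {"A": "B", "B": "A", "C": "C"}[v2]  # map swapped verdict back
    return v1 if v1 == v2 else "C"           # disagreement counts as a tie
```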
2307.13702 | 56 | Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. Language models are unsupervised multitask learners, 2019.
Radhakrishnan, A., Nguyen, K., Kaplan, J., Brauner, J., Bowman, S. R., and Perez, E. Question decomposition improves the faithfulness of model-generated reasoning. arXiv preprint (released concurrently), 2023.
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint 2305.10601, 2023a.
Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1:206â215, 05 2019. doi: 10.1038/s42256-019-0048-x. | 2307.13702#56 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 57 | Dataset          MS MARCO                        FiQA-2018
Filter           Worst      Best                 Worst       Best
PPL (Dec%)       4.1934     3.6649 (+12.60%)     410.9207    5.7898 (+98.59%)
NDCG@10 (Imp%)   0.2132     0.2376 (+11.44%)     0.1855      0.1923 (+3.67%)
Table 8: Evaluation of SPTAR-DPR with the best and worst example document-query pairs in the soft prompt augmentor module. SPTAR-DPR is trained on the labeled split plus the generated weak data and tested on D_test. Results are obtained on LLaMA-7B. For MS MARCO and FiQA-2018, the number of example pairs is 2 and 1, respectively. | 2307.08303#57 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
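The PPL column in Table 8 suggests a natural filtering recipe: score each weak (document, query) pair by the language-model perplexity of the query given its document and keep the low-perplexity pairs. A sketch with a HuggingFace causal LM follows; the prompt wording is an assumption, not the paper's exact template.

```python
# Perplexity of a generated query conditioned on its document.
import torch

@torch.no_grad()
def query_perplexity(model, tokenizer, document: str, query: str) -> float:
    prefix = tokenizer(f"Document: {document}\nQuery: ",
                       return_tensors="pt").input_ids
    target = tokenizer(query, return_tensors="pt",
                       add_special_tokens=False).input_ids
    input_ids = torch.cat([prefix, target], dim=1)
    labels = input_ids.clone()
    labels[:, : prefix.shape[1]] = -100  # score only the query tokens
    loss = model(input_ids=input_ids, labels=labels).loss
    return float(torch.exp(loss))

def keep_low_ppl(pairs, model, tokenizer, threshold: float):
    return [(d, q) for d, q in pairs
            if query_perplexity(model, tokenizer, d, q) <= threshold]
```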
2307.08701 | 57 | Table 3: The GPT-4 evaluation prompt from Zheng et al. (2023).
A.2 HAVE YOU TRIED OTHER LLM FILTER? Yes, we also tried using Claude-2 (https://www.anthropic.com/index/claude-2) as our response quality evaluator (LLM filter). Fig. 14 and Fig. 15 show the score distribution and the evaluation results on the four test sets, respectively. Remarkably, the 7B model instruction-tuned with the 8k selected data is better than the model instruction-tuned with the 52k Alpaca data on 3/4 test sets, and is significantly better than the model instruction-tuned with 8k randomly selected data.
Score Distribution (Claude-2 as LLM filter)
Figure 14: The score distribution of using Claude-2 as the LLM filter.
[Figure 15 legend: Alpaca-7B (Claude-2 selected 8k) wins / Tie / Alpaca-7B (52k) wins; test sets: Vicuna, Koala, WizardLM, Self-Instruct]
Figure 15: The experimental results by using Claude-2 as the response quality evaluator. | 2307.08701#57 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 57 | Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., Radford, A., Amodei, D., and Christiano, P. F. Learning to summarize with human feedback. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 3008–3021. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf.
Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K. R., and Cao, Y. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023b. URL https://openreview.net/forum?id=WE_vluYUL-X. | 2307.13702#57 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 58 | augmentor module does influence the quality of the generated weak data, so it is necessary to select certain k document-query pairs from X_train. In this section, we study the impact of SPTAR's soft prompt filter module. In Table 8, we report the best results of SPTAR-DPR (Section 5.2.6): a) for MS MARCO, we report the results of SPTAR-DPR with LLaMA-7B and k = 2; b) for FiQA-2018, we report the results of SPTAR-DPR with LLaMA-7B and k = 1. The SPTAR-DPR is trained on X_eval^50 + X_weak and tested on D_test. The best and worst k example pairs in Table 8 are filtered by the method proposed in Section 3.3. | 2307.08303#58 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 58 | Figure 15: The experimental results when using Claude-2 as the response quality evaluator.
As Fig. 14 shows, the interval between two adjacent scores is 1, unlike the ChatGPT-based filter, where the interval is 0.5. Thus, to obtain finer-grained scores, a larger rating scale should be applied in the prompt, as the present 5-point scale does not suffice. We leave the exploration of rating scales to future work.
A.3 WHAT ABOUT THE RESULTS ON OTHER BASE MODELS, E.G., LLAMA-2? We also report the results on LLaMA-2 in Fig. 16, which show the superiority of our method.
[Two win/tie/lose bar panels: AlpaGasus2-7B (9k) vs. Alpaca2-7B (52k), and AlpaGasus2-7B (9k) vs. Alpaca2-7B (9k-random), on the Vicuna, Koala, WizardLM, and Self-Instruct test sets.]
Figure 16: The experimental results on LLaMA-2. AlpaGasus2 and Alpaca2 denote LLaMA-2 instruction-tuned with the 9k filtered data and the 52k Alpaca data, respectively. | 2307.08701#58 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 58 | Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A., and Choi, Y. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4791–4800, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1472. URL https://aclanthology.org/P19-1472.
Turpin, M., Michael, J., Perez, E., and Bowman, S. R. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. arXiv preprint 2305.04388, 2023.
Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q. V., and Chi, E. H. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=WZH7099tgfM. | 2307.13702#58 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 59 | As shown in Table 8, the results unequivocally demonstrate that the soft prompt filter significantly enhances performance across all comparisons. Specifically, we observe a noteworthy 12.60% to 98.59% decrease in perplexity and a substantial 3.67% to 11.44% improvement on NDCG@10 in the downstream DPR model. Furthermore, our experimental findings indicate that while the utilization of in-context learning theory, complemented by limited examples, greatly enhances the quality of generated weak queries, the choice of example document-query pairs also exerts a considerable influence on text generation quality.
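As a reference for how perplexity numbers like these can be computed, here is a minimal sketch of conditional query perplexity under a causal LM. The `gpt2` checkpoint and the flat string prompt are assumptions for illustration only; the paper's setup uses LLaMA with a learned soft prompt.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")            # small stand-in for LLaMA-7B
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def query_perplexity(prompt: str, query: str) -> float:
    """PPL of `query` conditioned on `prompt`: exponentiated mean NLL of the
    query tokens only (prompt positions are masked out of the loss)."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    query_ids = tok(query, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, query_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100   # ignore the prompt in the loss
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss   # mean NLL over query tokens
    return math.exp(loss.item())

doc = "The Eiffel Tower is a wrought-iron lattice tower in Paris."
print(query_perplexity(f"Document: {doc}\nRelevant query:", " where is the eiffel tower"))
```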
[Figure 4 plot: NDCG@10 (0.22–0.26) on the y-axis vs. top-K of the weak data filter (W/O, 10, 30, 50, 70) on the x-axis, for MS MARCO and FiQA-2018.]
Figure 4: SPTAR-DPR NDCG@10 scores with different top-K values of the weak data filter. SPTAR-DPR is trained on X_eval^50 plus the filtered weak data (Section 4.1). Results are obtained with LLaMA-7B. For MS MARCO and FiQA-2018, k = 2 and k = 1, respectively. | 2307.08303#59 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 59 | Figure 16: The experimental results on LLaMA-2. AlpaGasus2 and Alpaca2 denote LLaMA-2 instruction-tuned with the 9k filtered data and the 52k Alpaca data, respectively.
A.4 CAN YOUR LLM FILTER EVALUATE THE STRONGER MODEL'S RESPONSES, E.G., FILTERING THE RESPONSES GIVEN BY GPT-4?
To answer the question, we apply our LLM filter to the GPT4LLM (Peng et al., 2023) data. According to the score distribution, we use 4.5 as the threshold and select 13721 data samples from the GPT4LLM dataset for instruction-tuning LLaMA-7B.
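The selection step itself reduces to a threshold filter over the scored data. A minimal sketch, assuming the scored examples sit in a JSON list with a `score` field (the file and field names here are hypothetical):

```python
import json

THRESHOLD = 4.5   # keep examples the LLM filter rated >= 4.5

with open("alpaca_gpt4_scored.json") as f:     # hypothetical file of scored examples
    data = json.load(f)

selected = [ex for ex in data if ex["score"] >= THRESHOLD]
print(f"kept {len(selected)} / {len(data)} examples")

with open("alpaca_gpt4_filtered.json", "w") as f:
    json.dump(selected, f, indent=2)
```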
Score Distribution(Alpaca-gpt4)
[Bar chart of Alpaca-GPT4 example counts per score bin (<3, 3, 3.5, 4, 4.5, 5); y-axis: count.]
Figure 17: The score distribution of Alpaca-GPT4 dataset. | 2307.08701#59 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 59 | Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
Ziegler, D. M., Stiennon, N., Wu, J., Brown, T. B., Radford, A., Amodei, D., Christiano, P. F., and Irving, G. Fine-tuning language models from human preferences. CoRR, abs/1909.08593, 2019. URL http://arxiv.org/abs/1909.08593. | 2307.13702#59 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 60 | 5.2.4 The Impact of Weak Data Filter Module (RQ5). We utilized different top-K values to filter the generated weak data X_weak into a filtered weak set, and tested the SPTAR-DPR model trained on X_eval^50 plus the filtered weak data on D_test, to show how much improvement we can obtain compared with the method without the weak data filter. We choose the best parameters (LLM and k) from Section 5.2.3 and Section 5.2.6, | 2307.08303#60 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 60 | Figure 17: The score distribution of Alpaca-GPT4 dataset.
[Two win/tie/lose bar panels: Alpaca-GPT4 (13k) vs. Alpaca-GPT4 (52k), and Alpaca-GPT4 (13k) vs. Alpaca-GPT4 (13k-random), on the Vicuna, Koala, WizardLM, and Self-Instruct test sets.]
Figure 18: The evaluation results on the Alpaca-GPT4 dataset.
The results presented in Fig. 18 demonstrate the superiority of our method on the Vicuna and WizardLM test sets. Even though the responses from GPT4LLM are generated by GPT-4, recognized as the most advanced LLM globally, our approach attains comparable outcomes using merely 25% of the original data. Notably, the performance of our method markedly surpasses that of randomly selected counterparts. In summary, our LLM filter exhibits promise in discerning superior responses from teacher models.
A.5 RESULTS ON OTHER RATING DIMENSIONS, E.G., HELPFULNESS?
We also use "helpfulness" as our rating dimension and find that we only need 2k data to train a base model that surpasses the base model trained with the 52k Alpaca data. The score distributions are shown in Fig. 19.
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 61 | then, we fix them in this section to study the impact of top-K only. As shown in Figure 4, on MS MARCO, the SPTAR-DPR model without the data filter gets an NDCG@10 score of 0.2319, while it gets an NDCG@10 score of 0.2580 with filter top-K = 30. On FiQA-2018, SPTAR-DPR with filter top-K = 70 gets the highest NDCG@10 score of 0.2404, while it gets an NDCG@10 score of 0.2242 without the data filter. The consistent improvements across different datasets prove the effectiveness of the weak data filter module (Section 3.5). As we did not find any pattern between top-K and the NDCG@10 metric, in practice, top-K is a hyperparameter and needs to be tuned for different datasets.
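The weak data filter itself is defined in the paper's Section 3.5, which this excerpt does not reproduce; one common instantiation of such a filter is round-trip consistency, sketched below with a toy lexical scorer standing in for a real retriever such as BM25.

```python
def score(query: str, doc: str) -> float:
    """Toy lexical scorer (term overlap) standing in for BM25 or a retriever."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def filter_weak_pairs(pairs, corpus, top_k):
    """Keep a weak (doc_id, query) pair only if doc_id ranks among the top_k
    documents retrieved for its own generated query (round-trip consistency)."""
    kept = []
    for doc_id, query in pairs:
        ranked = sorted(corpus, key=lambda d: score(query, corpus[d]), reverse=True)
        if doc_id in ranked[:top_k]:
            kept.append((doc_id, query))
    return kept

corpus = {"d1": "how to bake sourdough bread", "d2": "train schedules in tokyo"}
weak_pairs = [("d1", "baking sourdough bread at home"), ("d2", "bread recipe")]
print(filter_weak_pairs(weak_pairs, corpus, top_k=1))   # the inconsistent pair is dropped
```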
[Figure 5 plots: percentage decrease in PPL (left) and percentage improvement in NDCG@10 (right) versus training size N in {10, 30, 50}.] | 2307.08303#61 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 61 | Score Distribution(helpfulness)
[Bar chart of example counts per helpfulness-score bin (<3, 3, 3.5, 4, 4.5, 5); y-axis: count.]
Figure 19: The score distribution of helpfulness.
Evaluation Results. From Figure 20, it is evident that the models trained using our filtered Alpaca dataset outperform those trained on randomly selected datasets across all instruction test sets. Furthermore, our model outperforms the model trained on the complete Alpaca set on 3 out of 4 test sets. This underscores the significant potential of our filtering approach, especially considering that a model trained with a mere 2k data points can surpass one trained with the original 52k Alpaca dataset.
[Two win/tie/lose bar panels comparing Alpaca-7B (2k-helpfulness) against Alpaca-7B (52k) and Alpaca-7B (2k-random) on the Vicuna, Koala, WizardLM, and Self-Instruct test sets.]
Figure 20: Evaluation results regarding the "helpfulness" dimension.
B ADDITIONAL RESULTS ON DOLLY DATASET
B.1 SCORE DISTRIBUTION We show the score distribution of the Dolly dataset (rated by ChatGPT) in Fig. 21. | 2307.08701#61 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 61 | Wei, J., Wang, X., Schuurmans, D., Bosma, M., ichter, b., Xia, F., Chi, E., Le, Q. V., and Zhou, D. Chain-of-thought prompting elicits reasoning in large language models. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 24824–24837. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf.
# C. CoT Accuracy Gain Across Model Size | 2307.13702#61 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08701 | 62 | B.1 SCORE DISTRIBUTION We show the score distribution of the Dolly dataset (rated by ChatGPT) in Fig. 21.
B.2 BENCHMARK RESULTS We use the code provided by Chia et al. (2023) to conduct benchmark evaluation. For MMLU, BBH, Drop, and HumanEval, we also use 5-shot, 3-shot, 3-shot, and 0-shot settings, respectively; a sketch of this k-shot setup is given below. We show the benchmark results for Dolly and the filtered set in Table 4.
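For context, a k-shot setting simply prepends k solved demonstrations to each test question. A toy sketch of the prompt assembly follows (the exact templates in Chia et al.'s evaluation code may differ):

```python
def build_kshot_prompt(demonstrations, test_question, k):
    """Prepend k solved demonstrations to the test question -- the usual
    k-shot setup for benchmarks such as MMLU (5-shot) or BBH (3-shot)."""
    demos = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in demonstrations[:k])
    return f"{demos}\n\nQ: {test_question}\nA:"

demos = [("2 + 2 = ?", "4"), ("3 * 3 = ?", "9"), ("10 - 4 = ?", "6")]
print(build_kshot_prompt(demos, "7 + 5 = ?", k=3))
```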
Datasets   7B(3k-random)  7B(3k)  7B(15k)  13B(3k-random)  13B(3k)  13B(15k)
BBH        31.33          31.76   30.73    36.15           36.37    35.8
Drop       20.73          22.45   22.33    31.61           34.24    26.94
Humaneval  9.76           9.78    7.93     10.98           14.92    14.63
MMLU       35.01          35.83   36.25    44.39           46.92    46.13
Table 4: The benchmark results of filtering the Dolly dataset. | 2307.08701#62 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 62 | # C. CoT Accuracy Gain Across Model Size
[Figure 9 plots (3-, 4-, 5-, and 6-step traces): probability of the same answer vs. fraction of CoT paraphrased, for AQuA, ARC (Challenge), TruthfulQA, LogiQA, MMLU, ARC (Easy), OpenBookQA, and HellaSwag, with dashed baselines for the probability of the same answer by chance.]
Figure 9. Probability of producing the same answer under paraphrasing as the unmodified reasoning sample. The dotted baselines are the probability that two IID reasoning samples would come to the same answer by chance alone. | 2307.13702#62 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 63 | 5.2.5 The Impact of ð (RQ6). In this section, we present an analysis of the impact of different training sizes ð in SPTARâs soft prompt tuning module. To evaluate the impact of ð , we first conducted soft prompt tuning on ðð ðð£ðð . PPL is a direct metric to measure the impact of ð on the quality of generated weak queries. Then, we generated ðð ðððð and tested the SPTAR-DPR model trained on ðð ðð£ðð + ðð ðððð on ð·ð¡ðð ð¡ . NDCG@10 score is applied to measure the impact of ð on down- stream DR models, like DPR. As shown in Figure 5, the findings conclusively demonstrate substantial improvements when employ- ing soft prompt tuning with varying training sizes ð compared with the results obtained without soft prompt tuning. For instance, when ð = 50, PPL is decreased by | 2307.08303#63 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 63 | Table 4: The benchmark results of filtering the Dolly dataset.
The hyperparameters we select for training LLaMA-7B and LLaMA-13B are the same as Alpaca's, except for the number of training epochs. To avoid the under-training issue, we train for 10 epochs,
Score Distribution(Dolly)
[Bar chart of Dolly example counts per score bin (<3, 3, 3.5, 4, 4.5, 5); y-axis: count.]
Figure 21: The score distribution of the Dolly dataset.
instead of 3 as in Alpaca, for all the 7B models, and for 15 epochs, instead of 5 as in Alpaca, for all the 13B models.
# B.3 DOLLY-13B RESULTS
We show the Dolly-13B results. As Fig. 22 shows, our filtered Dolly dataset is better than the original Dolly dataset, since instruction tuning on it yields models with stronger instruction-following capacity (see the results on the four test sets). | 2307.08701#63 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 63 | Fig. 10 shows the accuracy with and without CoT for the tasks and models used in §3.1. Fig. 11 shows the accuracy improvement provided by CoT in the same tasks and models. For four of eight tasks (ARC (Easy), ARC (Challenge), HellaSwag, and AQuA) the model size at which there is the greatest accuracy improvement is the same as the model size at which faithfulness is greatest (see Fig. 7).
[Figure 10 plot: accuracy (%) vs. model size (parameters, log scale) for AQuA, ARC (Challenge), TruthfulQA, LogiQA, MMLU, ARC (Easy), OpenBookQA, and HellaSwag; solid lines show performance with CoT, dotted lines without CoT.]
Figure 10. Accuracy with and without CoT, on the tasks and models used in §3.1.
# A. Adding Mistakes & Paraphrasing Prompts
We present the prompts used for the adding mistakes experiment (§2.4) and the paraphrasing experiment (§2.6) in Table 6 and Table 7 respectively. As noted in §2.4, the mistakes for the adding mistakes experiment are generated using a pretrained base model, without RLHF fine-tuning. | 2307.13702#63 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08701 | 64 | [Two win/tie/lose bar panels: Dolly-13B (3k) vs. Dolly-13B (15k), and Dolly-13B (3k) vs. Dolly-13B (3k-random), on the Vicuna, Koala, WizardLM, and Self-Instruct test sets.]
Figure 22: Dolly-13B results. As the model size grows, our method still performs well.
# C DETAILS OF GPT-4 EVALUATION PROMPT
We provide the detailed form of the prompt to GPT-4 used for evaluation in Fig. 23. It is the evaluation prompt used in the original Vicuna blog [12].
[12] https://lmsys.org/blog/2023-03-30-vicuna/
System Prompt: You are a helpful and precise assistant for checking the quality of the answer.
User Prompt: [Question]
[The Start of Assistant 1's Answer] {answer_1} [The End of Assistant 1's Answer]
[The Start of Assistant 2's Answer] {answer_2} [The End of Assistant 2's Answer] | 2307.08701#64 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 64 | # B. Additional Paraphrasing Results
Fig. 9 shows the fraction of reasoning samples that arrive at the same answer as the original CoT as a greater amount of the reasoning sample is paraphrased. If steganography were an important reason that the model was coming to its conclusion, we might expect to see flat or downward-sloping lines in Fig. 9; paraphrasing that ended in the middle of the chain of thought would corrupt the encoded message, causing the model to lose the reasoning it had done so far and need to reconstruct it in the continued CoT. Instead, Fig. 9 shows a smooth upwards curve for most steps of most tasks. AQuA and LogiQA are slight exceptions for six-step samples, showing more flatness in earlier steps than in the shorter reasoning samples. However, if encoded reasoning is present here, it is not an important source of accuracy improvement (as shown in Fig. 6).
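Schematically, the measurement behind Fig. 9 can be written as the loop below; `paraphrase` and `continue_cot_and_answer` are stubs for what are really LLM calls, so this only illustrates the bookkeeping, not the paper's actual pipeline.

```python
import random

def paraphrase(text: str) -> str:
    """Stub for the paraphrasing model; a real run would call an LLM."""
    return text.replace("Therefore", "So")

def continue_cot_and_answer(question: str, partial_cot: str) -> str:
    """Stub for sampling a CoT continuation plus a final answer."""
    return random.choice(["A", "B"])

def frac_same_answer(question, cot_steps, original_answer, n_samples=20):
    """For each truncation point i, paraphrase steps[:i], let the model finish
    the reasoning, and record how often the final answer is unchanged."""
    fracs = []
    for i in range(1, len(cot_steps) + 1):
        para = paraphrase(" ".join(cot_steps[:i]))
        same = sum(
            continue_cot_and_answer(question, para) == original_answer
            for _ in range(n_samples)
        )
        fracs.append(same / n_samples)
    return fracs

steps = ["The tower is in Paris.", "Paris is in France.", "Therefore the answer is A."]
print(frac_same_answer("Where is the Eiffel Tower?", steps, "A"))
```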
[Figure 11 plot: accuracy improvement due to CoT vs. model size (parameters, log scale) for AQuA, TruthfulQA, MMLU, OpenBookQA, ARC (Challenge), LogiQA, ARC (Easy), and HellaSwag, with a baseline reference line.]
Figure 11. Accuracy improvement provided by CoT, on the tasks and models used in §3.1.
13 | 2307.13702#64 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 65 | Different from InPars [2] and Promptagator [6], which only utilizes several example document-query pairs in human-written prompts, our experimental results highlight the advantages of a little larger training size ð in soft prompt tuning, leading to better performance. This is evident in both the decreased PPL, as well as the improvement of NDCG@10 in downstream tasks as the training size ð expands.
5.2.6 The Impact of k (RQ7). In SPTAR's soft prompt augmentor module, when tagging the unlabeled documents with weak queries, k filtered example document-query pairs are utilized to instruct the LLM. In this section, we explore the impact of different k. We first chose LLaMA-7B as the LLM and did soft prompt tuning
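Concretely, the augmentor conditions the LLM on the learned soft prompt, the k filtered example pairs, and the unlabeled document. A plain-string sketch of the prompt assembly is below; in the actual method the soft prompt is a sequence of learned embeddings, so the `<soft-prompt>` placeholder (and the demonstration format) is a simplification of ours.

```python
def build_augmentor_prompt(soft_prompt, example_pairs, new_document):
    """Assemble the generation prompt: soft prompt, then k filtered
    (document, query) demonstrations, then the unlabeled document to tag."""
    demos = "\n".join(f"Document: {d}\nQuery: {q}" for d, q in example_pairs)
    return f"{soft_prompt}\n{demos}\nDocument: {new_document}\nQuery:"

examples = [("how to bake sourdough bread at home", "sourdough baking guide")]
print(build_augmentor_prompt("<soft-prompt>", examples, "train schedules in tokyo"))
```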
9 | 2307.08303#65 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 65 | We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above.
Please rate the helpfulness, relevance, accuracy, and level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.
Please first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space. In the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment.
Figure 23: The prompt for evaluation using GPT-4 as the judge.
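Since the prompt asks the judge to put the two scores alone on the first line, downstream parsing is straightforward; here is a minimal, assumption-light sketch of extracting them from the judge's reply.

```python
# Sketch: parse "score_1 score_2" from the first line of the judge's output.
def parse_judge_scores(reply: str) -> tuple[float, float]:
    first_line = reply.strip().splitlines()[0]
    score_1, score_2 = (float(tok) for tok in first_line.split()[:2])
    return score_1, score_2

assert parse_judge_scores("8 6\nAssistant 1 was more accurate ...") == (8.0, 6.0)
```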
# D TRAINING HYPERPARAMETER DETAILS
D.1 ALPACA DATASET We show the training hyperparameters and costs in Table 5.
Model Size  Data Size  # GPUs  Epoch  LR    Batch Size  Time  Cost
7B          9k         4       3      2e-5  128         14m   $4.78*
7B          52k        4       3      2e-5  128         80m   $27.31*
13B         9k         8       5      1e-5  128         1h    $40.96
13B         52k        8       5      1e-5  128         5.5h  $225.28 | 2307.08701#65 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 65 | Figure 11. Accuracy improvement provided by CoT, on the tasks and models used in §3.1.
# Few Shot Prompt for Adding Mistakes
Human: First I'm going to give you a question, and then I'll give you one sentence of reasoning that was used to help answer that question. I'd like you to give me a new version of that sentence, but with at least one mistake added.
Marla starts running around a circular track at the same time Nick starts walking around the same circular track. Marla completes 32 laps around the track per hour and Nick completes 12 laps around the track per hour. How many minutes after Marla and Nick begin moving will Marla have completed 5 more laps around the track than Nick? Choices: (A): 12 (B): 5 (C): 8 (D): 20 (E): 15
Original sentence: Marla completes 32 laps and Nick completes 12 laps per hour, so Marla completes 20 more laps per hour than Nick.
Assistant: Sentence with mistake added: Marla completes 30 laps and Nick completes 10 laps per hour, so Marla completes 15 more laps per hour than Nick. | 2307.13702#65 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 66 |
on the 50-pair evaluation set. Then, with the N filtered example document-query pairs from SPTAR's soft prompt filter module (Section 3.3), we generated the weak document-query pairs. Finally, SPTAR-DPR, trained on the 50 evaluation pairs together with the generated weak pairs, is tested on D_test to compute NDCG@10. We also ran the same experiments on Vicuna and found that the LLaMA-7B model consistently delivers better results than the Vicuna-7B model for both N = 1 and N = 2, so we only report the results on LLaMA-7B in Figure 6.
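A minimal sketch of this tagging step, under the assumption that the prompt simply concatenates the N example pairs before each unlabeled document (the `llm` callable stands in for the prompt-tuned model):

```python
# Sketch: tag unlabeled documents with weak queries via N in-prompt examples.
def build_prompt(example_pairs, document):
    shots = "\n\n".join(
        f"Document: {d}\nRelevant query: {q}" for d, q in example_pairs
    )
    return f"{shots}\n\nDocument: {document}\nRelevant query:"

def tag_documents(llm, example_pairs, unlabeled_docs):
    # Returns weak (document, query) pairs for training a dense retriever.
    return [(doc, llm(build_prompt(example_pairs, doc)).strip())
            for doc in unlabeled_docs]
```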
[Figure 6 plots: PPL and NDCG@10 for N = 1 vs. N = 2 on MS MARCO and FiQA-2018] | 2307.08303#66 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 66 | Table 5: All costs are estimated based on the prices provided by AWS. We assume the training scripts for all models are the same (e.g., training epochs, batch size on each GPU, accumulation steps, etc.).
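The cost column follows from simple GPU-hour arithmetic, assuming the footnoted $40.96/h for an 8-GPU p4de node and half that rate for a 4-GPU allocation:

```python
# Sanity check of Table 5's costs (assumed: 4 GPUs cost half the 8-GPU rate).
RATE_8GPU = 40.96            # $/hour, from the AWS footnote
RATE_4GPU = RATE_8GPU / 2

print(f"7B/9k:   ${14 / 60 * RATE_4GPU:.2f}")   # -> $4.78
print(f"7B/52k:  ${80 / 60 * RATE_4GPU:.2f}")   # -> $27.31
print(f"13B/9k:  ${1.0 * RATE_8GPU:.2f}")       # -> $40.96
print(f"13B/52k: ${5.5 * RATE_8GPU:.2f}")       # -> $225.28
```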
D.2 DOLLY DATASET We show the training hyperparameters in Table 6.
Model Size  Data Size  Epoch  LR    Batch Size
7B          3k         3      2e-5  128
7B          15k        3      2e-5  128
13B         3k         5      1e-5  128
13B         15k        5      1e-5  128
Table 6: IFT hyperparameter details. (Dolly Dataset)
# E KEYWORDS SET FOR DETAILED ANALYSIS
We use the keyword set of [Java, java, C++, c++, C#, c#, Python, python] and count the number of (instruction, input, output) tuples which contain the keyword in this set.
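A minimal sketch of this count (the field names are the standard Alpaca-style keys, assumed here):

```python
# Sketch: count tuples whose text mentions any programming-language keyword.
KEYWORDS = ["Java", "java", "C++", "c++", "C#", "c#", "Python", "python"]

def count_keyword_tuples(dataset):
    count = 0
    for example in dataset:  # each example: instruction/input/output dict
        text = " ".join(example.get(k, "") for k in ("instruction", "input", "output"))
        if any(kw in text for kw in KEYWORDS):
            count += 1
    return count
```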
13 https://aws.amazon.com/ec2/instance-types/p4/: a p4de.24xlarge (preview) node has 8 × 80GB A100 GPUs and costs $40.96/h. *We assume the training time with 8 GPUs is half of that with 4 GPUs.
# F RATED EXAMPLES IN ALPACA DATASET | 2307.08701#66 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 66 | Assistant: Sentence with mistake added: Marla completes 30 laps and Nick completes 10 laps per hour, so Marla completes 15 more laps per hour than Nick.
Human: First I'm going to give you a question, and then I'll give you one sentence of reasoning that was used to help answer that question. I'd like you to give me a new version of that sentence, but with at least one mistake added.
Cost of 3 cricket balls = cost of 2 pairs of leg pads. Cost of 3 pairs of leg pads = cost of 2 pairs of gloves. Cost of 3 pairs of gloves = cost of 2 cricket bats. If a cricket bat costs Rs 54, what is the cost of a cricket ball?
Choices: (A): 20 (B): 14 (C): 16 (D): 12 (E): 10
Original sentence: If 1 bat = Rs 54, then 2 bats = Rs 108.
Assistant: Sentence with mistake added: If 1 bat = Rs 45, then 2 bats = Rs 80.
Human: First I'm going to give you a question, and then I'll give you one sentence of reasoning that was used to help answer that question. I'd like you to give me a new version of that sentence, but with at least one mistake added.
Pro bono work is: | 2307.13702#66 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 67 | [Figure 6 plots: PPL and NDCG@10 for N = 1 vs. N = 2 on MS MARCO and FiQA-2018]
Figure 6: Evaluation of SPTAR-DPR with different N. SPTAR-DPR is trained on the 50 evaluation pairs plus the generated weak pairs and tested on D_test. Results are obtained on LLaMA-7B.
As shown in Figure 6, for MS MARCO, N = 2 achieves the best performance in terms of perplexity and NDCG@10. In contrast, for FiQA-2018, N = 1 demonstrates superior performance. This is inconsistent with our expectation that the larger N is, the better the PPL and NDCG@10 become. We attribute this inconsistency to varying dataset distributions. In most QA datasets, a document has multiple relevant queries, and each query is based on only a subset of the document, which increases uncertainty and learning complexity for the model. Consequently, these factors contribute to divergent outcomes across datasets. Thus, we recognize the need for further investigation and exploration of this matter in future studies. | 2307.08303#67 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 67 |
# F RATED EXAMPLES IN ALPACA DATASET
We include more examples rated by the response quality evaluator, i.e., ChatGPT, in this section. The examples of Score 5.0, Score 4.5, Score 4.0, Score 3.5, Score 3.0, Score 2.5, Score 2.0 are shown in Table 7, Table 8, Table 9, and Table 10, respectively.
[Instruction] Answer this true or false question: The capital of France is London. [Input] None [Response] False. The capital of France is Paris. [Score & Explanation]: 5.0. The AI assistant provided a correct and accurate response to the instruction and input given. It correctly identified that the statement "The capital of France is London" is false and provided the correct answer that the capital of France is Paris. The response was clear and concise, and there were no errors or misunderstandings in the AI assistant's interpretation of the question. Therefore, the AI assistant deserves a score of 5 for its performance in this task. | 2307.08701#67 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 67 | Pro bono work is:
Choices: (A) required by the Ethics Code. (B) encouraged by the Ethics Code. (C) prohibited by the Ethics Code. (D) not addressed by the Ethics Code.
Original sentence: Pro bono work refers to professional work done voluntarily and without payment.
Assistant: Sentence with mistake added: Pro bono work refers to professional work that is legally required to be done.
Human: First I'm going to give you a question, and then I'll give you one sentence of reasoning that was used to help answer that question. I'd like you to give me a new version of that sentence, but with at least one mistake added.
[Question]
Original sentence: [Original reasoning sentence]
Assistant: Sentence with mistake added:
Table 6. Few shot prompt for adding mistakes. Mistakes generated this way are added back into the original chain of thought, using the prompt from Table 1.
# Prompt for Paraphrasing
Human: Please rewrite the following text, conveying exactly the same information but using different wording. Text: "[Reasoning sentence to paraphrase]"
# Assistant: Rewritten text: "
Table 7. Prompt for paraphrasing. The generated paraphrased reasoning is added back into the original chain of thought, using the prompt from Table 1.
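A minimal sketch (not the authors' code) of how such an intervention is spliced back into the chain of thought: one step is replaced by its corrupted or paraphrased version, and the model continues from there.

```python
# Sketch: replace CoT step `step_idx` with a rewritten version, then let the
# model continue reasoning and answer. `llm` and `rewrite` are hypothetical
# wrappers around the model and the Table 6/7 prompts, respectively.
def intervene_and_reanswer(llm, question, cot_steps, step_idx, rewrite):
    edited = list(cot_steps[: step_idx + 1])
    edited[step_idx] = rewrite(question, cot_steps[step_idx])
    prompt = question + "\n" + " ".join(edited)
    # The model regenerates the rest of the CoT and produces a final answer.
    return llm(prompt)
```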
| 2307.13702#67 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 68 | 6 CONCLUSION AND FUTURE WORK In this paper, we introduce the Soft Prompt Tuning for Augmenting DR (SPTAR) framework as a solution to the challenge of limited domain-specific training data in DR tasks. Our approach leverages soft prompt tuning to optimize prompts using a limited ground truth dataset. By prompting the LLMs with these optimized prompts, we generate weak queries for unlabeled documents, resulting in an abundant collection of weak document-query pairs for training domain-specific dense retrievers. To further enhance the quality of the generated weak tagged queries, we incorporate a soft prompt filter that selects high-quality example document-query pairs in the prompt. The effectiveness of our proposed approach is validated through comprehensive experiments. This work represents an initial step toward a promising research direction. Moving forward, we plan to assess the generalizability of SPTAR by evaluating it on more datasets. Additionally, investigating the feasibility of learning a multi-task soft prompt tuning module to enhance efficiency is another direction to be explored.
REFERENCES [1] Adam L. Berger, Rich Caruana, David Cohn, Dayne Freitag, and Vibhu O. Mittal. 2000. Bridging the lexical chasm: statistical approaches to answer-finding. In SIGIR. ACM, 192–199. | 2307.08303#68 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 68 | [Instruction] Translate the phrase "Bonne chance" into English [Input] None [Response] Good luck. [Score & Explanation]: 5.0. The AI assistant provided a correct and accurate response to the given instruction and input. "Bonne chance" is a French phrase that translates to "Good luck" in English, and the AI assistant provided the correct translation. Therefore, the assistant deserves a score of 5 for its accuracy.
[Instruction] Complete the following sentence given the context: [Input] My grandmother always said, "An apple a day [Response] Keeps doctor away." [Score & Explanation]: 5.0. The AI assistant provided a perfect response to the given input and instruction. The response accurately completed the sentence with the correct phrase that follows "An apple a day". Therefore, the AI assistant deserves a score of 5 for its accuracy.
Table 7: The score=5.0 examples in Alpaca dataset.
| 2307.08701#68 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 69 | [2] Luiz Henrique Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Frassetto Nogueira. 2022. InPars: Unsupervised Dataset Generation for Information Retrieval. In SIGIR. ACM, 2387–2392.
[3] Leonid Boytsov, Preksha Patel, Vivek Sourabh, Riddhi Nisar, Sayani Kundu, Ramya Ramanathan, and Eric Nyberg. 2023. InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers. CoRR abs/2301.02998 (2023). | 2307.08303#69 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 69 | Table 7: The score=5.0 examples in Alpaca dataset.
[Instruction] Suggest ways to reduce plastic waste in everyday life. [Input] None [Response] One way to reduce plastic waste in everyday life is to make the switch to reusable items. This includes bringing a reusable water bottle instead of buying bottled water, investing in a reusable coffee cup, bringing your own bags when shopping, and using reusable containers for storing and transporting food. You can also reduce the amount of plastic waste you create by avoiding or limiting the use of take-out or single-use containers, utensils, and straws. Additionally, try to find sustainable alternatives to plastic items when shopping and avoid purchasing items wrapped in plastic whenever possible. Finally, make sure you recycle any plastic items you do have. [Score & Explanation]: 4.5. The response provided by the AI assistant is highly accurate and relevant to the given instruction. It suggests various ways to reduce plastic waste in everyday life, including using reusable items, avoiding single-use containers and utensils, finding sustainable alternatives, and recycling. The only potential improvement could be providing more specific examples of sustainable alternatives to plastic items. Overall, the response is informative and helpful in addressing the given instruction. | 2307.08701#69 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 70 | [4] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. (2020).
[5] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. https://lmsys.org/blog/2023-03-30-vicuna/ | 2307.08303#70 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 70 | [Instruction] Generate a list of non-alcoholic drinks. [Input] None [Response] - Fruit juices - Tea - Smoothies - Iced coffee - Milkshakes - Fizzy drinks - Hot chocolate - Lemonades - Kombucha - Energy drinks - Milkshake - Coconut water - Ginger beer - Ice tea - Sparkling water - Aloe vera drink - Vegetable juice [Score & Explanation]: 4.5. The AI assistant provided a comprehensive list of non-alcoholic drinks that are commonly consumed. The list includes a variety of options such as fruit juices, tea, smoothies, and fizzy drinks. However, there are some duplicates in the list such as milkshakes and iced coffee. Additionally, some of the options may not be as commonly consumed as others, such as kombucha and aloe vera drink. Overall, the response is accurate and provides a good range of options for non-alcoholic drinks.
Table 8: The score=4.5 examples in Alpaca dataset.
| 2307.08701#70 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 71 | [6] Zhuyun Dai, Vincent Y Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B Hall, and Ming-Wei Chang. 2022. Promptagator: Few-shot dense retrieval from 8 examples. arXiv preprint arXiv:2209.11755 (2022).
[7] Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2022. Precise Zero-Shot Dense Retrieval without Relevance Labels. CoRR abs/2212.10496 (2022).
[8] Zhiqi Huang, Hansi Zeng, Hamed Zamani, and James Allan. 2023. Soft Prompt Decoding for Multilingual Dense Retrieval. CoRR abs/2305.09025 (2023).
[9] Vitor Jeronymo, Luiz Henrique Bonifacio, Hugo Abonizio, Marzieh Fadaee, Roberto de Alencar Lotufo, Jakub Zavrel, and Rodrigo Frassetto Nogueira. 2023. InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval. CoRR abs/2301.01820 (2023). | 2307.08303#71 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 71 | Table 8: The score=4.5 examples in Alpaca dataset.
[Instruction] Compare and contrast commercial banks and investment banks. [Input] None [Response] Commercial banks are financial institutions that provide services to their customers, such as loans, deposits, and investment management. Investment banks, on the other hand, are a type of financial institution that specializes in the trading of securities, such as stocks and bonds. The main difference between the two is that commercial banks provide services to customers, while investment banks provide services to governments, corporations, and wealthy individuals. Commercial banks are also subject to more regulations than investment banks, as they offer deposit accounts to customers. Investment banks, however, do not accept deposits from the general public and are subject to fewer regulations. Both types of banks are important for the global economy, but their roles and services are different. [Score & Explanation]: 4.0. The response provided by the AI assistant is mostly accurate and relevant to the given instruction and input. It clearly explains the differences between commercial banks and investment banks, highlighting their respective roles and services. However, it could have provided more specific examples of the services offered by each type of bank, and could have elaborated further on the regulations that apply to them. Overall, the response is informative and helpful, but could benefit from more detail. | 2307.08701#71 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 72 | [10] Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906 (2020).
[11] Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In SIGIR. ACM, 39–48. [12] Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In EMNLP. Association for Computational Linguistics, 3045–3059.
[13] Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing Continuous Prompts for Generation. In ACL/IJCNLP. Association for Computational Linguistics, 4582–4597. | 2307.08303#72 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 72 | [Instruction] Determine the application of statistics in the field of economics [Input] None [Response] Statistics is widely used in economics to analyze and predict markets, examine the performance of companies, and evaluate economic policies. Statistics is used to identify trends and correlations and to measure variables such as prices and wages. Statistical models are also used to predict economic growth and changes in interest rates and to test the effects of regulation and taxation. Statistical tools are also used to estimate the impacts of policy changes, assess the effectiveness of government programs and identify the root causes of problems. [Score & Explanation]: 4.0. The AI assistant provided a fairly accurate response to the instruction. It correctly identified the various applications of statistics in the field of economics, including market analysis, company performance evaluation, policy evaluation, and prediction of economic growth and interest rates. However, the response could have been more detailed and provided specific examples of statistical methods used in economics. Overall, the response was informative and provided a good overview of the topic.
Table 9: The triplets (instruction, input, response) that obtain Score 4.0.
[Instruction] Design a company logo. [Input] None [Response] <nooutput> [Score & Explanation]: 2.0. The AI assistant did not provide any response to the given instruction and input. Therefore, it cannot be evaluated for accuracy. | 2307.08701#72 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
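The AlpaGasus summary in the record above describes the selection strategy only in prose: a strong LLM grades each (instruction, input, response) triplet and only high-scoring triplets are kept for finetuning. A minimal sketch of that filter, assuming the OpenAI Python client, a grading prompt modeled loosely on the "[Score & Explanation]" pattern visible in these chunks, and a 4.5 accuracy threshold (the threshold value is an assumption here):

```python
# LLM-based quality filter: score each triplet, keep only high scorers.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GRADER_TEMPLATE = (
    "Rate the accuracy of the AI assistant's response to the instruction "
    "on a scale of 0 to 5, then briefly explain.\n"
    "[Instruction] {instruction}\n[Input] {inp}\n[Response] {response}\n"
    "[Score & Explanation]:"
)

def score_triplet(instruction: str, inp: str, response: str) -> float:
    """Ask the LLM filter (ChatGPT here) for a score such as '4.5. ...'."""
    prompt = GRADER_TEMPLATE.format(instruction=instruction, inp=inp, response=response)
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    match = re.match(r"\s*([0-5](?:\.\d+)?)", reply.choices[0].message.content)
    return float(match.group(1)) if match else 0.0

def select_high_quality(triplets, threshold=4.5):
    """Keep only (instruction, input, response) triplets clearing the threshold."""
    return [t for t in triplets if score_triplet(*t) >= threshold]
```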
[14] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel J. Orr, Lucia Zheng, Mert Yüksekgönül, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri S. Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic Evaluation of Language Models. CoRR abs/2211.09110 (2022). | 2307.08303#73 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
[Instruction] Classify the item as either animal or vegetable. [Input] Banana [Response] Animal: No, it's a vegetable. [Score & Explanation]: 2.0. The AI assistant's response is incorrect. A banana is a fruit, which is a type of plant-based food, making it a vegetable. However, the AI assistant classified it as an animal, which is completely inaccurate. Therefore, the score is low.
[Instruction] Design a poster to advertise a product. [Input] None [Response] No output required. [Score & Explanation]: 2.5. The AI assistant did not provide any response to the given input, which makes it difficult to evaluate its performance accurately. However, it is possible that the AI assistant was designed to only respond to textual input and not visual input, which is why it did not provide a response. Therefore, I have given it a score of 2.5, which is the average score between a completely inaccurate response and a completely accurate response.
Table 10: The score=2.0 and score=2.5 examples in Alpaca dataset.
G RATED EXAMPLES IN DOLLY DATASET | 2307.08701#73 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 74 | [15] Jimmy Lin, Matt Crane, Andrew Trotman, Jamie Callan, Ishan Chattopadhyaya, John Foley, Grant Ingersoll, Craig MacDonald, and Sebastiano Vigna. 2016. Toward Reproducible Baselines: The Open-Source IR Reproducibility Challenge. In ECIR (Lecture Notes in Computer Science, Vol. 9626). Springer, 408–420.
[16] Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-Shot Listwise Document Reranking with a Large Language Model. arXiv preprint arXiv:2305.02156 (2023).
[17] Macedo Maia, Siegfried Handschuh, André Freitas, Brian Davis, Ross McDermott, Manel Zarrouk, and Alexandra Balahur. 2018. WWW'18 Open Challenge: Financial Opinion Mining and Question Answering. In WWW. ACM, 1941–1942. [18] Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, and Sayak Paul. 2022. PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods. https://github.com/huggingface/peft. | 2307.08303#74 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 74 | Table 10: The score=2.0 and score=2.5 examples in Alpaca dataset.
G RATED EXAMPLES IN DOLLY DATASET
[Instruction] What is Jenkins? [Context] None [Response] Jenkins is an open-source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. [Score & Explanation]: 5.0. The response accurately defines Jenkins as an open source au- tomation server that helps automate software development processes related to building, testing, and deploying. It also mentions that Jenkins facilitates continuous integration and continuous delivery, which are key aspects of modern software development practices. Overall, the response is concise, clear, and directly addresses the given instruction. Therefore, I have given it a score of 5.
[Instruction] Who was the first president of the United States of America? [Context] None [Response] George Washington [Score & Explanation]: 5.0. The AI assistant provided the correct and accurate response to the given context and instruction. The question asked for the first president of the United States of America, and the AI assistant correctly identified George Washington as the answer. Therefore, the AI assistant deserves a score of 5 for its accuracy. | 2307.08701#74 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
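The same summary notes that the finetuned models are "evaluated by GPT-4 on multiple test sets". A hedged sketch of such an LLM-as-judge comparison follows; the judging prompt and 1-10 scale are illustrative assumptions, not the paper's exact template.

```python
# Pairwise LLM-as-judge: GPT-4 scores two models' answers to one instruction.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def judge(instruction: str, answer_a: str, answer_b: str) -> str:
    prompt = (
        "Score the two assistant answers to the instruction on a 1-10 scale.\n"
        "Reply with the two scores on the first line as: score_a score_b\n\n"
        f"[Instruction] {instruction}\n[Answer A] {answer_a}\n[Answer B] {answer_b}"
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return reply.choices[0].message.content.splitlines()[0]

# LLM judges are position-biased, so each pair would typically be judged twice
# with the answer order swapped and the two verdicts combined.
```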
2307.08303 | 75 | [19] Jesse Mu, Xiang Lisa Li, and Noah D. Goodman. 2023. Learning to Compress Prompts with Gist Tokens. CoRR abs/2304.08467 (2023).
[20] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. choice 2640 (2016), 660.
[21] Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. arXiv preprint arXiv:2003.06713 (2020). [22] Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True Few-Shot Learning
with Language Models. In NeurIPS. 11054–11070.
[23] Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. 2023. Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting. CoRR abs/2306.17563 (2023). | 2307.08303#75 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
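The SPTAR summary above also mentions prompting the tuned LLM to tag unlabeled documents with weak queries and filtering the resulting pairs. A hedged sketch of both stages, reusing the prompt-tuned model from the earlier sketch; the round-trip BM25 consistency check is an illustrative stand-in in the spirit of LLM-augmentation work such as Promptagator (arXiv:2209.11755, listed in this record's references), not necessarily the paper's exact filter.

```python
# Weak-query generation plus a round-trip consistency filter.
from rank_bm25 import BM25Okapi  # assumed BM25 implementation

def generate_weak_query(model, tokenizer, document: str) -> str:
    """Sample a weak query from the soft-prompt-tuned LLM."""
    batch = tokenizer(f"Document: {document}\nQuery:", return_tensors="pt")
    out = model.generate(**batch, max_new_tokens=32, do_sample=True, top_p=0.9)
    text = tokenizer.decode(out[0], skip_special_tokens=True)
    return text.split("Query:")[-1].strip()

def consistency_filter(pairs, corpus, k=3):
    """pairs: (doc_index, weak_query). Keep a pair only if BM25 retrieves the
    source document in the top-k results for its generated query."""
    bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
    kept = []
    for doc_idx, query in pairs:
        scores = bm25.get_scores(query.lower().split())
        top_k = sorted(range(len(corpus)), key=lambda i: -scores[i])[:k]
        if doc_idx in top_k:
            kept.append((doc_idx, query))
    return kept
```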
[Instruction] What country is to the north of the United States? [Context] None [Response] Canada is America's northern neighbor. [Score & Explanation]: 5.0. The response is accurate and directly answers the question asked in the instruction. It provides the correct country name and also uses the appropriate terminology to describe the relationship between the United States and Canada as neighboring countries. Therefore, the AI assistant has performed well in this context.
Table 11: The score=5.0 examples in Dolly dataset. | 2307.08701#75 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 76 | [24] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21, 1 (2020), 5485–5551.
[25] Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval 3, 4 (2009), 333–389.
[26] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer. 2022. Improving passage retrieval with zero-shot question generation. arXiv preprint arXiv:2204.07496 (2022).
[27] Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2021. Colbertv2: Effective and efficient retrieval via lightweight late interaction. arXiv preprint arXiv:2112.01488 (2021). | 2307.08303#76 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 76 | [Instruction] Help me with spending a day in New Delhi. I like architecture, culture, and am a foodie. [Context] None [Response] North India has a lot to offer for an aesthete, epicure, and a xenophile. Day 1 arrive in Delhi as many flights arrive early in the morning, settle into a hotel of your choice. Today travel around Delhi especially old Delhi and start your trip by visiting Red Fort which is an historical Mogul Fort commissioned by Mogul Emperor Shah Jahan in 1638. The fort is steeped in history and has a beautiful combination of Persian architecture with Indian influence. After that have lunch in Chandi Chowk where you will find incredible street food and especially Karim's which specializes in Mughlai food. Chandi Chowk is full of verve and energy and you will experience a slice of Indian culture. After the delicious lunch and walking through the narrow streets of Chandi Chowk, it is time to visit Qutb Minar a Unesco World Heritage Site which is considered the tallest minaret in the world at 240 ft constructed out of sandstone in 1192. Close by you can also visit Jamal Kalami mosque and relax | 2307.08701#76 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 77 | [28] Teven Le Scao and Alexander M Rush. 2021. How many data points is a prompt worth? arXiv preprint arXiv:2103.08493 (2021).
[29] Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. arXiv preprint arXiv:2304.09542 (2023).
[30] Weng Lam Tam, Xiao Liu, Kaixuan Ji, Lilong Xue, Xingjian Zhang, Yuxiao Dong, Jiahua Liu, Maodi Hu, and Jie Tang. 2022. Parameter-efficient prompt tuning makes generalized and calibrated neural text retrievers. arXiv preprint arXiv:2207.07087 (2022).
[31] Zhengyang Tang, Benyou Wang, and Ting Yao. 2022. DPTDR: Deep Prompt Tuning for Dense Passage Retrieval. In COLING. International Committee on Computational Linguistics, 1193–1202. | 2307.08303#77 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08701 | 77 | is considered the tallest minaret in the world at 240 ft constructed out of sandstone in 1192. Close by you can also visit Jamal Kalami mosque and relax at the nearby 200 acre park. In the evening you can travel to visit the Lotus Temple in Delhi which is a Bahá'í House of Worship and architected by Fariborz Sahba in 1976. For dinner, it is time to indulge yourself at Oberoi Delhi at Dhilli under the mentorship of Michelin-starred Chef Vineet Bhatia MBE and finish your night at the rooftop bar Cirrus 9 with a smooth cocktail. [Score & Explanation]: 4.5. The response provides a comprehensive itinerary for spending a day in New Delhi, catering to the interests of the user who likes architecture, culture, and food. The assistant suggests visiting historical sites such as Red Fort and Qutb Minar, experiencing the culture and food of Chandi Chowk, and visiting the Lotus Temple. The assistant also recommends dining at a Michelin-starred restaurant and finishing the night at a rooftop bar. The response | 2307.08701#77 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08303 | 78 | [32] Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663 (2021).
[33] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).
[34] Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research 9, 11 (2008).
[35] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. (2019), 3261–3275. | 2307.08303#78 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
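The summary repeated in these SPTAR records reduces to a simple operation: queries and documents are embedded and relevance is a vector similarity. A minimal runnable sketch with an assumed MiniLM sentence encoder and mean pooling (the paper's exact retriever architecture is not specified here):

```python
# Dual-encoder scoring: embed texts, rank documents by dot product.
import torch
from transformers import AutoModel, AutoTokenizer

enc_name = "sentence-transformers/all-MiniLM-L6-v2"  # assumed encoder checkpoint
tok = AutoTokenizer.from_pretrained(enc_name)
enc = AutoModel.from_pretrained(enc_name)

def embed(texts):
    """Mean-pool the last hidden states into one vector per text."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

queries = embed(["what is dense retrieval"])
docs = embed(["Dense retrieval embeds queries and documents into vectors.",
              "BM25 is a lexical, bag-of-words ranking function."])
print(queries @ docs.T)  # higher dot product = more relevant; doc 0 should win
```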
2307.08303 | 79 | [36] Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers. (2020).
[37] Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogério Feris, Huan Sun, and Yoon Kim. 2023. Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning. In ICLR. OpenReview.net.
[38] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652 (2021).
[39] Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808 (2020). | 2307.08303#79 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
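Once enough weak (document, query) pairs exist, the summary's final step is to "train task-specific dense retrievers". The standard dual-encoder objective for this, used by contrastive retrievers such as ANCE ([39] above) and DPR, is cross-entropy over in-batch negatives; the sketch below assumes that recipe, with illustrative toy inputs.

```python
# In-batch-negatives contrastive loss for dense retriever training.
import torch
import torch.nn.functional as F

def in_batch_negative_loss(q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
    """q_emb, d_emb: (B, H) embeddings of aligned (query, document) pairs.
    Row i's positive is document i; the other B-1 documents act as negatives."""
    sim = q_emb @ d_emb.T                   # (B, B) similarity matrix
    labels = torch.arange(sim.size(0))      # positives sit on the diagonal
    return F.cross_entropy(sim, labels)

# Toy check: well-separated pairs should give a near-zero loss.
q = torch.eye(4)
d = torch.eye(4) * 10.0  # scaling sharpens the softmax
print(in_batch_negative_loss(q, d).item())  # ~0.0
```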
2307.08701 | 79 | [Instruction] Why is the sky blue? [Context] None [Response] All the light in our daytime sky comes from the sun, and our sun produces light in many different colors. To understand this, just think of a prism. White light enters, and a rainbow of colors exits. Our sun is like that. Its light may seem uniform but it is actually emitting many different colors. Our sky though is not a mix of blue and red and green because the earth's atmosphere is sandwiched between the sun and us, getting in the way. Atmospheric gases and particles scatter the incoming colors, bouncing them in all different directions. The amount of scattering increases as the wavelength of light decreases. Red light which has a long wavelength is scattered much less than blue light which has a short wavelength. As blue is scattered more, it fills our daytime sky. As the sun traverses the sky and moves low on the horizon, the light that reaches our eyes must actually travel further through the atmosphere. This has the effect of completely bouncing away all the blue light. We are left with the reds and yellows and oranges in wonderful mixture, making sunrises and sunsets so beautiful. [Score & Explanation]: | 2307.08701#79 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 80 | We are left with the reds and yellows and oranges in wonderful mixture, making sunrises and sunsets so beautiful. [Score & Explanation]: 4.5. The response accurately explains why the sky appears blue during the day and why it appears red during sunrise and sunset. It also provides a clear explanation of how atmospheric gases and particles scatter light, causing blue light to be scattered more than other colors. The only potential improvement could be to provide a more concise explanation, as the response is somewhat lengthy. Overall, the response is highly accurate and informative. | 2307.08701#80 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 81 | Table 12: The score=4.5 examples in Dolly dataset.
[Instruction] What is a bicameral legislature? [Context] None [Response] A legislative body comprised of two separate chambers. [Score & Explanation]: 4.0. The response provided by the AI assistant is accurate and directly answers the question asked in the instruction. The definition provided is concise and clear, and demonstrates a good understanding of the concept of bicameral legislature. However, the response could have been improved by providing additional information or examples to further clarify the concept.
[Instruction] When is tax day? [Context] None [Response] Tax day usually refers to the day in the United States when tax forms must be submitted and paid for individual and businesses. It is typically April 15 or later. [Score & Explanation]: 4.0. The response accurately defines what tax day is and provides the typical date for it in the United States. However, it could have been more specific by mentioning that the deadline for tax day can vary depending on certain circumstances, such as weekends or holidays. | 2307.08701#81 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 82 | [Instruction] Write a bullet point list of at least 7 crops that are grown in California. [Context] None [Response] - Artichokes - Almonds - Plums - Cherries - Garlic - Apples - Pears [Score & Explanation]: 4.0. The AI assistant provided a list of 7 crops that are grown in California, which is accurate and relevant to the given instruction. However, there are many other crops that are also grown in California, and the assistant could have provided a more comprehensive list. Therefore, the response is not perfect but still satisfactory.
Table 13: The score=4.0 examples in Dolly dataset.
[Instruction] How to make a cup of spiced chai? [Context] None [Response] Among all ingredients, the most important element is cardamom. Of course, you cannot skimp on loose black tea leaves too! [Score & Explanation]: 2.5. The response mentions two important ingredients for making spiced chai - cardamom and loose black tea leaves. However, it does not provide any instructions on how to actually make the chai. The response is partially accurate as it addresses the context of making spiced chai but does not fully answer the instruction of providing the steps to make it. | 2307.08701#82 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 83 | [Instruction] Is Seattle a nice place to live? [Context] Seattle came in at No. 39 in the 2023 World's Best Cities Report, a comprehensive annual study by Resonance Consultancy. Seattle is a seaport city on the West Coast of the United States. The culture of Seattle is heavily defined by its significant musical history. Between 1918 and 1951, nearly 24 jazz nightclubs existed along Jackson Street, from the current Chinatown/International District to the Central District. The jazz scene nurtured the early careers of Ernestine Anderson, Ray Charles, Quincy Jones, and others. The city is also the origin of rock acts such as Foo Fighters, Heart, and Jimi Hendrix, as well as the grunge subgenre of rock and its pioneering bands Alice in Chains, Pearl Jam, Nirvana, and Soundgarden. The city itself is hilly, though not uniformly so. Like Rome, the city is said to lie on seven hills; the lists vary but typically include Capitol Hill, First Hill, West Seattle, Beacon Hill, Queen Anne, Magnolia, and the former Denny Hill. Due to its location in the Pacific Ring of Fire, Seattle is in a major earthquake zone. Temperature extremes are | 2307.08701#83 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 84 | Queen Anne, Magnolia, and the former Denny Hill. Due to its location in the Pacific Ring of Fire, Seattle is in a major earthquake zone. Temperature extremes are moderated by the adjacent Puget Sound, greater Pacific Ocean, and Lake Washington. Thus extreme heat waves are rare in the Seattle area, as are very cold temperatures (below about 15 F). The Seattle area is the cloudiest region of the United States, due in part to frequent storms and lows moving in from the adjacent Pacific Ocean. With many more "rain days" than other major American cities, Seattle has a well-earned reputation for frequent rain. Seattle's economy is driven by a mix of older industrial companies and "new economy" internet and technology companies, as well as service, design, and clean technology companies. [Response] Yes. [Score & Explanation]: 2.5. The AI assistant's response is not accurate enough. While the context provides information about Seattle's culture, geography, climate, and economy, the instruction asks whether Seattle is a nice place to live. The AI assistant's response is too brief and does not provide any supporting evidence or explanation to justify its answer. Therefore, the response is not helpful or informative enough for the user. | 2307.08701#84 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 85 | [Instruction] Best workout in the world. [Context] None [Response] Walk. Just walk as much as you can. [Score & Explanation]: 2.0. The response provided by the AI assistant is not very accurate. While walking is a good exercise, it is not necessarily the "best workout in the world" as the instruction suggests. The response also lacks specificity and does not provide any additional information or guidance on how to make the most out of the workout. Therefore, I would rate the accuracy of the response as 2 out of 5.
Table 14: The score=2.0 and score=2.5 examples in the Dolly dataset.
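The two low-scoring examples above illustrate the judge's output format. To make the rating-and-filtering recipe concrete, below is a minimal Python sketch of scoring (instruction, context, response) triplets with an LLM judge and keeping only high-scoring ones. The prompt wording, model choice, naive score-parsing regex, and the 4.5 threshold are illustrative assumptions, not the exact implementation.

```python
import re
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

# Illustrative rating prompt; the paper's exact wording may differ.
RATING_PROMPT = """We would like your feedback on the performance of an AI assistant.
[Instruction] {instruction}
[Context] {context}
[Response] {response}
Please rate the accuracy of the response on a scale of 0 to 5 and briefly explain.
Output format: [Score & Explanation]: <score>. <explanation>"""

def rate_triplet(instruction: str, context: str, response: str) -> float:
    """Ask an LLM judge for a 0-5 accuracy score of one training triplet."""
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": RATING_PROMPT.format(
            instruction=instruction, context=context or "None", response=response)}],
        temperature=0.0,
    )
    text = reply.choices[0].message.content
    # Naive parsing: take the first number in the judge's reply as the score.
    match = re.search(r"(\d+(?:\.\d+)?)", text)
    return float(match.group(1)) if match else 0.0

def filter_dataset(samples: list[dict], threshold: float = 4.5) -> list[dict]:
    """Keep only samples whose judge score reaches the threshold."""
    return [s for s in samples
            if rate_triplet(s["instruction"], s.get("input", ""), s["output"]) >= threshold]
```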
# H ANALYSIS
H.1 ANALYSIS ON WIZARDLM TEST SET
We conduct a fine-grained evaluation of ALPAGASUS on each skill/category in the WizardLM and Vicuna test sets, whose samples are split into a list of skill sets/categories, thus facilitating detailed analyses of the capabilities achieved by IFT. | 2307.08701#85 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 86 | ALPAGASUS-7B(9k) vs. ALPACA-7B(52k). We compare these two 7B models on the WizardLM test set and report the results in Fig. 26. Our ALPAGASUS achieves performance better than or on par with ALPACA on 22/29 skills but shows no advantage on the remaining 7 skills, such as coding (e.g., code generation). To investigate the reasons, we notice that the coding categories include
"python", "Java", "C++", and "C#", which indicates that we can identify training samples related to coding skills based on these keywords (Appendix E). We find that our data selection/filtering, without specifying the proportions of skill categories, leads to a much higher filtering ratio of coding-related data (718→85 = 82.25%). Hence, the resulting coding skill is weaker than other skills. This indicates the importance of keeping the training data diverse and balanced across different categories in IFT.
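As an illustration of this keyword-based accounting, the sketch below tags coding-related samples and measures how much of them the quality filter removes. The keyword list, the field names ("instruction", "input"), and the helper names are assumptions for illustration, not the paper's exact script.

```python
# Tag coding-related samples by keyword and measure how aggressively the
# quality filter removes them (a minimal sketch under assumed field names).
CODING_KEYWORDS = ("python", "java", "c++", "c#")  # assumed keyword list

def is_coding(sample: dict) -> bool:
    """Heuristically tag a sample as coding-related if any keyword appears."""
    text = (sample.get("instruction", "") + " " + sample.get("input", "")).lower()
    return any(kw in text for kw in CODING_KEYWORDS)

def coding_filtering_ratio(full_data: list[dict], kept_data: list[dict]) -> float:
    """Fraction of coding-related samples removed by the quality filter."""
    before = sum(map(is_coding, full_data))
    after = sum(map(is_coding, kept_data))
    return 1.0 - after / before if before else 0.0
```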
H.2 ANALYSIS ON VICUNA TEST SET | 2307.08701#86 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 87 | H.2 ANALYSIS ON VICUNA TEST SET
[Figure 24 graphic: per-category win/tie/lose bars (Writing, Math, Coding, Counterfactual, Fermi, Commonsense, Roleplay, Knowledge, Generic); legend: AlpaGasus-9k wins / Tie / Alpaca-52k wins.]
Figure 24: Fine-grained evaluation of ALPAGASUS-13B-9k vs. ALPACA-13B-52k on categories of the Vicuna test set.
Fig. 24 presents a detailed analysis on the Vicuna test set. ALPAGASUS-7B outperforms ALPACA-7B in the majority of the categories, including Counterfactual, Roleplay, Knowledge, and Generic. Notably, the conclusion still holds when the base model scales up (see the right part of Fig. 24).
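The per-category breakdowns in Fig. 24 (and in the WizardLM figures below) are tallies of pairwise judge verdicts. Below is a minimal sketch of such a tally, assuming a list of (category, outcome) verdicts from an LLM judge; the data layout is an assumption for illustration.

```python
from collections import Counter, defaultdict

def tally_by_category(judgments):
    """Aggregate pairwise judge verdicts into per-category win/tie/lose counts.

    judgments: iterable of (category, outcome) pairs, where outcome is one of
    'win', 'tie', or 'lose' for the candidate model (assumed layout).
    """
    per_category = defaultdict(Counter)
    for category, outcome in judgments:
        per_category[category][outcome] += 1
    return per_category

# Example: tally_by_category([("Roleplay", "win"), ("Math", "lose"), ("Math", "tie")])
```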
# I DETAILED ANALYSIS ON THE WIZARDLM TESTSET | 2307.08701#87 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 88 | # I DETAILED ANALYSIS ON THE WIZARDLM TESTSET
In Fig. 27, Fig. 28, and Fig. 29, we compare ALPAGASUS with text-Davinci-003, ChatGPT, and Claude, respectively. The results show that ALPAGASUS-13B can achieve ≥91% of the capacity of its "teacher" model, text-Davinci-003 (all the responses in the ALPACA-52k dataset are generated by text-Davinci-003, so we call it the "teacher" LLM). The results also show that our model achieves strong performance on tasks such as Writing, Roleplay, Toxicity, and Art, while its coding and math capabilities still need improvement when compared with stronger LLMs.
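For reference, below is a minimal sketch of the kind of per-skill "capacity" metric reported here, assuming each model's responses receive numeric judge scores per skill; the score layout, the ratio-of-summed-scores definition, and the helper names are illustrative assumptions.

```python
def capacity_ratio(model_scores: dict, reference_scores: dict) -> dict:
    """Per-skill capacity of a model relative to a reference model,
    computed as the ratio of summed judge scores (assumed definition)."""
    return {skill: sum(model_scores[skill]) / sum(reference_scores[skill])
            for skill in model_scores}

def average_capacity(ratios: dict) -> float:
    """Mean capacity across all evaluated skills (e.g., the 29 WizardLM skills)."""
    return sum(ratios.values()) / len(ratios)
```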
[Figure 25 graphic: per-skill win/tie/lose bars over the 29 WizardLM skills (Math, Code Generation, Writing, Computer Science, Reasoning, Complex Format, Commonsense, Counterfactual, RolePlay, Biology, Technology, Ethics, Sport, History, Academic Writing, Chemistry, Code Debug, Multilingual, TruthfulQA, Law, Medicine, Literature, Entertainment, Art, Music, Toxicity, Physics, Philosophy, Economy); legend: AlpaGasus-9k wins / Tie / Alpaca-52k wins.]
Figure 25: Fine-grained evaluation of ALPAGASUS-9k(13B) vs. ALPACA-52k(13B) on categories of the WizardLM test set. | 2307.08701#88 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.08701 | 89 | [Figure 26 graphic: per-skill win/tie/lose bars over the WizardLM skills (Math, Writing, Computer Science, Multilingual, Roleplay, Technology, Ethics, Sport, Law, Economy, History, Academic Writing, Chemistry, Code Generation, Reasoning, Code Debug, Commonsense, Literature, Philosophy, Complex Format, Counterfactual, TruthfulQA, Medicine, Entertainment, Music, Toxicity, Physics); legend: AlpaGasus-9k wins / Tie / Alpaca-52k wins.]
Figure 26: Fine-grained evaluation of ALPAGASUS-9k(7B) vs. ALPACA-52k(7B) on categories of the WizardLM test set.
[Figure 27 graphic: "WizardLM Test Set (Skills Details), Alpaca-13B-9k vs. ChatGPT"; per-skill capacity bars on a 0%–120% y-axis.]
Figure 27: Comparison with ChatGPT. ALPAGASUS-13B achieves on average 78.26% of ChatGPT's capacity across all 29 skills.
[Figure graphic: "WizardLM Test Set (Skills Details), Alpaca-13B-9k vs. Claude".] | 2307.08701#89 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |