id (stringlengths 12–15) | title (stringlengths 8–162) | content (stringlengths 1–17.6k) | prechunk_id (stringlengths 0–15) | postchunk_id (stringlengths 0–15) | arxiv_id (stringlengths 10–10) | references (listlengths 1–1)
---|---|---|---|---|---|---|
2308.12284#22 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure 6 panels report Pearson coefficients of -0.368 (left), -0.188 (middle), and 0.298 (right); axes: 0-shot downstream accuracy vs. Negative PPL (Web Snapshot) and Negative PPL (Instruct+Answers)] Figure 6: Correlation between (left): negative Instruct+Answers perplexity and negative web snapshot perplexity, (middle): Downstream accuracy and negative web snapshot perplexity, (right): Downstream accuracy and negative Instruct+Answers perplexity. Each point is one training configuration (1.3B OPT model, 40B tokens), with the only change being the data selection method and pretraining seed. Web snapshot perplexity is slightly negatively correlated with stronger indicators of LM ability. # 4.4.2 Importance of re-clustering between SemDeDup and SSL Prototypes As mentioned in Section 3.4, we hypothesize that sparsifying dense regions of space containing excessive semantic duplicates improves the clustering quality and is, therefore, critical to the performance of D4. To isolate the effect of re-clustering on D4, we run experiments with a version of D4 where we remove the re-clustering step (i.e., we keep the original clustering). As shown in Figure 7, omitting the re-clustering step significantly worsens performance, and we observe in the rightmost plot of Figure 7 that SemDeDup indeed removes extremely dense clusters surrounding centroids (i.e., duplicate-driven clusters). We analyze this in more depth in Section A.9. | 2308.12284#21 | 2308.12284#23 | 2308.12284 | [
"2006.05929"
]
|
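The correlation analysis summarized in the chunk above (Figure 6) boils down to computing Pearson coefficients between negative perplexities and mean downstream accuracy across training runs. A minimal sketch with SciPy follows; the array values are illustrative placeholders, not the paper's actual measurements.

```python
import numpy as np
from scipy.stats import pearsonr

# One entry per training configuration (data selection method x pretraining seed).
# Values below are made-up placeholders for illustration only.
neg_web_ppl = np.array([-15.3, -15.2, -15.1, -15.0, -15.2])       # negative web-snapshot perplexity
neg_instruct_ppl = np.array([-14.2, -14.0, -13.9, -13.7, -13.8])  # negative Instruct+Answers perplexity
downstream_acc = np.array([0.522, 0.528, 0.531, 0.534, 0.530])    # mean 0-shot accuracy over 16 tasks

for name, x in [("web snapshot", neg_web_ppl), ("Instruct+Answers", neg_instruct_ppl)]:
    r, p = pearsonr(x, downstream_acc)
    print(f"Pearson r between negative {name} PPL and downstream accuracy: {r:.3f} (p={p:.3f})")
```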
2308.12284#23 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure 7: perplexity on Web Snapshots, Non Web Snapshots, and Instruct + Answers vs. Selection Ratio (R), plus an Empirical CDF of Mean Distance to Centroid; legend: D4 with re-clustering vs. D4 without re-clustering; a "duplicate driven" region is annotated in the CDF panel] Figure 7: Investigating the necessity of the re-clustering step in D4. We see that re-clustering improves perplexity across Web snapshots (left), Non-web snapshots (middle-left), and Instruct + Answers (middle-right). Right: Empirical CDF of mean distance to centroid, with and without re-clustering. Re-clustering removes duplicate-driven clusters (clusters with low mean distance to centroid). | 2308.12284#22 | 2308.12284#24 | 2308.12284 | [
"2006.05929"
]
|
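The rightmost panel of Figure 7 is an empirical CDF over clusters of the mean cosine distance to the cluster centroid; clusters with very low mean distance are the "duplicate-driven" ones. A minimal sketch of that diagnostic, assuming unit-normalized embeddings and an existing cluster assignment (the function and variable names are ours, not the authors' code):

```python
import numpy as np

def mean_distance_to_centroid(embeddings: np.ndarray, labels: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Per-cluster mean cosine distance to the assigned centroid (embeddings and centroids unit-normalized)."""
    cos_dist = 1.0 - np.sum(embeddings * centroids[labels], axis=1)
    n_clusters = centroids.shape[0]
    sums = np.bincount(labels, weights=cos_dist, minlength=n_clusters)
    counts = np.maximum(np.bincount(labels, minlength=n_clusters), 1)
    return sums / counts

def empirical_cdf(values: np.ndarray):
    """Sorted values and their empirical CDF; plotting (xs, ys) reproduces a Figure 7-style curve."""
    xs = np.sort(values)
    ys = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys

# Clusters whose mean distance is tiny are dominated by near-duplicates, e.g.
# duplicate_driven = mean_dists < 0.1  (the threshold here is an illustrative choice)
```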
2308.12284#24 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | # 5 Summary and Limitations We introduced D4, a method for LLM pretraining data curation that improves training efficiency by 20% across multiple model scales, with larger gains at increased model scale. We also demonstrated that, in contrast to common practice, repeating data via epoching can be beneficial for LLM training, but only if the data subset is intelligently selected. While we have shown encouraging efficiency gains and performance improvements via D4, our work has several limitations and many future directions. Mixing different training distributions: While we chose one data distribution to both select data and train on, modern LLM setups usually mix different data sources. Our method is likely complementary to such pipelines: practitioners may use D4 to diversify and de-duplicate individual data sources and then mix data sources to provide additional diversity in their training dataset. We leave exploring the efficacy of D4 on a mix of training distributions as future work, but expect that this will yield further gains by reducing redundancy across datasets as well as within datasets. | 2308.12284#23 | 2308.12284#25 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#25 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Model scale: Due to compute limitations, the largest models we evaluated were 6.7B parameters trained on 100B tokens. While, to our knowledge, this is the largest application to date of embedding-based data curation approaches, further investigation at model scales exceeding 100B would be very interesting, particularly in light of our observation that the efficiency gain grows with model scale. # 6 Acknowledgements The authors would like to thank many people who helped bring this work to fruition: Srini Iyer, Yuchen Zhang, Todor Mihaylov, Jacob Xu, Moya Chen, Mansheej Paul, Mitchell Wortsman, Amro Abbas, Aaditya Singh, Myra Cheng, and Matthew Leavitt. The authors would also like to thank Surya Ganguli, Mona Diab, and Xian Li for initial brainstorming and are grateful for help with compute infrastructure given by Henry Estela and Victoria Lin. Lastly, the authors would like to thank anonymous reviewers for improving the quality and writing of this paper. # References [1] Amro Abbas, Kushal Tirumala, Daniel Simig, Surya Ganguli, and Ari S. | 2308.12284#24 | 2308.12284#26 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#26 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Morcos. Semdedup: Data-efficient learning at web-scale through semantic deduplication. ArXiv, abs/2303.09540, 2023. [2] Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, et al. Efficient large scale language modeling with mixtures of experts. arXiv preprint arXiv:2112.10684, 2021. [3] Stephen H. Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Févry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-David, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Alan Fries, Maged S. Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Mike Tian-Jian Jiang, and Alexander M. | 2308.12284#25 | 2308.12284#27 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#27 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Rush. Promptsource: An integrated development environment and repository for natural language prompts. ArXiv, abs/2202.01279, 2022. [4] Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. The pushshift reddit dataset. In Proceedings of the international AAAI conference on web and social media, volume 14, pages 830–839, 2020. [5] Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. | 2308.12284#26 | 2308.12284#28 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#28 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Pythia: A suite for analyzing large language models across training and scaling. arXiv preprint arXiv:2304.01373, 2023. [6] Vighnesh Birodkar, Hossein Mobahi, and Samy Bengio. Semantic redundancies in image-classification datasets: The 10% you don't need. arXiv preprint arXiv:1901.11409, 2019. [7] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. | 2308.12284#27 | 2308.12284#29 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#29 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 7432–7439, 2020. [8] Andrei Z Broder. On the resemblance and containment of documents. In Proceedings. Compression and Complexity of SEQUENCES 1997 (Cat. No. 97TB100171), pages 21–29. IEEE, 1997. [9] George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A Efros, and Jun-Yan Zhu. | 2308.12284#28 | 2308.12284#30 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#30 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Dataset distillation by matching training trajectories. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4750–4759, 2022. [10] Kashyap Chitta, José M. Álvarez, Elmar Haussmann, and Clément Farabet. Training data subset search with ensemble active learning. IEEE Transactions on Intelligent Transportation Systems, 23(9):14741–14752, 2021. [11] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. | 2308.12284#29 | 2308.12284#31 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#31 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. [12] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018. [13] Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. LLM.int8(): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339, 2022. [14] Bo Dong, Cristian Lumezanu, Yuncong Chen, Dongjin Song, Takehiko Mizoguchi, Haifeng Chen, and Latifur Khan. | 2308.12284#30 | 2308.12284#32 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#32 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | At the speed of sound: Efficient audio scene classification. In Proceedings of the 2020 International Conference on Multimedia Retrieval, ICMR '20, pages 301–305, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450370875. doi: 10.1145/3372278.3390730. URL https://doi.org/10.1145/3372278.3390730. [15] Vitaly Feldman and Chiyuan Zhang. | 2308.12284#31 | 2308.12284#33 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#33 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | What neural networks memorize and why: Discovering the long tail via influence estimation. Advances in Neural Information Processing Systems, 33:2881–2891, 2020. [16] Leo Gao, Stella Rose Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The pile: An 800gb dataset of diverse text for language modeling. ArXiv, abs/2101.00027, 2020. | 2308.12284#32 | 2308.12284#34 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#34 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [17] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020. [18] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. [19] Danny Hernandez, Tom B. Brown, Tom Conerly, Nova DasSarma, Dawn Drain, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, T. J. Henighan, Tristan Hume, Scott Johnston, Benjamin Mann, Christopher Olah, Catherine Olsson, Dario Amodei, Nicholas Joseph, Jared Kaplan, and Sam McCandlish. | 2308.12284#33 | 2308.12284#35 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#35 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Scaling laws and interpretability of learning from repeated data. ArXiv, abs/2205.10487, 2022. [20] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and L. | 2308.12284#34 | 2308.12284#36 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#36 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Sifre. Training compute-optimal large language models. ArXiv, abs/2203.15556, 2022. [21] Srinivas Iyer, Xiaojuan Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Veselin Stoyanov. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. ArXiv, abs/2212.12017, 2022. | 2308.12284#35 | 2308.12284#37 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#37 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [22] Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021. [23] Angela H Jiang, Daniel L-K Wong, Giulio Zhou, David G Andersen, Jeffrey Dean, Gregory R Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C Lipton, et al. Accelerating deep learning by focusing on the biggest losers. arXiv preprint arXiv:1910.00762, 2019. [24] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547, 2019. [25] Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. | 2308.12284#36 | 2308.12284#38 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#38 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Scaling laws for neural language models. ArXiv, abs/2001.08361, 2020. [26] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [27] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. | 2308.12284#37 | 2308.12284#39 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#39 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Deduplicating training data makes language models better. In Annual Meeting of the Association for Computational Linguistics, 2021. [28] Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning, 2012. [29] Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, et al. Deja vu: Contextual sparsity for efficient llms at inference time, 2023. | 2308.12284#38 | 2308.12284#40 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#40 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [30] S. Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David M. Mimno, and Daphne Ippolito. A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity. ArXiv, abs/2305.13169, 2023. [31] Kristof Meding, Luca M Schulze Buschoff, Robert Geirhos, and Felix A Wichmann. Trivial or impossible–dichotomous data difficulty masks model differences (on imagenet and beyond). arXiv preprint arXiv:2110.05922, 2021. | 2308.12284#39 | 2308.12284#41 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#41 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [32] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016. [33] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018. [34] Sören Mindermann, Jan M Brauner, Muhammed T Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Höltgen, Aidan N Gomez, Adrien Morisot, Sebastian Farquhar, et al. | 2308.12284#40 | 2308.12284#42 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#42 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Prioritized training on points that are learnable, worth learning, and not yet learnt. In International Conference on Machine Learning, pages 15630–15649. PMLR, 2022. [35] Baharan Mirzasoleiman, Jeff Bilmes, and Jure Leskovec. Coresets for data-efficient training of machine learning models. In International Conference on Machine Learning, pages 6950–6960. PMLR, 2020. [36] Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696, 2016. [37] Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. | 2308.12284#41 | 2308.12284#43 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#43 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Scaling data-constrained language models. 2023. [38] Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training. Advances in Neural Information Processing Systems, 34:20596–20607, 2021. [39] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: Outperforming curated corpora with web data, and web data only. [40] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. [41] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020. [42] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. | 2308.12284#42 | 2308.12284#44 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#44 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021. [43] Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. Are emergent abilities of large language models a mirage? arXiv preprint arXiv:2304.15004, 2023. [44] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489, 2017. [45] Mohammad Shoeybi, M Patwary, R Puri, P LeGresley, J Casper, and B Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint cs.CL/1909.08053, 2019. [46] Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990, 2022. [47] Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. | 2308.12284#43 | 2308.12284#45 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#45 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. ArXiv, abs/2206.14486, 2022. [48] Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems, 35:38274–38290, 2022. [49] Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018. [50] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. | 2308.12284#44 | 2308.12284#46 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#46 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Llama: Open and efficient foundation language models. ArXiv, abs/2302.13971, 2023. [51] Pablo Villalobos, Jaime Sevilla, Lennart Heim, Tamay Besiroglu, Marius Hobbhahn, and Anson Ho. Will we run out of data? an analysis of the limits of scaling datasets in machine learning, 2022. [52] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. | 2308.12284#45 | 2308.12284#47 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#47 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32, 2019. [53] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, M. Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddharth Deepak Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hanna Hajishirzi, and Daniel Khashabi. | 2308.12284#46 | 2308.12284#48 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#48 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. In Conference on Empirical Methods in Natural Language Processing, 2022. [54] Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. Ccnet: Extracting high quality monolingual datasets from web crawl data. ArXiv, abs/1911.00359, 2019. [55] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, and Adams Wei Yu. | 2308.12284#47 | 2308.12284#49 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#49 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Doremi: Optimizing data mixtures speeds up language model pretraining. ArXiv, abs/2305.10429, 2023. [56] Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy Liang. Data selection for language models via importance resampling. ArXiv, abs/2302.03169, 2023. [57] Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, and Yang You. | 2308.12284#48 | 2308.12284#50 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#50 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | To repeat or not to repeat: Insights from scaling llm under token-crisis. arXiv preprint arXiv:2305.13230, 2023. [58] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. [59] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. Opt: Open pre-trained transformer language models. ArXiv, abs/2205.01068, 2022. [60] Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. Dataset condensation with gradient matching. arXiv preprint arXiv:2006.05929, 2020. [61] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. | 2308.12284#49 | 2308.12284#51 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#51 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19–27, 2015. # A Appendix # A.1 Experimental Setup Details # A.1.1 Hyperparameters for model training As mentioned in Section 3.4, we use the same hyperparameters and configurations as the original OPT model architecture from Zhang et al. [59]. We describe these hyperparameters briefly in Table A1. We chose these configurations because they are openly available and have been used as the standard in many previous works [1, 13, 29, 48, 59]. All models use GELU activation [18] and the Adam optimizer [26] with β1 = 0.9, β2 = 0.95, ϵ = 1e-8, weight decay set to 0.1, and we clip gradient norms at 1.0. We use a polynomial learning rate schedule, where the learning rate warms up from 0.0 to the peak learning rate over the first 375 million tokens, and is then annealed to (0.1 * Peak LR) over the remaining (Ttarget - 375)M tokens. We train all our models in fully sharded data parallel mode [2] using Megatron-LM Tensor Parallelism [45] with fp16 precision. For reproducibility, we note the one (and perhaps only) difference from the original configuration in Zhang et al. [59]: we do not use dropout. Table A1: Model architecture details. Most of the parameter configurations are the same as in Table 1 of Zhang et al. [59]. Batch size denotes the total tokens that the model sees during one gradient descent update.
Model size | 8M | 125M | 1.3B | 6.7B
---|---|---|---|---
# layers | 4 | 12 | 24 | 32
# attention heads | 2 | 12 | 32 | 32
d_model | 128 | 768 | 2048 | 4096
Peak LR | 1.0e-3 | 6.0e-4 | 2.0e-4 | 1.2e-4
Batch size | 0.5M | 0.5M | 1M | 2M
# A.1.2 Dataset Curation Details In this subsection, we describe how we curate CC-dedup, the starting source dataset used throughout the paper. | 2308.12284#50 | 2308.12284#52 | 2308.12284 | [
"2006.05929"
]
|
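The learning-rate schedule described in A.1.1 (linear warmup to the peak LR over the first 375M tokens, then a polynomial decay to 0.1x the peak over the rest of the token budget) can be written as a small helper. This is a sketch of our reading of that description, not the authors' training code, and it assumes a decay power of 1 (linear decay):

```python
def lr_at_tokens(tokens_seen: float,
                 peak_lr: float,
                 target_tokens: float,
                 warmup_tokens: float = 375e6,
                 final_ratio: float = 0.1,
                 power: float = 1.0) -> float:
    """Polynomial LR schedule: 0 -> peak over warmup_tokens, then anneal to final_ratio * peak."""
    if tokens_seen < warmup_tokens:
        return peak_lr * tokens_seen / warmup_tokens
    progress = min((tokens_seen - warmup_tokens) / max(target_tokens - warmup_tokens, 1.0), 1.0)
    return peak_lr * (final_ratio + (1.0 - final_ratio) * (1.0 - progress) ** power)

# Example with the 1.3B config from Table A1 (peak LR 2.0e-4, 40B-token budget):
print(lr_at_tokens(375e6, 2.0e-4, 40e9))  # ~2.0e-4 at the end of warmup
print(lr_at_tokens(40e9, 2.0e-4, 40e9))   # ~2.0e-5 at the end of training
```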
2308.12284#52 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | We start with 5 CommonCrawl dumps² which range from 2017 to 2020. We then use CC-net [54] to de-duplicate data at the paragraph level, remove non-English web pages, and filter out low-quality pages. The pipeline we use is identical to the pipeline used in Touvron et al. [50] (see the section after the subtitle "English CommonCrawl [67%]", within Section 2). On top of this, we add an additional step of MinHash [8] de-duplication at the document level. The parameters for MinHash are 20 hashes per signature, 20 buckets, and 1 row per bucket. These parameters are the default parameters in the Spark implementation of MinHashLSH, and we did not do a hyperparameter sweep on these parameters due to compute limitations. Previous work has attempted running MinHash with much more aggressive parameters: Lee et al. [27] and Penedo et al. [39] use 20 buckets, 450 hashes per bucket, and 9000 signatures per hash. We conjecture that more aggressive MinHash would remove more templates, resulting in a higher-quality starting dataset, potentially making the SemDeDup step of D4 less necessary. Abbas et al. [1] did find that MinHash with the parameters of Lee et al. [27] and SemDeDup perform comparably at a fixed data selection ratio of 3.9% on C4, indicating that SemDeDup filters out data similar to what aggressive MinHash removes. We leave sweeping over these hyperparameters as future work. We note that since our dataset is curated from CommonCrawl dumps, there is a risk that our training set contains offensive or PII content. We note, however, that this risk is no more than that of standard language modeling curation such as Touvron et al. [50], since we use the same pipeline to filter CommonCrawl dumps. # A.1.3 Parameters for Data Selection All methods introduced in Section 3.4 involve clustering embeddings using K-Means. Our starting training dataset CC-dedup contains roughly 600 million documents in total. Running K-Means clustering on all 600 million 768-sized vectors would take a considerable amount of compute. Instead, we follow previous work [1, 47] and randomly sample roughly 100M documents with which to | 2308.12284#51 | 2308.12284#53 | 2308.12284 | [
"2006.05929"
]
|
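As a concrete illustration of the document-level MinHash de-duplication step described above, here is a minimal sketch using the datasketch library. The word-shingle size, the similarity threshold, and the number of permutations are illustrative assumptions and differ from the Spark MinHashLSH defaults the authors mention:

```python
from datasketch import MinHash, MinHashLSH

def shingles(text: str, n: int = 5):
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(max(len(toks) - n + 1, 1))}

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for s in shingles(text):
        m.update(s.encode("utf-8"))
    return m

def dedup(docs: dict) -> set:
    """Return the ids to keep: the first-seen document of each near-duplicate group."""
    lsh = MinHashLSH(threshold=0.8, num_perm=128)
    keep = set()
    for doc_id, text in docs.items():
        sig = minhash(text)
        if not lsh.query(sig):   # no near-duplicate indexed so far
            keep.add(doc_id)
        lsh.insert(doc_id, sig)
    return keep
```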
2308.12284#53 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | ² https://commoncrawl.org/the-data/get-started/ calculate centroids. We normalize the embeddings for these 100M documents to have an L2-norm of 1.0, and then use faiss [24] with the following parameters:
kmeans = faiss.Kmeans(
    768,    # 125M OPT model embedding size
    11000,  # 11K clusters
    niter=20,  # 20 iterations
    verbose=True,
    seed=0,
    gpu=False,
    spherical=True,
    min_points_per_centroid=1,
    max_points_per_centroid=100000000,
) | 2308.12284#52 | 2308.12284#54 | 2308.12284 | [
"2006.05929"
]
|
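Once the spherical K-Means above has been trained, documents can be assigned to centroids and given a cosine distance to their centroid, which is the quantity that SemDeDup and SSL prototypes operate on. A minimal sketch (the variable names and the small cluster count are ours, for illustration only):

```python
import numpy as np
import faiss

# emb: (num_docs, 768) float32 document embeddings
emb = np.random.rand(10000, 768).astype("float32")
faiss.normalize_L2(emb)  # unit-normalize so inner product equals cosine similarity

kmeans = faiss.Kmeans(768, 64, niter=20, spherical=True, seed=0)  # 64 clusters just for this toy example
kmeans.train(emb)

centroids = kmeans.centroids.copy()   # (n_clusters, 768)
faiss.normalize_L2(centroids)
cos_sim = emb @ centroids.T           # cosine similarity to every centroid
labels = cos_sim.argmax(axis=1)       # assigned cluster per document
cos_dist = 1.0 - cos_sim[np.arange(len(emb)), labels]  # distance to the assigned centroid
```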
2308.12284#54 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | We choose 11000 clusters following previous work [1] and we note that this choice sticks to the heuristic that the number of clusters should roughly be the square root of the number of total points being clustered. We also note that in initial experiments for data selection at the 125M OPT model scale, we did not find a significant effect of the number of clusters on the performance of our data selection methods (see Figure A1); this finding agrees with Abbas et al. [1], who notice significant overlap between datasets selected by SemDeDup with different numbers of clusters (see Figure A2 in Abbas et al. [1]). | 2308.12284#53 | 2308.12284#55 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#55 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A1 legend: 1K / 10K / 11K / 100K / 1M clusters; panels: Non-web Snapshots and Instruct OPT; y-axis: Change in PPL Compared to Baseline; x-axis: Source Dataset Size (1.00x to 5.00x and infinity)] Figure A1: Effect of number of clusters in K-Means on data selection performance. All models are 125M OPT models, where the training set (and starting source dataset) is C4 and we select data with SSL prototypes. The y-axis is the change in perplexity compared to baseline training, meaning that baseline training is at 0.0, and going down on the graphs indicates better performance. The x-axis is the source dataset size. We show results for average perplexity on Non-web snapshot validation sets (left) and Instruct + Answers (right). We notice that there is not a significant difference when changing the number of clusters (e.g. if we drew error bars around each line, they would all be overlapping), but 11K clusters is generally among the top-3 best performing methods. We deliberately set min points per centroid low and max points per centroid high so that faiss does not attempt to manually balance the clusters while doing K-Means. Sorscher et al. [47] found that explicit class-balancing is important: they introduce the "class balance score" (see Section H of Sorscher et al. [47]), which is the expectation of the ratio (size of minority class) / (size of majority class) over all pairs of classes. They then set a hard limit for the class balance score of 0.5, meaning that "every class has at least 50% of the images that it would have when pruning all classes equally" [47]. We consider the unsupervised-learning analog of the class-balance score, which we refer to as the "cluster balance" score. The cluster balance score is the expectation of the ratio (size of smaller cluster) / (size of bigger cluster) over all pairs of clusters. Across all of our data selection methods (and choices for R) we find that this value is generally equal to or bigger than 0.5 without any explicit intervention. For this reason, we do not | 2308.12284#54 | 2308.12284#56 | 2308.12284 | [
"2006.05929"
]
|
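A small sketch of the "cluster balance" score as we read its definition above (the expected ratio of the smaller to the larger cluster size over all pairs of clusters); this is our interpretation, written for clarity rather than efficiency:

```python
import numpy as np
from itertools import combinations

def cluster_balance_score(labels: np.ndarray) -> float:
    """Mean over all cluster pairs of (smaller cluster size / bigger cluster size)."""
    sizes = np.bincount(labels)
    sizes = sizes[sizes > 0]
    ratios = [min(a, b) / max(a, b) for a, b in combinations(sizes, 2)]
    return float(np.mean(ratios))

# A perfectly balanced clustering scores 1.0; the paper reports values >= 0.5 without any intervention.
print(cluster_balance_score(np.array([0, 0, 1, 1, 2, 2, 2])))
```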
2308.12284#56 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | explicitly cluster balance, although we note that changing how many points are sampled from each cluster (based on properties of the cluster) is very interesting future work. D4 parameters: The choice of the parameters Rproto and Rdedup while using D4 will have an impact on the performance of D4. Given limited compute, we are not able to sweep over these hyperparameters. Instead, we strategically choose these parameters: we first look at the highest value of R in SemDeDup that results in perplexity improvement across validation sets. We choose the "highest value" because the purpose of SemDeDup is to remove duplicate-driven clusters, and low R with SemDeDup generally removes more than just templates/semantic duplicates. As seen in Section A.3, this generally occurred with Rdedup = 0.75. Thus, we chose Rdedup = 0.75 and varied Rproto to obtain different data selection ratios for D4. # A.1.4 Which validation sets go into the averages? For clarity, we explicitly state the validation sets which we consider "Web Snapshots", "Non Web Snapshots", and "Instruct + Answers" when reporting averages: Web Snapshots: perplexity on the validation sets of C4, CC-dedup, and CommonCrawl (from the Pile). Non-web Snapshots: perplexity on other validation sets from the Pile, comprising OpenWebText2, HackerNews, Wikipedia (en), BookCorpusFair, DM Mathematics, Gutenberg PG-19, OpenSubtitles, and USPTO. Also included in this average are "redditflattened" (validation set from Pushshift.io Reddit [4]), "stories", "prompts_with_answers" (which is described below) and "prompts" (which is the same as "prompts_with_answers" but where each sample is just the instruction-tuning prompt without the answer). Instruct + Answers: perplexity on instruction-tuning data from OPT-IML [21], where each sample contains both the instruction-tuning prompt and the answer (in Figure A4 this is referred to as "prompts_with_answers"). While the validation sets in web-snapshots and non-web snapshots are clear (they are either standard open-sourced datasets, or derived from commonly used data), we expect that the "Instruct + Answers" data might be new to some readers. | 2308.12284#55 | 2308.12284#57 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#57 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | We provide a few examples of what this validation set looks like in Table A2. # Table A2: Examples from "Instruct + Answers" validation set # Raw Text Instructions: In this task, you are given two phrases: Head and Tail, separated with <sep>. The Head and the Tail events are short phrases possibly involving participants. The names of specific people have been replaced by generic words (e.g., PersonX, PersonY, PersonZ). PersonX is always the subject of the event. | 2308.12284#56 | 2308.12284#58 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#58 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | You have to determine whether the Head is located or can be found at/in/on the Tail or not. Classify your answers into "Yes" and "No". The phrase may also contain "___", a placeholder that can be an object, a person, and/or an action.Input: Head: PersonX acknowledges gratefully the ___<sep>Tail: to use it Output: No Read the given sentence and if it is a general advice then indicate via "yes". Otherwise indicate via "no". advice is basically offering suggestions about the best course of action to someone. advice can come in a variety of forms, for example Direct advice and Indirect advice. (1) Direct advice: Using words (e.g., suggest, advice, recommend), verbs (e.g., can, could, should, may), or using questions (e.g., why donâ | 2308.12284#57 | 2308.12284#59 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#59 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | t youâ s, how about, have you thought about). (2) Indirect advice: contains hints from personal experiences with the intention for someone to do the same thing or statements that imply an action should (or should not) be taken. Input: Let it go. Output: yes" Instructions: You are given a sentence in English. Your job is to translate the English sentence into Italian. No! Demand to understand. Ask. Answer: No! Esigete di comprendere. Chiedete. Task: In this task you will be given a list of integers. You should round each integer to the nearest tens place. That means you should round the number to the nearest multiple of 10.Input: [528, -636, -686, 368, -433, 992, 886] Answer: [530, -640, -690, 370, -430, 990, 890] 16 | 2308.12284#58 | 2308.12284#60 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#60 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | # A.2 Efficiency gains across model scales and training In this section, we investigate the relationship between model scale and the performance gain obtained by selecting data via D4. Specifically, we train three groups of models: 125M OPT models trained on Ttarget = 3B tokens, 1.3B OPT models trained on Ttarget = 40B tokens, and 6.7B OPT models trained on Ttarget = 100B tokens. We notice in Figure A2 that D4 results in efficiency gains across the board in terms of perplexity. Surprisingly, these efficiency gains seem to increase with scale, indicating that at bigger model scales, D4 might lead to even more efficiency gains. We also see efficiency gains in 0-shot downstream accuracy on the order of 30% for both the 1.3B and 6.7B models, but we note that evaluating downstream performance on intermediate checkpoints is not completely fair due to the unfinished learning rate schedule. Nonetheless, we see that downstream accuracy efficiency gains are not decreasing with scale. | 2308.12284#59 | 2308.12284#61 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#61 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A2: Non Web Snapshots and Instruct + Answers perplexity, and 0-shot downstream accuracy, vs. number of updates for Baseline and D4; annotated efficiency gains range from roughly 7% to 35% faster across model scales] Figure A2: Training trajectory of OPT models trained on raw data (gray line) and data selected via D4 (pink line). Across model scales (1st row: 8M OPT models trained on 2B tokens, 2nd row: 125M OPT models trained on 3B tokens, 3rd row: 1.3B OPT models trained on 40B tokens, 4th row: 6.7B OPT models trained on 100B tokens), we see significant efficiency gains in both perplexity (left two columns) and 0-shot downstream accuracy on 16 NLP tasks (right column). Importantly, we see that increasing model scale does not decrease efficiency gains. All plots show mean and standard error across three seeds, except for the last row. We do not evaluate downstream accuracy for models smaller than 1.3B because they are likely too close to random performance to indicate whether a particular data selection method is better. # A.3 Individual Breakdowns of Downstream Accuracy and PPL In Section 4, we see that D4, SSL prototypes, and SemDeDup achieve significant gains on perplexity (averaged across different validation sets) and downstream accuracy (averaged across different NLP tasks) compared to baseline training. Further, we generally see that D4 outperforms SSL prototypes and SemDeDup. In this section, we provide a more fine-grained analysis of these claims across individual tasks. For perplexity, we notice in Figure A4 that the claims in Section 4 generally hold across validation sets: for web snapshot validation sets such as C4, CC-dedup, and CommonCrawl, we see performance | 2308.12284#60 | 2308.12284#62 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#62 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A3: per-task 0-shot accuracy curves vs. Selection Ratio (R) for baseline, SemDeDup, SSL prototypes, and D4 across the 16 NLP tasks] Figure A3: Per-task breakdown of 0-shot downstream accuracy comparison across data selection methods, for the 1.3B OPT model trained on 40B tokens. For a description of the 16 NLP tasks shown above, see Section 3.4. We note that there is considerable variability across individual downstream tasks. worsens with data selection compared to baseline training, and that D4 generally has the slowest rate of performance degradation. | 2308.12284#61 | 2308.12284#63 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#63 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | We note that, across all non web-snapshot validation sets, there is no clear winner among data selection methods. We emphasize, however, that we observe consistent improvement over baseline training on most validation sets we use; for example, in Figure A4 we observe that, when selecting tokens from a 1.25x source dataset, all data selection methods improve over baseline across all validation sets except C4 and CC-dedup (however, as we explain in Section 4.4, this decrease in performance on C4 and CC-dedup is expected). For downstream accuracy, we chose to match the exact downstream evaluation done in Zhang et al. [59] since we use the OPT architecture and hyperparameters. Similar to Zhang et al. [59], we notice considerable variability across the 16 NLP tasks in Figure A3, motivating us to look at the mean downstream accuracy across tasks. | 2308.12284#62 | 2308.12284#64 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#64 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A4: grid of per-validation-set perplexity curves vs. Selection Ratio (R), one panel per validation set] Figure A4: Perplexity as a function of source dataset size for 1.3B OPT model 40B token training runs, across data selection runs. | 2308.12284#63 | 2308.12284#65 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#65 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | 19 2650 ZA 12.525 t 2645 t 12.500 / 16.40 -< nas VA 310 36.35 if. 22450 Va VA a KAT Les At? 16 1625 12.400 360 37s 1620 V7, 16s Le Wi -~ iad 12.325 100 080 060 040 020 0.00 100 080 060 0.40 0.20 0.00 20 0.00 100 0.80 0.60 0.40 020 0.00 19.20 4.750 a9.as ans 19.10 4.700 19.05 4.675 19.00 4.650 18.95 4.625 18.90 18.85 4.600 18.80 136 4575 100 080 060 040 020 0.00 100 080 060 0.40 0.20 0.00 100 080 0.60 0.40 0.20 0.00 100 0.80 0.60 0.40 020 0.00 23,10 1390 23.05 +4 90 116 23.00 13.85 89 ana 22.95 13.80 g E2290 13.75 \ nas 4 a = 22.80 Na IOS ET» \ 100 080 060 040 020 0.00 100 080 0.60 0.40 0.20 0.00 100 080 0.60 0.40 0.20 0.00 100 080 0.60 0.40 020 0.00 210 12.05 N\ XN : re â GF 139 KS ao Nw Esa RSE. Sf see} | el anes N VW 137 \ 138 \ asa0 # Selection Ratio (R) Figure A4: Perplexity as a function of source dataset size for 1.3B OPT model 40B token training runs, across data selection runs. | 2308.12284#64 | 2308.12284#66 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#66 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Each plot above represents perplexity on an individual validation set (see Section 3.4 for more information). Mean and standard error across 3 seeds is shown (standard error is denoted by shaded regions). # A.4 SSL prototypes and SemDeDup overlap Figure A5 shows the overlap between datasets selected by SemDeDup and SSL Prototypes. While the two methods do not arrive at the same set of data points, there is a significant overlap between the datasets curated by the two methods. We hypothesize that this is because both SSL prototypes and SemDeDup prune away dense regions of space surrounding cluster centroids: by definition, SemDeDup sparsifies dense regions of space within a cluster; similarly, by definition, SSL prototypes will prune away datapoints close to the cluster centroids. Since K-means clustering places centroids in dense regions of space (see Figure A6 where we observe that the distribution of cosine distances to cluster centroid is skewed right), we know that the regions of space surroundings centroids will be dense, and expect SSL prototypes and SemDedup to have significant overlap. Qualitatively, we inspect a few examples of points close to cluster centroids in Figure A3, Figure A4, Figure A5, and see that examples close to cluster centroids can be semantically redundant (e.g. templates). Therefore, it makes sense that any reasonable data selection strategy would prioritize sparsifying these dense regions of space surrounding cluster centroids. As mentioned in Section 3.4, sparsifying these dense regions of space containing excessive semantic duplicates is the original motiviation behind D4. As | 2308.12284#65 | 2308.12284#67 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#67 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | 20 Source Dataset Size: 4x (R= 0.25) |, Source Dataset Size: 2x (R = 0.5) 100 Source Dataset Size: 1.33x(R=0.75) 1, 3 - 100.00 90 3 - 100.00 3 - 100.00 = 100,00 100.00 90 semdedup = semdedup ssl_proto Bunrasiaqul eyed Bulured Jo % Bunrasiaqul eyed Bulured Jo % Bunrasiaqul eyed Bulured Jo % random random 75 y 1 y 1 y y 1 D4 semdedup ss!_proto random semdedup ss|_proto random D4 semdedup ss! proto random Figure A5: Similarity between data selection methods. Each square represents the percentage of training data that is intersecting, when selecting data via two different strategies. The x and y axis enumerate different data selection strategies. shown in Figure 7, omitting the re-clustering step significantly worsens performance, and we observe in the rightmost plot of Figure 7 that SemDeDup indeed removes duplicate-driven clusters. Distribution of cosine distance to cluster centroids 1e6 Count 0.2 0.4 0.6 0.8 Distance to cluster centroid Figure A6: Distribution of cosine distance to cluster centroids for 50M randomly selected documents from the training set of CC-dedup. We notice that the distribution is skewed right, implying that datapoints are generally close to centroids. # Investigating Train-Validation overlap As briefly described in Section 4.4, we observe that many of our validation sets are close (in cosine distance) to our training sets, and the impact of data selection is varies across individual validation sets. Individual validation sets live in different regions of the embedding space, and as such they are affected differently by data selection. For example, one could imagine that web-snapshot validation sets such as C4 is close to CC-dedup in the embedding space, while esoteric validation sets (such as Gutenberg PG 19 or DM Mathematics) might be far. To quantify this, we first find the nearest neighbors in the training set to each validation point in all of our validation sets. | 2308.12284#66 | 2308.12284#68 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#68 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | We then qualitatively check (see Table A8 and Table A9 for examples) that nearest-neighbors in the training set truly convey information about validation points. we observe significant overlap between training points and validation points. We then quanitatively analyze how close each validation set is to the training set: in Figure A12, we show the breakdown of this distribution for each validation set. We see a general trend, that web-snapshots validation sets are closest to the training set as they are skewed to the right, while more esoteric validation sets (Gutenberg, or Wikipedia (en)) are more centered or even slightly left-skewed. Motivated by this, we compare validation sets side-by-side (in terms of distance to training set) in Figure 5, and we see a similar trend. To further understand why different validation sets are affected differently by data selection, we loop through each data point in the validation set and record: | 2308.12284#67 | 2308.12284#69 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#69 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | 21 â ¢ distance to the training set e.g. how close is the validation point to the training set â ¢ perplexity difference before and after data selection with D4 e.g. how much was this validation point affected by data selection â ¢ original perplexity e.g. how easy was this data point originally In Figure A11, we observe an interesting trend: for web-snapshot validation sets such as C4, the validation points closest to the training set are both (1) the easiest (lowest perplexity) points before data selection and (2) the points most affected by data selection. This seems to indicate that these validation points are "easy" due to their proximity to training points, and when these training points are removed from the training set due to data selection, the close-by validation points become difficult for the model. We do not see this trend on non-web snapshot validation sets such as DM Mathematics and Open Subtitles; in fact, we see an opposite trend where points furthest from the training set are generally most affected by data selection. As a sanity check, we change the sizes of validation sets used to plot Figure 5 in Section 4.4. We see in Figure A8 that controlling for validation set size, we get the same jump going from web-derived to web-independent validation sets. In running this experiment, we are forced to randomly sample if the particular validation set is too big; to ensure that such random sampling does not change the distance to nearest neighbor in the training dataset too much, we vary the amount we sample for three differently sized datasets in Figure A7. We observe that changing the amount we randomly sample from a validation set does not significantly change the mean distance to nearest neighbor in train. We also investigate whether the differences between validation sets in Figure 5 is due to training set size. We would expect that smaller training sets are "further" from validation sets, since (). Indeed we see this in Figure A9. However, we observe that the relative ordering of validation sets (with respect to average distance to the training set) remains the same for any fixed training dataset size. Moreover, we see in Figure A10 that the relative ranking of all validation sets as well as the jump from web-derived to web-independent validation sets from the original Figure 5 holds, even as we reduce training dataset size. | 2308.12284#68 | 2308.12284#70 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#70 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | â e c4 â *â DM _Mathematics â eâ OpenSubtitles Changing validation set sizes 0.45 4 \ 0.40 4 0.30 4 0.25 4 0.20 4 0.9%00090990900000000000009000800000000000000000000000000000000000000000 00090200808 00GCOF09008000900 T T T T T T 0.0 0.2 0.4 0.6 0.8 1.0 Fraction of Validation Set 2 | 2308.12284#69 | 2308.12284#71 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#71 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Mean distance to train nearest neighbor Figure A7: Studying the effect of validation set size on cosine distance to nearest-neighbor in training set. On the x-axis, we vary the size of the validation set (by randomly sampling the original larger validation set), and the y-axis represents distance to nearest neighbor in the training set (averaged across the validation set). We observe that regardless of what fraction of the original validation set is sampled, the mean distance to the nearest neighbor in train does not change, indicating that Figure 5 is not due to different validation set sizes. | 2308.12284#70 | 2308.12284#72 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#72 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | 22 # rt # a # rt Figure 5, with each validation set the same size (50 points) 0.74 rT £ 06+ id Ee © _ > 05-4 2 _ _ 2 > | â | ~ oa + _ 8 â ¬ _ $034 x a ry z _ | By a a £ 024 rN a ° A 8 + + 01-4 a â L 1 1 1 fs es & & ss Figure A8: | 2308.12284#71 | 2308.12284#73 | 2308.12284 | [
"2006.05929"
]
|
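The train-validation overlap analysis in this part of the appendix repeatedly uses, for each validation example, the cosine distance to its nearest neighbor in the training set (Figure 5 and Figures A7-A12). A minimal sketch of that computation with faiss, assuming document embeddings as inputs (the function and variable names are ours, not the authors'):

```python
import numpy as np
import faiss

def nn_distance_to_train(train_emb: np.ndarray, valid_emb: np.ndarray) -> np.ndarray:
    """Cosine distance from each validation embedding to its nearest training embedding."""
    train_emb = np.ascontiguousarray(train_emb, dtype="float32")
    valid_emb = np.ascontiguousarray(valid_emb, dtype="float32")
    faiss.normalize_L2(train_emb)
    faiss.normalize_L2(valid_emb)
    index = faiss.IndexFlatIP(train_emb.shape[1])  # inner product == cosine similarity on unit vectors
    index.add(train_emb)
    sims, _ = index.search(valid_emb, 1)           # top-1 neighbor per validation point
    return 1.0 - sims[:, 0]

# The mean over a validation set gives one point of Figure 5; a histogram gives one panel of Figure A12.
```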
2308.12284#73 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Investigating whether Figure 5 changes if we control for validation set size. In the Figure above, each validation set contains 50 data points, which is the size of the smallest validation set we use (BookCorpusFair). If a validation set is bigger than 50 data points, we randomly sample the validation set to obtain 50 data points. # c4 # DM_Mathematics # OpenSubtitles # â e â *â â * Distance to Nearest Neighbor in Train vs. Training Set Size Mean distance to train nearest neighbor ~o T T 1 105 10+ 103 102 1o-? 10° Fraction of Training Set | 2308.12284#72 | 2308.12284#74 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#74 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Figure A9: Studying the effect of training set size on cosine distance to nearest-neighbor in training set. On the x-axis, we vary the size of the training set (by randomly sampling the original training set), and the y-axis represents distance to nearest neighbor in the training set (averaged across the validation set). We observe that cosine distance to the training set increases with smaller training sets, but the relative ordering of validation sets (with respect to mean distance to training set) remains the same. | 2308.12284#73 | 2308.12284#75 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#75 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A10 panels titled "Frac of Training Data: 1e-05 / 0.0001 / 0.001 / 0.01 / ..."; y-axis: Cosine Distance to NN in Train] Figure A10: Investigating whether Figure 5 changes if we change training set size. In the figure above, each plot randomly samples a fraction of the training set (the fraction is denoted by the title of the plot). We see that the relative ranking of the validation sets generally remains the same, and there is consistently a jump between web-derived and web-independent validation sets. | 2308.12284#74 | 2308.12284#76 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#76 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A11 columns: DM_Mathematics, OpenSubtitles, C4; rows: count histogram, original PPL, PPL difference after data selection; x-axis: Cosine Distance to NN in train (binned)] Figure A11: (Top): Histogram of cosine distance to nearest neighbor in train. Within each bin, we show the mean original perplexity (middle) and mean difference in perplexity after data selection (bottom), for DM_Mathematics (left), OpenSubtitles (middle), and C4 (right). We note that points in the C4 validation set closest to the training set are both "easy" (perhaps because of proximity to training points) and are affected the most by data selection. We do not see this trend for non-web snapshot validation sets such as DM_Mathematics and OpenSubtitles. | 2308.12284#75 | 2308.12284#77 | 2308.12284 | [
"2006.05929"
]
|
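A small sketch of the binned analysis behind Figure A11, under assumed array names: validation points are bucketed by their cosine distance to the nearest training neighbor, and a per-point metric (original perplexity, or the change in perplexity after data selection) is averaged within each bucket.

```python
import numpy as np

def binned_mean(nn_dist: np.ndarray, metric: np.ndarray,
                n_bins: int = 8, lo: float = 0.0, hi: float = 0.8):
    """Mean of `metric` within equal-width bins of nearest-neighbor distance."""
    edges = np.linspace(lo, hi, n_bins + 1)
    bin_idx = np.clip(np.digitize(nn_dist, edges) - 1, 0, n_bins - 1)
    means = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            means[b] = metric[mask].mean()
    return edges, means

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nn_dist = rng.uniform(0.0, 0.8, size=500)        # distance to NN in train
    delta_ppl = rng.normal(-0.05, 0.02, size=500)    # hypothetical change in ppl after selection
    edges, means = binned_mean(nn_dist, delta_ppl)
    for left, right, m in zip(edges[:-1], edges[1:], means):
        print(f"[{left:.1f}, {right:.1f}): {m:.4f}")
```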
2308.12284#77 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A12 panels (continued in the next chunk): histograms of cosine distance to nearest neighbor in train for BookCorpusFair, DM_Mathematics, HackerNews, OpenWebText2, Wikipedia_en, dialogue_knowledge, prompts_with_answers, stories, CommonCrawl, Gutenberg PG-19, OpenSubtitles, and USPTO.] | 2308.12284#76 | 2308.12284#78 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#78 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A12 panels, continued: prompts and redditflattened; x-axis is cosine distance to nearest neighbor in train.] Figure A12: Distribution of cosine distance to nearest neighbor in the training set, for each individual validation set. # A.6 Further investigation of repeating tokens In this section, we investigate whether the findings from Section 4.2 hold across model scale, data selection ratio (e.g. number of epochs), and data selection method. Across data selection methods: We first take the same configuration as Section 4.2, where we have a starting source dataset of 40B tokens, use each of our data selection methods with R = 0.25 to select a subset of documents, and repeat over these documents until we reach the target token budget of 40B tokens. Note that this is at the 1.3B model scale. In Figure A13 we see that repeating data selected by both SemDeDup and SSL prototypes also outperforms randomly selecting new data. However, we quickly notice that for a fixed data selection strategy (e.g. a fixed column in Figure A13), repeating tokens either underperforms or merely matches selecting new tokens. | 2308.12284#77 | 2308.12284#79 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#79 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | In other words: cleverly repeating tokens can outperform randomly selecting new tokens, but if we fix the data selection strategy (random, SemDeDup, SSL prototypes, or D4) then it is usually preferable to select new tokens. We also note in Figure A16 that D4 outperforms other methods, although by a smaller margin than in the fixed-compute regime. Across model scale and data selection ratio: We fix our data selection strategy as D4 as done in Section 4.2, but attempt repeating tokens across 3 model scales (125M, 1.3B, and 6.7B), and across | 2308.12284#78 | 2308.12284#80 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#80 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A13 plot: legend has Random, D4, SSL proto, and SemDeDup, each as "New Tokens" (solid) and "Repeated Tokens" (dashed); one column per method; rows show non-web-snapshot PPL and Instruct + Answers PPL; x-axis is num updates.] Figure A13: Effect of repeating tokens across data selection methods over training. X-axis denotes the number of updates, and the y-axis denotes average perplexity across non-web-snapshot validation sets (top row) and Instruct OPT (bottom row). Each column in the plot above denotes a different data selection method. Within each column: (1) the gray line denotes baseline training, (2) the colored-dashed line denotes repeating tokens via the specified data selection method, and (3) the colored-solid line denotes selecting new tokens via the specified data selection method. Repeating data is generally worse than selecting new data for a fixed data selection method (e.g., fixed column). data selection ratios (R = 0.5 and R = 0.25). We see in Figure A15 that repeating data with D4 outperforms randomly selecting new tokens across all model scales and choice of R. We note that for fixed R, different data selection methods will choose subsets of the source dataset that contain different amounts of tokens. This means that different data selection methods will epoch a different number of times. | 2308.12284#79 | 2308.12284#81 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#81 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | For example, for a 1.3B OPT model 40B token budget training run, if randomly repeating data with R = 0.25 chooses a subset with 10B tokens and D4 with R = 0.25 chooses a subset with 15B tokens, then the random run will epoch 4 times while the D4 run will epoch 2.67 times. To show this more clearly, we plot 1.3B and 6.7B repeated data runs with the x-axis changed to number of epochs in Figure A14. We see that up to roughly 2 epochs of data chosen with D4 significantly outperforms randomly selected new data; however, close to 5 epochs leads to worse performance. [Figure A14 plot: legend has Random (repeated tokens) and D4 (repeated tokens); panels are 1.3B Non Web Snapshots, 1.3B Instruct + Answers (ppl), 6.7B Non Web Snapshots, and 6.7B Instruct + Answers (ppl); x-axis is number of epochs, y-axis is PPL.] Figure A14: Comparison of repeating tokens with D4 (pink line), randomly selecting new tokens (horizontal dashed gray line), and randomly repeating data (gray line), shown for different epoch numbers. The y-axis denotes perplexity, and the x-axis denotes number of epochs. | 2308.12284#80 | 2308.12284#82 | 2308.12284 | [
"2006.05929"
]
|
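The epoch counts quoted in the example above follow directly from dividing the token budget by the number of tokens in the selected subset; a tiny helper (using the hypothetical 10B and 15B subset sizes from the text) makes the bookkeeping explicit.

```python
def num_epochs(token_budget: float, tokens_in_subset: float) -> float:
    """Number of passes over the selected subset needed to fill the token budget."""
    return token_budget / tokens_in_subset

# Example mirroring the text: 40B-token budget, R = 0.25 applied to a 40B-token source.
print(num_epochs(40e9, 10e9))  # random subset that happens to contain 10B tokens -> 4.0 epochs
print(num_epochs(40e9, 15e9))  # D4 subset that happens to contain 15B tokens -> ~2.67 epochs
```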
2308.12284#82 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A15 plot: legend has Random (repeated tokens) and D4 (repeated tokens); columns are Web Snapshots, Non-web Snapshots, and Instruct + Answers (ppl); x-axis is Selection Ratio (R).] Figure A15: Comparison of repeating tokens with D4 (pink line), randomly selecting new tokens (horizontal dashed gray line), and randomly repeating data (gray line). We see across model scales (top: 125M trained on 3B tokens; middle: 1.3B trained on 40B tokens; bottom: 6.7B trained on 100B tokens) and data selection ratios, repeating data selected by D4 outperforms randomly selecting new data. | 2308.12284#81 | 2308.12284#83 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#83 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A16 plot: legend has Random, SemDeDup, SSL Prototypes, and D4 (all repeated tokens); panels are Non-web snapshots and Instruct + Answers (ppl); x-axis is Selection Ratio (R).] Figure A16: Comparison of data selection methods when repeating data at the 125M, 3B token budget scale. The x-axis is data selection ratio R, and the y-axis is average perplexity on validation sets. We observe that selecting data to repeat via D4 outperforms other data selection methods, especially at low selection ratios R (note that low selection ratios in the fixed-data regime correspond to more epochs). # A.7 Choice of Embedding Space All data selection methods we employ rely heavily on the quality of the underlying embedding space. We qualitatively analyzed the embeddings produced by the last-token last-layer OPT 125M model and observed a bias towards end-of-document format. For example, if documents all end with an email or a standard phrase ("Buy our product today!"), then these documents would be clustered together. This likely helps detect templates (since templates tend to end their text in very similar ways), but has | 2308.12284#82 | 2308.12284#84 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#84 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | clear pitfalls: for example, if we took thousands of Wikipedia articles about unrelated topics and appended the same email at the end of each article, they might be clustered together. Motivated by this, we briefly experiment with different embedding spaces and discuss our results in this section. # A.7.1 SentenceTransformer models BERT embeddings have generally been used to accomplish various NLP tasks, because BERT (unlike GPT/OPT) is able to attend to every token in the input when producing an embedding (BERT is an encoder-only model, while OPT/GPT are decoder-only). While there are numerous BERT-style models available, we hoped to achieve an embedding space that focused on semantic similarity. Thus, we opted to use the widely popular SentenceTransformer models 3, which are BERT-style models finetuned specifically on >1B text similarity pairs. We chose the top model on the SentenceTransformer leaderboard (all-mpnet-base-v2) and the smallest well-performing model (all-Mini-LM-v6). Note that these models have max context lengths of 256 and 384 (respectively), and we stuck with the SentenceTransformer default of truncating inputs to fit the max sequence length (i.e. these embeddings only consider the beginning of documents). We observe in Figure A17 that at small model scales, sentence transformer embedding spaces outperform the OPT embedding space. Given these initial results, we took our most overall efficient embedding space at the 1.3b model scale ("all-mini-lm-v6") and ran a 6.7b training run with it. Surprisingly, we observed that at larger model scale, the OPT embedding space outperforms the "all-mini-LM-v6" embedding space. Given that the difference between "all-mini-LM-v6" and "all-mp-net-base-v2" is generally small (see Figure A17), we also expect the OPT embedding space to beat "all-mpnet-base-v2" at the 6.7b scale, although we were not able to complete this run due to compute restrictions. We see the same trend when we consider overall and naive efficiency of using D4 with different embedding spaces in Figure A18. In an effort to understand why SentenceTransformer embedding spaces perform worse at larger model scales, we qualitatively analyze the clusterings with each SentenceTransformer embedding space. | 2308.12284#83 | 2308.12284#85 | 2308.12284 | [
"2006.05929"
]
|
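For reference, a minimal sketch (assuming the Hugging Face facebook/opt-125m checkpoint and the transformers API) of the last-token, last-layer document embedding discussed in this section; the paper's exact extraction pipeline may differ.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModel.from_pretrained("facebook/opt-125m")
model.eval()

@torch.no_grad()
def last_token_embedding(text: str, max_length: int = 2048) -> torch.Tensor:
    """Last-layer hidden state of the final token of a (possibly truncated) document."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
    out = model(**enc, output_hidden_states=True)
    return out.hidden_states[-1][0, -1]  # shape: (hidden_dim,)

emb = last_token_embedding("Buy our product today!")
print(emb.shape)
```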
2308.12284#85 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | We find that using D4 with "all-mp-net-base-v2" and "all-mini-lm-v6" disproportionately prunes long documents. We hypothesize that this is because sentence transformer models are trained and finetuned on actual sentence pairs, which very rarely saturate the max context length of the model. This might result in all "long" documents (or at least any input that is max-context-length size) seeming out-of-distribution to the model. We suspect that this causes long documents to be clustered together, and therefore to be disproportionately affected during pruning. This might be especially relevant in domains like Wikipedia articles, where headers and introductions look semantically similar, but the actual content (past the first max-context-length tokens) is very different. In an effort to circumvent this problem, we tried two approaches at a small model scale: | 2308.12284#84 | 2308.12284#86 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#86 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | • M1: Chunking long documents into max-context-length chunks, and averaging all-mini-LM-v6 embeddings across chunks to produce a final document embedding. • M2: Using Contriever [22] embeddings, where we chose the Contriever model because it is trained to determine if two sentences are from the same document, and therefore should be agnostic to position within a document. Both in terms of perplexity improvement at the end of training (see Figure A19) and efficiency (see Figure A18), we do not observe a significant difference between the OPT embedding space and embedding spaces M1 and M2 at the small model scale (125 million parameters). We note that M1 and M2 are significantly worse than all-mp-net-base-v2 and all-mini-LM-v6 at small scales and suffer from the same problem of pruning away long documents (compared to the OPT embedding space), so we expect these models to under-perform the OPT embedding space at the 6.7b scale. # 3https://www.sbert.net/docs/pretrained_models.html | 2308.12284#85 | 2308.12284#87 | 2308.12284 | [
"2006.05929"
]
|
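A sketch of the M1 variant described above: split a long document into fixed-size chunks, embed each chunk with all-MiniLM-L6-v2, and average. Chunking on whitespace words (rather than model tokens) and the chunk size are simplifying assumptions of this sketch.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def m1_embedding(document: str, chunk_words: int = 200) -> np.ndarray:
    """Average of chunk embeddings, so the whole document (not just its
    beginning) influences the final representation."""
    words = document.split()
    chunks = [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)] or [""]
    chunk_embs = model.encode(chunks, normalize_embeddings=True)
    return chunk_embs.mean(axis=0)

doc = "A very long article about many different topics. " * 500
print(m1_embedding(doc).shape)  # (384,) for all-MiniLM-L6-v2
```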
2308.12284#87 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A17 plot: legend has all-mpnet-base-v2, all-mini-lm-v6, and OPT; columns are Non Web Snapshots and Instruct+Answers (ppl); one row per model scale; x-axis is Selection Ratio (R).] Figure A17: Perplexity (y-axis) versus selection ratio R (x-axis) for different embedding spaces, when selecting data via D4. Across different 8m (top), 125m (middle) and 1.3b (bottom) model scales, we see that the SentenceTransformer embedding spaces outperform the OPT embedding space, but at the 6.7b model scale, we see that the OPT embedding space begins outperforming the all-Mini-LM-v6 embedding space. | 2308.12284#86 | 2308.12284#88 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#88 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | We were unable to run an "all-mp-net-base-v2" 6.7b experiment due to compute restrictions, but we note that the difference between "all-mini-lm-v6" and "all-mp-net-base-v2" across model scales and selection ratios is generally small, so we expect the OPT embedding space to outperform "all-mp-net-base-v2" at the 6.7b scale. [Figure A18 plot: panels are Non Web Efficiency and Instruct + Answers Efficiency; legend has OPT, all-mini-lm-v6, all-mp-net-base-v2, avg chunk all-mini-lm-v6, and contriever; y-axis is Efficiency Gain (% Compute Saved), x-axis is Model Size (log scale).] Figure A18: Comparison of naive efficiency for different embedding spaces, when using D4 as the data selection strategy. Similar to Figure A17, we see that all-mini-LM-v6 outperforms the OPT embedding space at small scale, but not at large (6.7b) model scale. | 2308.12284#87 | 2308.12284#89 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#89 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | [Figure A19 plot: legend has OPT, avg chunk all-mini-lm-v6, and contriever; panels are Non Web Snapshots and Instruct+Answers (ppl); x-axis is Selection Ratio (R).] Figure A19: Comparison of embedding spaces M1 (averaging embeddings of all-mini-LM-v6 across all chunks in a document, where a chunk is defined as 256 tokens) and M2 (embeddings from the Contriever model) with the OPT model embedding space, when using D4 as the selection strategy. We note that neither embedding space significantly outperforms the OPT model embedding space at the 125M scale. | 2308.12284#88 | 2308.12284#90 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#90 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | # A.8 Replicating Fixed Compute Results on C4 In this section, we briefly show our results for comparing data selection methods at the 125M scale, where the pre-training dataset is the C4 [41] dataset instead of CC-dedup. We see in Figure A20 that D4 generally outperforms other methods. These initial experiments motivate us to try comparing data selection methods on more heavily filtered web data (i.e. CC-dedup). [Figure A20 plot: legend has SSL Prototypes, SemDeDup, and D4; panels are Non-web snapshots and Instruct OPT; y-axis is perplexity difference relative to baseline; x-axis is Selection Ratio (R).] Figure A20: Comparison of data selection strategies with the OPT model embedding space, when using C4 as the starting training dataset. The x-axis is selection ratio R, and the y-axis is perplexity difference compared to baseline (the horizontal gray dotted line at 0.0 represents our baseline, i.e. when no data selection is done), so lower is better. Notice that D4 and SemDeDup match at 90%, because we use Rdedup = 0.9 and varied Rproto for this experiment. | 2308.12284#89 | 2308.12284#91 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#91 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | 31 # Investigating Duplicate-Driven Clusters In this subsection, we present a few examples of duplicate-driven clusters, which are clusters that are very dense and near centroids. We find that these clusters tend to be filled with semantic duplicates and/or duplicated text. We generally can find such extreme duplicate-driven clusters by looking at clusters whose standard deviation of cosine distance to cluster centroid is less than 0.03. This is essentially looking at clusters in the lower tail of the empirical CDF in Figure 7 (brown line). We present a few examples of such clusters below: Table A3: Nearest Neighbors to Cluster Centroid 682 Cosine Distance to Centroid Raw Text 0.03581655 0.03584063 0.036803484 0.037270606 The USGS (U.S. Geological Survey) publishes a set of the most com- monly used topographic maps of the U.S. called US ......... may have differences in elevation and topography, the historic weather at the two separate locations may be different as well. The USGS (U.S. Geological Survey) publishes a set of the most com- monly used topographic maps of the U.S. called US ......... may have differences in elevation and topography, the historic weather at the two separate locations may be different as well. The USGS (U.S. Geological Survey) publishes a set of the most com- monly used topographic maps of the U.S. called US ......... may have differences in elevation and topography, the historic weather at the two separate locations may be different as well. | 2308.12284#90 | 2308.12284#92 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#92 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Search Near Clinton County, OH: Trails National and State Parks City Parks Lakes Lookouts Marinas Historical Sites The USGS (U.S. Geolog- ical ......... may have differences in elevation and topography, the historic weather at the two separate locations may be different as well. Table A4: Nearest Neighbors to Cluster Centroid 975 Cosine Distance to Centroid Raw Text 0.011662006 0.012483656 0.012564898 0.012756169 The American Way, Inc. The American Way, Inc. is a suspended Californian business entity incorporated 19th August 1949. is listed as ......... for bulk data downloadsI want to request the removal of a page on your websiteI want to contact California Explore John St-Amour, Inc. John St-Amour, Inc. is a suspended Californian business entity incorporated 5th October 1962. is listed as the agent ......... for bulk data downloadsI want to request the removal of a page on your websiteI want to contact California Explore Joseph E. Barbour, Inc. Joseph E. Barbour, Inc. is a suspended Califor- nian business entity incorporated 27th January 1959. is listed as ......... for bulk data downloadsI want to request the removal of a page on your websiteI want to contact California Explore The Jolly Boys, Inc. The Jolly Boys, Inc. is a suspended Californian business entity incorporated 4th March 1955. is listed as ......... for bulk data downloadsI want to request the removal of a page on your websiteI want to contact California Explore | 2308.12284#91 | 2308.12284#93 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#93 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | 32 Table A5: Nearest Neighbors to Cluster Centroid 10715 Cosine Distance to Centroid Raw Text 0.035506427 0.036230028 0.036280274 0.036827266 Search hundreds of travel sites at once for hotel deals at Hotel Olympic Kornarou Square 44, Heraklion, Greece 34 m Bembo Fountain 262 ......... hundreds of travel sites to help you find and book the hotel deal at Hotel Olympic that suits you best. Search hundreds of travel sites at once for hotel deals at Hotel Estrella del Norte Juan Hormaechea, s/n, 39195 Isla, Cantabria, ......... travel sites to help you find and book the hotel deal at Hotel Estrella del Norte that suits you best. Search hundreds of travel sites at once for hotel deals at H10 Costa Adeje Palace Provided by H10 Costa Adeje Palace Provided ......... travel sites to help you find and book the hotel deal at H10 Costa Adeje Palace that suits you best. Search hundreds of travel sites at once for hotel deals at Hotel Miguel Angel by BlueBay Calle Miguel Angel 29-31, 28010 ......... sites to help you find and book the hotel deal at Hotel Miguel Angel by BlueBay that suits you best. | 2308.12284#92 | 2308.12284#94 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#94 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Table A6: Random Examples from Cluster 695 Cosine Distance to Cluster Centroid Raw Text 0.044178426 0.056984067 0.0534693 0.06892538 0.07246786 0.07147932 Eastern Florida State College nutritional sciences Learn about Eastern Florida State College nutritional sciences, and registering for electives. Which college degrees ......... System (IPEDS). If any stats on Hagerstown Community College career planning are incorrect, please contact us with the right data. Albany State University introduction to business Find info con- cerning Albany State University introduction to business, and registering for elective discussion sections ......... If any stats on Warren County Community College plant science major are incorrect, please contact us with the right data. Baldwin Wallace University cost per unit Learn about Baldwin Wallace University cost per unit, submitting required application forms, and follow-up scheduling. ......... (IPEDS). If any stats on San Jose State nursing degree programs are incorrect, please contact us with the right data. Niagara University managerial accounting Information about Niagara University managerial accounting, and registering for elective lectures. Which college degrees give you the ......... Sys- tem (IPEDS). If any stats on Midwestern University pharmacy tech program are incorrect, please contact us with the right data. Fanshawe College app download Learn about Fanshawe College app download, and registering for elective discussion sections and seminars. Which college degrees ......... Data System (IPEDS). If any stats on Stratford University cell biology are incorrect, please contact us with the right data. | 2308.12284#93 | 2308.12284#95 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#95 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Standish Maine Licensed Vocational Nurse LVN Jobs Find out about Standish, ME licensed vocational nurse LVN jobs options. Itâ s a smart ......... (IPEDS). If any stats on William Jewell College medical insurance coding are incorrect, please contact us with the right data. 33 Table A7: Random Examples from Cluster 8342 # Cosine Distance to Cluster Centroid Raw Text 0.027729392 0.036407113 0.017463684 0.02616191 0.028420448 0.037917078 Seenti - Bundi Seenti Population - Bundi, Rajasthan Seenti is a medium size village located in Bundi Tehsil of Bundi district, Rajasthan ......... 6 months. | 2308.12284#94 | 2308.12284#96 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#96 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Of 186 workers engaged in Main Work, 63 were cultivators (owner or co-owner) while 0 were Agricultural labourer. Kodunaickenpatty pudur - Salem Kodunaickenpatty pudur Pop- ulation - Salem, Tamil Nadu Kodunaickenpatty pudur is a large village located in Omalur Taluka of ......... 6 months. Of 3523 workers engaged in Main Work, 1500 were cultivators (owner or co-owner) while 1533 were Agricultural labourer. Chhotepur - Gurdaspur Chhotepur Population - Gurdaspur, Pun- jab Chhotepur is a medium size village located in Gurdaspur Tehsil of Gurdaspur district, Punjab ......... 6 months. Of 677 workers engaged in Main Work, 123 were cultivators (owner or co-owner) while 142 were Agricultural labourer. Maksudanpur - Azamgarh Maksudanpur Population - Azamgarh, Uttar Pradesh Maksudanpur is a small village located in Sagri Tehsil of Azamgarh district, Uttar ......... 6 months. Of 22 workers engaged in Main Work, 14 were cultivators (owner or co-owner) while 0 were Agricultural labourer. Karambavane - Ratnagiri Karambavane Population - Ratnagiri, Maharashtra Karambavane is a medium size village located in Chiplun Taluka of Ratnagiri district, Maharashtra ......... 6 months. Of 444 workers engaged in Main Work, 116 were cultivators (owner or co-owner) while 214 were Agricultural labourer. Barda - Purba Medinipur Barda Population - Purba Medinipur, West Bengal Barda is a large village located in Egra - I Block ......... 6 months. Of 1182 workers engaged in Main Work, 278 were cultivators (owner or co-owner) while 252 were Agricultural labourer. | 2308.12284#95 | 2308.12284#97 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#97 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | 34 Table A8: Nearest Neighbors to random validation point in C4 0.0(original validation text) Offers two child care opportunities to Charles County citizensâ the Port Tobacco Onsite Child Care Program and the Before and After School Child Care Program (BASCC). Supports parents through home visits to first time parents and by helping them search for child care, find resources for a child with social, emotional . . . . . . . . Special needs kids. Free to look, a fee to contact the providers. Hotline is staffed by highly-trained and friendly Child Care Consumer Education Specialists who offer both parents and providers invaluable information about child care, and referrals to local Child Care Resource and Referral agencies where they can receive individualized assistance. Child Care Options is a program of Options Community Services , a non-profit registered charity dedicated to making a difference in the South Fraser Region. Options is committed to empowering individuals, supporting families and promoting community health. Funding for Child Care Options is provided through British Columbiaâ s Ministry of Children . . . . . . . . | 2308.12284#96 | 2308.12284#98 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#98 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Rock. Child Care Options links families and child care providers in the communities of Delta, Surrey and White Rock by offering free consultation, support and child care referral services and subsidy support to parents seeking child care. Child care providers are supported through information, outreach, resource library, networking, and learning opportunities. Below are links to child development resources, both from within the department and from external sources. Child Development Division Publications Publications that can help you will help you follow your childâ s development (from birth to age five) so you can identify and address any issues early on. Resources to help you understand childrenâ s . . . . . . . . families to local resources and services. Specialists are available from 9 AM to 6 PM Monday â Friday. Services are confidential. | 2308.12284#97 | 2308.12284#99 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#99 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Caregivers can also visit http://www.helpmegrowvt.org/families.html to learn more about child development, discover developmental tips, and watch videos demonstrating childrenâ s developmental milestones (click a button to choose your childâ s age). National Domestic Violence Hotlines Programs that provide immedi- ate assistance for women and men who have experienced domestic abuse which may include steps to ensure the personâ s safety; short- term emotional support; assistance with shelter; legal information and advocacy; referrals for medical treatment; ongoing counseling and/or group support; and other related services. Hotline . . . . . . . . RP- 1500.1400-200) www.thehotline.org/ Toll Free Phone: 800-799-SAFE URL: https://www.thehotline.org/ Eligibility: Anyone affected by rela- tionship abuse. Services Provided: Available 24/7/365 via phone, TTY, and chat. Provides lifesaving tools and immediate support to enable victims to find safety and live lives free of abuse. Highly trained, ex- perienced advocates offer support, crisis intervention, education, safety planning, and referral services. | 2308.12284#98 | 2308.12284#100 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#100 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | 35 Table A9: Nearest Neighbors to random validation point in USPTO 0.0(original validation text) 0.1998944878578186 0.21122217178344727 . . . . 0.2133803367614746 . . SONET (Synchronous Optical NETwork) is a North American transmis- sion standard for optical communication systems. SDH (Synchronous Digital Hierarchy), a European transmission standard, is a minor variant of SONET. SONET defines a hierarchy of electrical signals referred to as Synchronous Transport Signals (STS). The STS hierarchy is built upon a basic signal . . . . . . . . the corresponding row and column numbers may include up to 18 comparison operations, which are onerous to implement, for example, in terms of the required logic circuitry. This problem is exacerbated at the upper levels of the STS hierarchy, where processing of multiple pointer values per data frame is performed. US20080109728A1 - Methods and Systems for Effecting Video Transi- tions Represented By Bitmaps - Google Patents Methods and Systems for Effecting Video Transitions Represented By Bitmaps Download PDF David Maymudes Multi-media project editing methods and systems are described. In one embodiment, a project editing system comprises a . multi-media editing application that is configured to . synchronization models for multimedia data US20120206653A1 (en) 2012-08-16 Efficient Media Processing US6658477B1 (en) 2003-12-02 Improving the control of streaming data through multiple processing modules US6212574B1 (en) 2001-04-03 User mode proxy of kernel mode operations in a computer operating system US7752548B2 (en) 2010-07-06 Features such as titles, transitions, and/or effects which vary according to positions Both the Ethernet II and IEEE 802.3 standards define the minimum frame size as 64 bytes and the maximum as 1518 bytes. This includes all bytes from the Destination MAC Address field through the Frame Check Sequence (FCS) field. The Preamble and Start Frame Delimiter fields are not included when . . . . . . . . frame. Dropped frames are likely to be the result of collisions or other unwanted signals and are therefore considered invalid. At the data link layer the frame structure is nearly identical. | 2308.12284#99 | 2308.12284#101 | 2308.12284 | [
"2006.05929"
]
|
2308.12284#101 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | At the physical layer different versions of Ethernet vary in their method for detecting and placing data on the media. A byte is a group of bits, usually eight. As memory capacities increase, the capacity of chip cards is often quoted in bytes rather than in bits as in the past. 36 | 2308.12284#100 | 2308.12284 | [
"2006.05929"
]
|
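Tying the cluster examples above back to the criterion stated at the start of this appendix section (clusters whose standard deviation of cosine distance to the centroid is below 0.03 tend to be duplicate-driven), here is a minimal sketch of that filter. The clustering inputs and variable names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def duplicate_driven_clusters(embs: np.ndarray, labels: np.ndarray,
                              centroids: np.ndarray, std_threshold: float = 0.03):
    """Return cluster ids whose std of cosine distance to the centroid is below the threshold."""
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    centroids = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    flagged = []
    for c in range(centroids.shape[0]):
        members = embs[labels == c]
        if len(members) == 0:
            continue
        dists = 1.0 - members @ centroids[c]
        if dists.std() < std_threshold:
            flagged.append(c)
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A tight (near-duplicate) cluster and a diffuse one.
    tight = rng.normal(0, 0.01, size=(100, 16)) + np.ones(16)
    diffuse = rng.normal(0, 1.0, size=(100, 16))
    embs = np.vstack([tight, diffuse])
    labels = np.array([0] * 100 + [1] * 100)
    centroids = np.vstack([tight.mean(axis=0), diffuse.mean(axis=0)])
    print(duplicate_driven_clusters(embs, labels, centroids))  # expect [0]
```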
|
2308.12033#0 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | arXiv:2308.12033v1 [cs.CL] 23 Aug 2023 # PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine Chenrui Zhang, Lin Liu*, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, Mingchen Cai; Meituan Inc., Beijing, China; *Beijing Jiaotong University, Beijing, China. chenrui.zhang@pku.edu.cn, [email protected], {wangjinpeng04,wangchuyuan,sunxiao10,wanghongyu15,caimingchen}@meituan.com | 2308.12033#1 | 2308.12033 | [
"2305.03495"
]
|
|
2308.12033#1 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | # Abstract As an effective tool for eliciting the power of Large Language Models (LLMs), prompting has recently demonstrated unprecedented abilities across a variety of complex tasks. To further improve the performance, prompt ensemble has attracted substantial interest for tackling the hallucination and instability of LLMs. However, existing methods usually adopt a two-stage paradigm, which requires a pre-prepared set of prompts with substantial manual effort, and is unable to perform directed optimization for different weak learners. In this paper, we propose a simple, universal, and automatic method named PREFER (PRompt Ensemble learning via Feedback-REflect-Refine) to address the stated limitations. Specifically, given the fact that weak learners are supposed to focus on hard examples during boosting, PREFER builds a feedback mechanism for reflecting on the inadequacies of existing weak learners. Based on this, the LLM is required to automatically synthesize new prompts for iterative refinement. Moreover, to enhance stability of the prompt effect evaluation, we propose a novel prompt bagging method involving forward and backward thinking, which is superior to majority voting and is beneficial for both feedback and weight calculation in boosting. Extensive experiments demonstrate that our PREFER achieves state-of-the-art performance in multiple types of tasks by a significant margin. | 2308.12033#0 | 2308.12033#2 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#2 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | We have made our code publicly available1. # Introduction Large Language Models (LLMs) have recently flourished across a variety of fields, demonstrating unprecedented abilities in a myriad of complex tasks (Zhao et al. 2023b; Ouyang et al. 2022). Trained with large-scale web data on massive parameters, LLMs show emergent abilities beyond the original linguistic competence (Wei et al. 2022a), which demonstrates tremendous versatility in both academia and industry. To elicit the power of pretrained LLMs directly or adapt LLMs to specific domains, various paradigms have been proposed, including prompt engineering (Qiao et al. 2022), p-tuning (Liu et al. 2021), and LoRA finetuning (Hu et al. 2021), etc. Due to the immense scale of the model parameters, finetuning all or even part of an LLM is costly and time-consuming. To this end, as a simple and effective paradigm, prompt engineering explores a fundamentally new way of invoking in- | 2308.12033#1 | 2308.12033#3 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#3 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | Input How to solve the rest? | Hard Examples Figure 1: High-level overview of feedback-reflect-refine paradigm. pt denotes the prompt at the t-th iteration. trinsic knowledge and reasoning ability of LLMs based on a pretrain-prompt-predict manner (Liu et al. 2023). Though promising, the na¨ıve prompting approaches are afflicted by several limitations. As generative language mod- els, LLMsâ output commonly has a large variance. For in- stance, the reasoning logic and predicted results could be contradictory in multiple runs, although the input prompts are fixed. In addition, LLMs suffer from the notoriously hal- lucination issue (Ji et al. 2023), leading to results that are plausible-sounding but factually incorrect or irrelevant to the inputs. Furthermore, the quality of LLMsâ output is suscep- tible to the given prompts, which entails substantial manual effort and domain expertise to find out the reliable prompts. As a promising solution to these issues, prompt ensem- ble learning has attracted substantial interest in the commu- nity very recently, demonstrating significant improvements in both effectiveness and stability across various tasks. As a representative work, PromptBoosting (Hou et al. 2023) applies the traditional ADABOOST (Freund and Schapire 1997) algorithm over a set of pre-defined prompts for text classification. BPE (Pitis et al. 2023) focuses on Chain-of- Thought (CoT) (Wei et al. 2022b) boosting and builds few- shot CoT prompts based on self-consistency (Wang et al. 2022). These efforts empirically demonstrate the strength of prompt ensembles for LLM-based tasks, yielding excep- *This work was done during the internship at Meituan. 1https://github.com/zcrwind/PREFER tional performance gains over single-prompt baselines. However, despite their success, existing prompt ensem- ble approaches, which typically adopt a two-stage process, have several limitations. First, they require a pre-prepared set of prompts in advance, which are either manually de- fined or generated by another language model with heavy parameters. This preliminary work is costly and laborious, often involving a trial-and-error or pre-evaluation process to ensure the quality of pre-defined prompts. | 2308.12033#2 | 2308.12033#4 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#4 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | Second, the two- stage paradigm fixes the prompts to be used in the ensemble process, limiting the adaptability and scalability of prompt boosting, as the prompts cannot be optimized jointly. Since the relationships between prompts are ignored during the iterative boosting process, the pre-defined prompts tend to be sub-optimal and susceptible. Moreover, existing methods conduct ensembles either in boosting or in bagging individ- ually, neglecting the potential benefits of combining the two worlds to enhance performance. To alleviate the above issues, we advocate that a smarter paradigm for prompt ensemble in the era of LLMs is ex- pected to be automatic, self-adaptive and joint-optimizable. Such paradigm reduces the need for manual effort and do- main expertise, as well as takes prompt relations into consid- eration for directed optimization. Accordingly, we propose a simple, automatic and universal approach called PREFER (PRompt Ensemble learning via Feedback-REflect-Refine), towards a more effective prompt ensemble via utilizing the generative and reflective capabilities that LLMs excel at (Madaan et al. 2023). As shown in Figure 1, our PREFER adopts a feedback-reflect-refine circle for prompt boosting. Concretely speaking, inspired by the fact that weak learn- ers pay more attention to hard examples via weight redis- tribution during boosting, we propose to transfer this hard- sample-oriented weighting into nature language feedback, which returns error information to the LLM for reflection. Hence, considering the reflection information, the LLM per- ceives the inadequacies of existing prompts and is able to generate new prompts to refine them purposefully. Attribute to the feedback-reflect-refine path, the LLM jointly opti- mizes the downstream tasks solving and prompt generation in an automatic manner. Iterating along this path, potential conflict and redundancy among prompts are reduced, which is vital for building a more stable and faster learner. Furthermore, to adequately unleash the ability of each prompt and further enhance the stability during boosting, we propose a bilateral bagging approach, which incor- porates forward and backward thinking for multi-source verification. Specifically, drawing inspiration from human decision-making, wherein uncertain answers are often re- solved through a process of elimination, we instruct the LLM to compute a confidence score for each response and subsequently filter out the most uncertain answers. | 2308.12033#3 | 2308.12033#5 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#5 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | Given the observed tendency of LLMs to overestimate confidence in their predictions (Zhao et al. 2021), our bilateral bag- ging approach assesses the responses from both forward and backward directions, in which the overconfidence bias can be counteracted subtly. The empirical results demonstrate the superiority of our bilateral bagging approach compared to other regular methods such as majority voting in both ef- fectiveness and efficiency. We conduct extensive experiments and in-depth case stud- ies on a number of tasks, including reasoning, topic classifi- cation, hate speech discrimination, etc. The empirical results testify the effectiveness of our PREFER approach. Moreover, PREFER shows superiority in both stability and efficiency compared to existing approaches. We will provide the source code for reproducibility in the supplementary material. Related Work Our work is conceptually related to several subareas of arti- ficial intelligent, including Large Language Models (LLMs), prompt engineering, and prompt ensemble learning. In this section, we briefly review the works in each subarea. Large Language Models Nowadays, Large Language Models (LLMs) have made rev- olutionary progress and posed significant impact on various artificial intelligent community (Zhao et al. 2023b; Ouyang et al. 2022). According to the scale law, LLMs demonstrate unprecedent power (called emergent abilities) with the rapid growth of model parameters and data volume (Wei et al. 2022a). For instance, the most prominent applications in- cluding ChatGPT and GPT-4 (OpenAI 2023) have shown surprising reasoning ability, human-like conversation skills, as well as a rich reserve of factual commonsense. Based on the surprising emergent abilities, a series of classical algo- rithms can evolve to a more intelligent version. In this paper, we provide a pilot work on ensemble algorithm as a prelim- inary study. We believe that our proposed approach could not only simply serve as a strong baseline to foster future research on prompt ensemble, but also shed light on the po- tential research direction towards improving classical algo- rithms with the power of LLMs. | 2308.12033#4 | 2308.12033#6 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#6 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | Prompt Engineering In order to invoke the power of LLMs, a series of ap- proaches have been proposed in the community, including parameter-efficient fine-tuning (Hu et al. 2021; Liu et al. 2021) and prompt engineering (Qiao et al. 2022; Liu et al. 2023), etc. Due to the heavy weight of LLMs, fully or even partly fine-tuning them is expensive and inefficient. Accord- ingly, as an out-of-the-box paradigm, prompt engineering (aka prompting) has emerged as a new approach for adapting pretrain-prompt-predict path for downstream tasks. Tremen- dous cutting-edge effort has been made towards this area to improve the performance of prompting. Concretely, prompt- ing adopts natural language as additional inputs, acting as instructions or hints to LLMs. For example, GPT2 (Rad- ford et al. 2019) allows for unsupervised learning of LLM on multiple tasks through handcrafted task-specific prompts. However, building prompts manually can be expensive, bi- ased and sub-optimal (Liu et al. 2023). Another line of works are devoted to conducting prompting in an automatic way. STaR (Zelikman et al. 2022) utilizes a simple loop to bootstrap LLMs with a self-taught manner, in which Chain- of-Thought (CoT) (Wei et al. 2022b) rationale is iteratively generated to hint the question answering process. | 2308.12033#5 | 2308.12033#7 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#7 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | Closer to Bilateral Bagging ayepdn yybiem Bilateral Prompt Bagging | Boosting {@,|2J} ayepdn yyBiem Feedback For Po, «) succeed, but «7? failed / (Np. How to solve the rest? Pocontains confusing words. Too coarse description 4 No guidance for evidence... | Iteration q cK i @ Prompt Weight 1 1 1 '® Boosting Error 5 7 ' '® Instance Weight ' Figure 2: The pipeline of PREFER. Given the initial prompt p0, LLM partially solves the problem via incorporating backward thinking. Then the error information will be used for prompt optimization through the feedback-reflect-refine process. Iterating this process and finally ensembling prompts based on evolved weights. our work, APO (Pryzant et al. 2023) iteratively optimizes the single prompt in a feedback manner, which treats the textual reflection information as gradient in classical deep learning. Prompt Ensemble Learning Prior studies have proven that LLMs have multiple reason- ing paths for a single problem, which could lead to dis- tinct outputs from identical inputs (Wang et al. 2022). To this end, prompt ensemble learning has been presented as a solution, which combines several individual prompts to ob- tain better stability and generalization performance. Boost- ing and bagging are two typical ensemble methods widely adopted in numerous classical tasks, while their adaptation on LLMs is still in its infancy. Current works for prompt boosting typically utilize a two-stage paradigm. Prompt- Boosting (Hou et al. 2023) has done a preliminary trial on this way, which conducts the traditional ADABOOST (Fre- und and Schapire 1997) algorithm over a pre-defined prompt set for text classification. On the other hand, existing prompt bagging approaches mainly rely on regular majority voting, which can be computationally intensive. Notably, BPE (Pitis et al. 2023) focuses on constructing few-shot CoT prompts based on self-consistency (Wang et al. 2022), which offers better performance than a single prompt in the case of in- troducing exponentially additional computation. In this pa- per, we propose a computation-efficiency prompt bagging approach inspired by the human ethology, which incorpo- rates prompt boosting for further performance improvement. | 2308.12033#6 | 2308.12033#8 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#8 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | # Our PREFER Approach xi â X denotes the input texts and yi â Y denotes the output label. It is noted that an initial prompt p0 is provided as the seed for the subsequent iteration. Instead of requiring any supervised fine-tuning (SFT) or reinforcement learning, our proposed PREFER utilizes out-of-box LLM API (e.g., ChatGPT or GPT-4) as the foundation model M for uni- versality and flexibility. As illustrated in Figure 2, our PRE- FER mainly contains two components, i.e. feedback-driven prompt boosting and bilateral prompt bagging, which will be elaborated in sections below. # Prompt Boosting via Feedback-Reflect-Refine Before delving into the technical details of the proposed prompt boosting approach, we first provide our design principle, based on the thinking about what characteristics should an intelligent prompt boosting have in the era of LLMs. Review that boosting algorithms combine several in- dividual weak learners to obtain better generalization per- formance. Considering the fact that weaker learners are sup- posed to pay more attention to hard samples during boost- ing, we advocate that an intelligent boosting algorithm is expected to understand what problems the previous weak learners cannot solve. That is, instead of building prompts individually, the relation among prompts should be consid- ered for better performance and faster convergence. In an- other vein, to reduce the manual effort, the prompt boost- ing process should be automatic, where each prompt can be constructed without manual intervention. Furthermore, the prompt boosting should be universal and adaptive, for em- powering any prompting-based task with the superiority of ensemble learning seamlessly. Preliminaries In this section, we introduce preliminaries of our PREFER approach, including the problem formulation and the dis- mantling of key components. Considering a reasoning or classification task driven by LLMs, given the training data D;, = U;{(xi,yi)}, the goal of the proposed PREFER is to automatically construct a prompt set P = J, {pz} along with prompt weights LU, {Ax} via LLM-augmented ensemble learning, which can then be utilized cooperatively for the subsequent inference. | 2308.12033#7 | 2308.12033#9 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#9 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | Here Our proposed PREFER embraces all the above design principles, towards a simple, automatic and adaptive prompt ensemble paradigm. Inspired by the classical boosting al- gorithm such as ADABOOST (Freund and Schapire 1997) and iterative prompting algorithms (Pryzant et al. 2023), we adopt an iterative manner to build the prompt set where each prompt is treated as a weak learner. As illustrated in Fig- ure 2, acting as a weak learner, each prompt can only han- dle part of the instance space, where new prompts will be added to expand the solving space by introducing more in- Listing 1: solving prompt # Task Given two sentences, determine whether sentence 2 provides an answer to the question posed by sentence 1. # Output format Explain your reasoning process in one sentence and Answer "Yes" or "No" as the label. # Prediction Sentence 1: {text1} Sentence 2: {text2} Label:[] Listing 2: feedback prompt Iâ m trying to write a Textual Entailment task prompt. My current prompt is: {prompt} But this prompt gets the following examples wrong: {error_info} Give {num_feedbacks} reasons why the prompt could have gotten these examples wrong. Wrap each reason with <START> and <END>. | 2308.12033#8 | 2308.12033#10 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#10 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | formation. Based on the error-ambiguity decomposition of ensemble learning (Opitz and Shavlik 1995), the ensemble error mathematically contains two parts: Eensemble = ¯E â ¯A (1) where ¯E and ¯A respectively denote the average error and the average ambiguity (also called diversity) of individual weak learners. Based on Eq.(1), the ensemble performance is pos- itively correlated with both the accuracy and diversity of weak learners. Considering this requirement, the prompt in each iteration is supposed to focus on the hard examples that the prompts in previous iterations cannot handle. Inspired by the way human reflect and refine for improving performance when tackling difficult tasks, we propose a feedback-reflect- refine pipeline, asking the LLM to consider the relation of prompts in the iteration, generate new informative prompts, and optimize them jointly. Concretely speaking, we define two types of prompt tem- plates, namely the solving prompt and the feedback prompt, which are respectively responsible for solving downstream tasks and conducting the feedback process. Fol- lowing In-Context Learning (ICL) (Dai et al. 2022), we format both types of prompts with the component of the instruction, demonstration and output format. Exemplary cases of these two templates are illustrated in Listing 1 and Listing 2, respectively. Given the initial seed prompt p0 and the corresponding performance, we build the feedback prompt based on the feedback template and the wrong exam- ples. This is reminiscent of the gradient in deep learning op- timization, which indicates the direction of model optimiza- tion, the key difference lies that the feedback form changes from numerical into textual. The feedback prompt will then be fed to the LLM M for self-reflecting, and M provides a series of reasons why the current prompt pt can solve some examples well but not others. Based on the reflection, the LLM is asked to generate new prompts in connection with hard examples specified in the previous iteration. In detail, the sampled wrong examples and corresponding textual la- bels are combined to error info in Listing 2. Mathemat- ically, this feedback-reflect-refine process can be formulated via the Bayesian theory: P(pt|X , Y, ptâ 1) = P(Rt|X , Y, ptâ 1) · P(pt|Rt) | 2308.12033#9 | 2308.12033#11 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#11 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | here Rt denotes the reflection of the LLM M at the t-th iter- ation. It is noted that our PREFER only modifies the instruc- tion of the solving prompt, while other parts remain unchanged. Close to our work, APO (Pryzant et al. 2023) also con- ducts a feedback-based mechanism for prompt optimization. Nevertheless, there are several intrinsic differences between such iterative prompting approach and our PREFER. First, APO aims to search for a single prompt covering the largest possible solution space, while our PREFER organizes a set of prompts via ensemble learning, which works in tandem to cover multiple sub-spaces. Second, our PREFER proposes an effective bagging approach to reduce the variance of the LLM, which is superior to the regular techniques such as beam search or Monte Carlo search in APO. Experimental results demonstrate that our PREFER outperforms APO by a quite large margin with less computational cost and higher stability. Bilateral Prompt Bagging As shown in Eq.(1), the quality and stability of weak learn- ers is essential to the ensemble performance. Due to the generative property of language model, LLMsâ outputs are highly sensitive to the input prompts, which affects the sta- bility of both the feedback and weight calculation process. To alleviate this issue, direct solutions include majority vot- ing or beam search, which is commonly used in the commu- nity (Wang et al. 2022; Li et al. 2023). However, these meth- ods are computationally intensive, especially for LLMs with massive parameters. Accordingly, to enhance the ability and stability of each prompt with limited calculation burden, we further propose a bagging approach called bilateral prompt bagging, which draws inspiration from human behavior of utilizing forward and backward thinking for tackling diffi- cult tasks. Concretely speaking, humans commonly adopt the pro- cess of elimination when they are not sure about the decision making. Inspired by this, we advocate that similar spirits can be utilized in the prompt bagging. | 2308.12033#10 | 2308.12033#12 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#12 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | In each iteration, the LLM M is required to evaluate its answerâ s confidence by utilizing the generated prompt pt followed by a confidence evaluation clause. When the evaluation result is not confi- dent enough, the reverse thinking takes effect via conduct- ing elimination process. In detail, we consider the quantita- tive confidence score evaluation in both forward and back- ward thinking. Take the classification task as an example, in the forward evaluation, M is required to measure the confi- dence that each candidate answer is the correct one. As for the backward evaluation, M is required reversely to measure Algorithm 1: Our PREFER Algorithm Input: Training data Dj, = U;{(ai, yi) }, the LLM M, the seed prompt po, the prompt templates Tzo1ying aNd Tzeeaback Output: the result prompt set P = UL), {p,} and their weights U, {Ac}. the reflection set J, {Ri} U, {Ac}. the reflection set J, {Ri} 1: Set the initial data weight to w i = 1/|Dtr|, â i â {0, · · · , |Dtr|}, P = {p0}. 2: for t = 0 to N do 3: 4: 5: 6: 7: 8: # Generate new pt with {M, reflection Rtâ 1} end if Solve target tasks with {p;, Tso1vings â i } Conduct bilateral bagging Build feedback prompt with {error_info, Treedback } Perform feedback and get the reflection R; Compute weighted error as Eq.(4) Update the weight on p; by Eq.(5) Update the instance weights in D;, by Eq.(6) fol- lowed by re-normalization P=PUp,R=RUR, for return L),{p:}, Ut Ach, U, {Re} 9: 10: 11: 12: | # 13; 14: end for 15: return | 2308.12033#11 | 2308.12033#13 | 2308.12033 | [
"2305.03495"
]
|
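A condensed Python sketch of Algorithm 1, under stated assumptions: `solve` (applies T_solving plus bilateral bagging), `reflect` (builds the feedback prompt and returns the reflection), and `refine` (turns a reflection into the next prompt) are hypothetical helpers around the LLM, and `NUM_CLASSES` is likewise an assumption.

```python
import numpy as np

NUM_CLASSES = 3  # assumption: e.g., a 3-way NLI label space

def prefer(llm, train_set, seed_prompt, n_rounds):
    """Condensed sketch of Algorithm 1 (feedback-reflect-refine boosting)."""
    n = len(train_set)
    w = np.full(n, 1.0 / n)                       # step 1: uniform instance weights
    p_t, P, lams, refs = seed_prompt, [], [], []
    for t in range(n_rounds):
        preds = [solve(llm, p_t, x) for x, _ in train_set]         # steps 6-7
        miss = np.array([yh != y for yh, (_, y) in zip(preds, train_set)], float)
        err = np.clip((w * miss).sum() / w.sum(), 1e-6, 1 - 1e-6)  # Eq. (4), clipped
        lam = np.log((1 - err) / err) + np.log(NUM_CLASSES - 1)    # Eq. (5)
        w = w * np.exp(lam * miss)                                 # Eq. (6)
        w = w / w.sum()                                            # re-normalization
        R_t = reflect(llm, p_t, train_set, miss)                   # steps 8-9
        P.append(p_t); lams.append(lam); refs.append(R_t)          # step 13
        p_t = refine(llm, R_t)                                     # steps 3-5, next round
    return P, lams, refs
```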
2308.12033#13 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | For notational simplicity, we denote the confidence scores corresponding to the forward and backward evaluations by S+ and S−, respectively. After these, the final probability can be calculated by combining S+ and S− in a subtractive fashion:

$$\hat{y} = \arg\max_{c} \frac{S^{+}_{c} - S^{-}_{c}}{\sum_{j} \big( S^{+}_{j} - S^{-}_{j} \big)} \quad (3)$$

here ŷ denotes the predicted answer, and c and j denote the indexes of candidate answers. It is noted that LLMs tend to evaluate confidence scores overconfidently (Zhao et al. 2021), while our proposal circumvents this inadequacy via positive and negative offsets. We believe that such a paradigm can also shed light on the community of LLM calibration (Zhao et al. 2023a). Owing to the introduction of the reverse-thinking mechanism, the accuracy-versus-efficiency dilemma of prompt bagging can be largely alleviated. Experimental results explicitly show that such bilateral bagging outperforms regular methods (e.g., majority voting) in both effectiveness and efficiency. Overall Algorithm To sum up, we summarize the proposed PREFER in Algorithm 1. Basically, our PREFER follows the pipeline of the classical ADABOOST (Freund and Schapire 1997) algorithm, while enhancing it with feedback-reflect-refine boosting and bilateral prompt bagging. Both branches co-adapt and cooperate for automatic prompt-set optimization. | 2308.12033#12 | 2308.12033#14 | 2308.12033 | [
"2305.03495"
]
|
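A small sketch of the subtractive combination in Eq. (3), assuming `s_plus` and `s_minus` are dicts mapping each candidate answer to its forward and backward confidence; the normalization over j does not change the arg max, so it is kept only to expose the "final probability" view.

```python
def bilateral_decision(s_plus: dict, s_minus: dict):
    """Combine forward (S+) and backward/exclusion (S-) confidences, Eq. (3)."""
    raw = {c: s_plus[c] - s_minus[c] for c in s_plus}   # subtractive offsets
    z = sum(raw.values())                               # normalizer over j
    probs = {c: v / z for c, v in raw.items()} if z else raw
    y_hat = max(probs, key=probs.get)                   # arg max over candidates
    return y_hat, probs

# Toy usage: 'entailment' wins once exclusion confidences are subtracted.
y, p = bilateral_decision(
    {"entailment": 0.7, "neutral": 0.6, "contradiction": 0.2},
    {"entailment": 0.1, "neutral": 0.4, "contradiction": 0.8},
)
```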
2308.12033#14 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | In detail, the weighted ensemble error in the t-th iteration is calculated as:

$$\mathrm{error}^{(t)} = \frac{\sum_{i=1}^{|D_{tr}|} w_i^{(t)} \, \mathbb{I}\big(y_i \neq M(p_t, x_i)\big)}{\sum_{i=1}^{|D_{tr}|} w_i^{(t)}} \quad (4)$$

here $\mathbb{I}$ is the indicator function. Moreover, the weight of each learner is updated based on the above error information as:

$$\lambda_t = \log \frac{1 - \mathrm{error}^{(t)}}{\mathrm{error}^{(t)}} + \log\big(|\mathcal{C}| - 1\big) \quad (5)$$

where $|\mathcal{C}|$ denotes the number of candidate classes, following the multi-class ADABOOST formulation. Finally, the instance weights in the training dataset D_tr can be updated by:

$$w_i^{(t+1)} = w_i^{(t)} \cdot \exp\Big(\lambda_t \, \mathbb{I}\big(y_i \neq M(p_t, x_i)\big)\Big) \quad (6)$$

here ∀i ∈ {0, · · · , |D_tr|} indexes the training examples. | 2308.12033#13 | 2308.12033#15 | 2308.12033 | [
"2305.03495"
]
|
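A worked toy example of Eqs. (4)-(6); the four instances, the miss pattern, and |C| = 3 are all illustrative assumptions.

```python
import numpy as np

# Toy illustration of Eqs. (4)-(6): 4 instances, |C| = 3 classes (assumptions).
w    = np.array([0.25, 0.25, 0.25, 0.25])  # current instance weights w_i^(t)
miss = np.array([1.0, 0.0, 0.0, 1.0])      # I(y_i != M(p_t, x_i))

error = (w * miss).sum() / w.sum()                   # Eq. (4) -> 0.5
lam   = np.log((1 - error) / error) + np.log(3 - 1)  # Eq. (5) -> log 2 ~ 0.693

w_new = w * np.exp(lam * miss)                       # Eq. (6): misses doubled
w_new /= w_new.sum()                                 # re-normalization
print(error, lam, w_new)                             # 0.5, 0.693, [1/3 1/6 1/6 1/3]
```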
2308.12033#15 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | Once the process of Algorithm 1 is complete, the optimized prompts ∪_t {p_t} along with their weights ∪_t {λ_t} are obtained, which can then be utilized in applications via weighted decision making (a sketch follows the baselines below). Moreover, the intermediate reflections ∪_t {R_t} naturally provide abundant interpretability for prompt boosting. # Experiments Experimental Settings Datasets We conduct experiments on a wide range of tasks covering natural language inference and classification: • Natural Language Inference: SNLI (Bowman et al. 2015), MNLI (Williams, Nangia, and Bowman 2017), and RTE (Dagan, Glickman, and Magnini 2005) for textual entailment inference; QNLI (Rajpurkar et al. 2016) for question-answering inference. | 2308.12033#14 | 2308.12033#16 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#16 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | • Natural Language Classification: Ethos (Mollas et al. 2020) for hate speech detection; Liar (Wang 2017) for fake news classification; ArSarcasm (Farha and Magdy 2020) for Arabic sarcasm detection. Compared Baselines To demonstrate the superiority of our PREFER approach, we compare it with several state-of-the-art baselines. As the work closest to our proposal, PromptBoosting (Hou et al. 2023) runs the traditional ADABOOST algorithm over a pre-defined prompt set for text classification. As a representative iterative prompting method, APO (Pryzant et al. 2023) optimizes a single prompt iteratively, where the performance of the previous prompt is used to form a natural language "gradient" that guides the prompt optimization. | 2308.12033#15 | 2308.12033#17 | 2308.12033 | [
"2305.03495"
]
|
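Picking up the weighted decision making mentioned above: a minimal sketch of inference with the boosted prompt set, where each prompt votes with its weight λ_t as in standard ADABOOST; `solve` is the same hypothetical helper as before.

```python
from collections import defaultdict

def ensemble_predict(llm, prompts, lambdas, x):
    """Hypothetical weighted-vote inference with the boosted prompt set:
    each prompt p_t contributes its answer with weight lambda_t."""
    votes = defaultdict(float)
    for p_t, lam in zip(prompts, lambdas):
        votes[solve(llm, p_t, x)] += lam   # solve() applies bilateral bagging
    return max(votes, key=votes.get)       # weighted majority decision
```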
2308.12033#17 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | Moreover, we also conduct single-prompt and Chain-of-Thought (CoT) enhanced single-prompt experiments, to assess the superiority of our PREFER over vanilla and optimized non-iterative prompting works. Lastly, we compare against a variant of our PREFER that rewrites synonymous prompts for boosting instead of using the feedback-reflect-refine paradigm, to ascertain the utility of LLMs' reflective ability. Running settings To make a fair comparison, we closely follow the experimental protocols set up in APO, with our own data split. In detail, we mainly develop and evaluate our PREFER in few-shot settings. For each task, we randomly sample k examples from the original training dataset to build the k-shot training set D_tr. By default, k is set to 50 in this paper. We use F1-score for performance evaluation (see the sketch after Table 1).

| Method | SNLI | MNLI | QNLI | RTE | Ethos | Liar | ArSarcasm |
|---|---|---|---|---|---|---|---|
| Single Prompt | 0.587 | 0.660 | 0.660 | 0.720 | 0.833 | 0.535 | 0.511 |
| Single Prompt (CoT) | 0.575 | 0.685 | 0.660 | 0.731 | 0.804 | 0.549 | 0.525 |
| Synonym Ensemble | 0.580 | 0.746 | 0.720 | 0.659 | 0.812 | 0.572 | 0.569 |
| PromptBoosting | 0.619 | 0.574 | 0.631 | 0.673 | - | - | - |
| APO | - | - | - | - | 0.964 | 0.663 | 0.873 |
| APO* | - | - | - | - | 0.947 | 0.658 | 0.639 |
| Ours | 0.647 | 0.767 | 0.793 | 0.753 | 0.963 | 0.744 | 0.739 |

| 2308.12033#16 | 2308.12033#18 | 2308.12033 | [
"2305.03495"
]
|
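A small sketch of the few-shot setup described above; the sampling seed and the macro-averaging mode are our assumptions (the paper only specifies k = 50 and F1-score here).

```python
import random
from sklearn.metrics import f1_score

def build_k_shot(dataset, k=50, seed=0):
    """Randomly sample k examples to form the k-shot training set D_tr
    (k = 50 by default, matching the paper; the seed is an assumption)."""
    rng = random.Random(seed)
    return rng.sample(list(dataset), k)

def evaluate(preds, golds):
    # F1-score is used for evaluation; 'macro' averaging is an assumption,
    # since the averaging mode is not specified in this section.
    return f1_score(golds, preds, average="macro")
```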
2308.12033#18 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | Table 1: Main experimental results of our PREFER and the compared approaches. APO and APO* respectively denote the reported and our reproduced results of Automatic Prompt Optimization (Pryzant et al. 2023). Bold: best; underline: runner-up (results are based on our reproduction).

| Method | SNLI | MNLI | QNLI | RTE | Ethos | Liar | Sarcasm |
|---|---|---|---|---|---|---|---|
| − Feedback | 0.580↓ | 0.746 | 0.720 | 0.659↓ | 0.812↓ | 0.572↓ | 0.572↓ |
| − Bagging | 0.640 | 0.713 | 0.747 | 0.740 | 0.947 | 0.718 | 0.653↓ |
| Voting | 0.626 | 0.733 | 0.767 | 0.760 | 0.938 | 0.701 | 0.649↓ |
| Ours | 0.647 | 0.767 | 0.793 | 0.753 | 0.963 | 0.744 | 0.739 |

Table 2: Experimental results of the ablation study. ↓ indicates a severe performance drop (more than 10%).

# Figure 3: Training process comparison for APO and ours (performance versus optimization step).

# Experimental Results In view of the key proposals in our PREFER approach, we are naturally motivated to ask the following research questions. | 2308.12033#17 | 2308.12033#19 | 2308.12033 | [
"2305.03495"
]
|
2308.12033#19 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | • RQ1. Is prompt ensemble learning really useful for improving LLMs' performance? • RQ2. Are the feedback-driven boosting and the bilateral bagging mechanisms both useful for prompt synthesis in ensemble learning? • RQ3. Is our proposal superior to iterative approaches because it expands the sample space? To answer these questions, we conduct extensive experiments; the results can be found in Table 1. For the first question, we compare the ensemble-based approaches (including PromptBoosting and our PREFER) with the single-prompt-based approaches. As shown in the experimental results, when compared to the vanilla (Line 1) and CoT-enhanced single-prompt approaches (Line 2), both PromptBoosting and our PREFER outperform them by a significant margin. For example, our PREFER outperforms the second-best approach by up to 6.3% on the QNLI dataset and 13.1% on the Liar dataset. The general trend apparent from the results in Table 1 is that the more difficult the task, the better ensemble learning performs. We conjecture that this is because the feedback-reflect-refine paradigm yields greater improvement on harder tasks, while its marginal gain diminishes on easier tasks. It is also noted that the experimental results change only marginally when adding Chain-of-Thought (CoT) to the single-prompt approach. To explore the second research question, we compare our PREFER with both the two-stage ensemble approach PromptBoosting (Line 4) and the synonym-rewriting ensemble approach (Line 3). For PromptBoosting, we use the publicly available code of (Hou et al. 2023) and conduct experiments following its hyperparameter setting. For the synonym-rewriting ensemble, we rewrite prompts while preserving their semantics and then apply regular ensemble learning similar to our PREFER. As demonstrated in Table 1, our approach consistently outperforms the two ensemble approaches by a significant margin, reaching around 5% to 35% relative improvement on most datasets. We attribute the superiority of PREFER to its feedback-reflect-refine mechanism as well as the design of the joint optimization paradigm, which naturally captures relations among weak learners. | 2308.12033#18 | 2308.12033#20 | 2308.12033 | [
"2305.03495"
]
|