# LARGE LANGUAGE MODELS AS OPTIMIZERS

Chengrun Yang*, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen*
Google DeepMind (* equal contribution)

arXiv:2309.03409v2 [cs.LG], 7 Dec 2023. © Google DeepMind

# ABSTRACT

Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradients imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from a prompt that contains previously generated solutions with their values; the new solutions are then evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization, where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.

Figure 1: Prompt optimization on GSM8K (Cobbe et al., 2021) and BBH (Suzgun et al., 2022) movie_recommendation; panels show (a) GSM8K and (b) BBH movie_recommendation. The optimization on GSM8K has pre-trained PaLM 2-L as the scorer and the instruction-tuned PaLM 2-L (denoted PaLM 2-L-IT) as the optimizer; the optimization on BBH movie_recommendation has text-bison as the scorer and PaLM 2-L-IT as the optimizer. Each dot is the average accuracy across all (up to 8) instructions generated in the single step, and the shaded region represents the standard deviation. See Section 5 for more details on the experimental setup.

Table 1: Top instructions with the highest GSM8K zero-shot test accuracies from prompt optimization with different optimizer LLMs. All results use the pre-trained PaLM 2-L as the scorer.

| Source | Instruction | Acc |
| --- | --- | --- |
| Baseline (Kojima et al., 2022) | Let's think step by step. | 71.8 |
| Baseline (Zhou et al., 2022b) | Let's work this out in a step by step way to be sure we have the right answer. | 58.8 |
| Baseline | (empty string) | 34.0 |
| Ours (PaLM 2-L-IT) | Take a deep breath and work on this problem step-by-step. | 80.2 |
| Ours (PaLM 2-L) | Break this down. | 79.9 |
| Ours (gpt-3.5-turbo) | A little bit of arithmetic and a logical approach will help us quickly arrive at the solution to this problem. | 78.5 |
| Ours (gpt-4) | Let's combine our numerical command and clear thinking to quickly and accurately decipher the answer. | 74.5 |
# 1 INTRODUCTION

Optimization is critical in all areas. Many optimization techniques are iterative: the optimization starts from an initial solution, then iteratively updates the solution to optimize the objective function (Amari, 1993; Qian, 1999; Kingma & Ba, 2015; Bäck & Schwefel, 1993; Rios & Sahinidis, 2013; Reeves, 1993). The optimization algorithm typically needs to be customized for an individual task to deal with the specific challenges posed by the decision space and the performance landscape, especially for derivative-free optimization.

In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to utilize large language models (LLMs) as optimizers. With the advancement of prompting techniques, LLMs have achieved impressive performance in a variety of domains (Wei et al., 2022; Kojima et al., 2022; Wang et al., 2022; Zhou et al., 2022a; Madaan et al., 2023; Bai et al., 2022; Chen et al., 2023e). Their ability to understand natural language opens a new possibility for optimization: instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions. Optimization with LLMs enables quick adaptation to different tasks by changing the problem description in the prompt, and the optimization process can be customized by adding instructions that specify the desired properties of the solutions.

To demonstrate the potential of LLMs for optimization, we first present case studies on linear regression and the traveling salesman problem, two classic optimization problems that underpin many others in mathematical optimization, computer science, and operations research. On small-scale optimization problems, we show that LLMs are able to find good-quality solutions simply through prompting, and sometimes match or surpass hand-designed heuristic algorithms.

Next, we demonstrate the ability of LLMs to optimize prompts, where the optimization goal is to find a prompt that maximizes the task accuracy. Specifically, we focus on natural language processing tasks where both the task input and output are in text format. LLMs are shown to be sensitive to the prompt format (Zhao et al., 2021; Lu et al., 2021; Wei et al., 2023; Madaan & Yazdanbakhsh, 2022); in particular, semantically similar prompts may have drastically different performance (Kojima et al., 2022; Zhou et al., 2022b; Zhang et al., 2023), and the optimal prompt formats can be model-specific and task-specific (Ma et al., 2023; Chen et al., 2023c). Therefore, prompt engineering is often important for LLMs to achieve good performance (Reynolds & McDonell, 2021). However, the large and discrete prompt space makes optimization challenging, especially when only API access to the LLM is available. Following prior work on continuous and discrete prompt optimization (Lester et al., 2021; Li & Liang, 2021; Zhou et al., 2022b; Pryzant et al., 2023), we assume a training set is available to compute the training accuracy as the objective value for optimization, and we show in experiments that optimizing the prompt for accuracy on a small training set is sufficient to reach high performance on the test set.

The prompt to the LLM serves as a call to the optimizer, and we name it the meta-prompt.
Figure 3 shows an example. The meta-prompt contains two core pieces of information. The first piece is previously generated prompts with their corresponding training accuracies. The second piece is the optimization problem description, which includes several exemplars randomly selected from the training set to exemplify the task of interest. We also provide instructions for the LLM to understand the relationships among different parts and the desired output format. Different from recent work on using LLMs for automatic prompt generation (Zhou et al., 2022b; Pryzant et al., 2023), each optimization step in our work generates new prompts that aim to increase the test accuracy based on a trajectory of previously generated prompts, instead of editing one input prompt according to natural language feedback (Pryzant et al., 2023) or requiring the new prompt to follow the same semantic meaning (Zhou et al., 2022b). Making use of the full optimization trajectory, OPRO enables the LLM to gradually generate new prompts that improve the task accuracy throughout the optimization process, even when the initial prompts have low task accuracies.

We conduct a comprehensive evaluation on several LLMs, including text-bison (available at https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models) and PaLM 2-L in the PaLM 2 model family (Anil et al., 2023), as well as gpt-3.5-turbo and gpt-4 in the GPT model family (available at http://openai.com/api/; this work uses gpt-3.5-turbo-0613 and gpt-4-0613).
We optimize prompts on GSM8K (Cobbe et al., 2021) and Big-Bench Hard (Suzgun et al., 2022), two reasoning benchmarks where prompting techniques have achieved remarkable performance breakthroughs (Wei et al., 2022; Kojima et al., 2022; Suzgun et al., 2022). Starting from initial prompts with low task accuracies, we show that all LLMs in our evaluation are able to serve as optimizers, consistently improving the performance of the generated prompts through iterative optimization until convergence (see Figure 1). In particular, while these LLMs generally produce instructions of different styles (see Table 1), with zero-shot prompting, their best generated instructions match the few-shot chain-of-thought prompting performance when applied to PaLM 2-L (Anil et al., 2023), outperforming the zero-shot performance with human-designed prompts by up to 8% on GSM8K. Additionally, we observe that the OPRO-optimized prompts transfer to other benchmarks of the same domain and also deliver notable performance gains.

# 2 OPRO: LLM AS THE OPTIMIZER

Figure 2 illustrates the overall framework of OPRO. In each optimization step, the LLM generates candidate solutions to the optimization task based on the optimization problem description and previously evaluated solutions in the meta-prompt. The new solutions are then evaluated and added to the meta-prompt for the subsequent optimization process. The optimization process terminates when the LLM is unable to propose new solutions with better optimization scores, or when a maximum number of optimization steps has been reached. We first outline the desired features of LLMs for optimization, then describe the key design choices based on these desiderata.

Figure 2: An overview of the OPRO framework. Given the meta-prompt as the input, the LLM (acting as the optimizer) generates new solutions to the objective function; the new solutions and their scores are then added to the meta-prompt for the next optimization step, and the top solutions are returned when the optimization finishes. The meta-prompt contains the solution-score pairs obtained throughout the optimization process, as well as a natural language description of the task and (in prompt optimization) a few exemplars from the task. See Figure 3 for a sample meta-prompt for prompt optimization.
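To make this loop concrete, below is a minimal sketch of the OPRO outer loop in Python. It is an illustration of the procedure described above, not the released implementation; `call_llm` (an LLM API call) and `evaluate` (the objective-function evaluator) are hypothetical stand-ins, and the default step counts are assumptions.

```python
# A minimal sketch of the OPRO outer loop. Assumed helpers (not part of the
# paper's code): call_llm(prompt, temperature) -> str, evaluate(solution) -> float.
def opro(task_description, initial_solutions, num_steps=150,
         num_samples_per_step=8, top_k=20):
    # The optimization trajectory: (solution, score) pairs.
    trajectory = [(s, evaluate(s)) for s in initial_solutions]
    for _ in range(num_steps):
        # Keep only the top-k solutions, sorted ascending so the best come last.
        trajectory = sorted(trajectory, key=lambda p: p[1])[-top_k:]
        meta_prompt = build_meta_prompt(task_description, trajectory)
        # Sample several candidates per step to improve optimization stability.
        for _ in range(num_samples_per_step):
            candidate = call_llm(meta_prompt, temperature=1.0)
            trajectory.append((candidate, evaluate(candidate)))
    return max(trajectory, key=lambda p: p[1])  # best (solution, score) pair

def build_meta_prompt(task_description, trajectory):
    # Solution-score pairs in ascending order, followed by the task description
    # and a meta-instruction asking for a new, higher-scoring solution.
    pairs = "\n\n".join(f"text: {sol}\nscore: {score:.1f}"
                        for sol, score in trajectory)
    return (f"{pairs}\n\n{task_description}\n\n"
            "Write your new text that is different from the old ones and "
            "has a score as high as possible.")
```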
2.1 DESIRABLES OF OPTIMIZATION BY LLMS

Making use of natural language descriptions. The main advantage of LLMs for optimization is their ability to understand natural language, which allows people to describe their optimization tasks without formal specifications. For instance, in prompt optimization, where the goal is to find a prompt that optimizes the task accuracy, the task can be described with a high-level text summary along with input-output examples.

Trading off exploration and exploitation. The exploration-exploitation trade-off is a fundamental challenge in optimization, and it is important for LLMs serving as optimizers to balance these two competing goals. This means that the LLM should be able to exploit promising areas of the search space where good solutions are already found, while also exploring new regions of the search space so as to not miss potentially better solutions.

2.2 META-PROMPT DESIGN

As the input to the LLM that acts as the optimizer, the meta-prompt contains the following two essential parts.

Optimization problem description. The first part is the text description of the optimization problem, including the objective function and solution constraints. For example, for prompt optimization, the LLM can be instructed to "generate a new instruction that achieves a higher accuracy", and we denote such instructions in the meta-prompt as meta-instructions. We can also provide customized meta-instructions as an informal regularization of the generated solutions, such as "the instruction should be concise and generally applicable".
Optimization trajectory. Besides understanding natural language instructions, LLMs have also been shown to recognize patterns from in-context demonstrations (Wei et al., 2023; Madaan & Yazdanbakhsh, 2022; Mirchandani et al., 2023). Our meta-prompt makes use of this property and instructs the LLM to leverage the optimization trajectory when generating new solutions. Specifically, the optimization trajectory includes past solutions paired with their optimization scores, sorted in ascending order. Including the optimization trajectory in the meta-prompt allows the LLM to identify similarities among solutions with high scores, encouraging the LLM to build upon existing good solutions to construct potentially better ones, without the need to explicitly define how the solution should be updated.

2.3 SOLUTION GENERATION

At the solution generation step, the LLM generates new solutions with the meta-prompt as input. The following are the key optimization challenges we address at this stage.

Optimization stability. In the optimization process, not all solutions achieve high scores and monotonically improve over prior ones. Due to the sensitivity of in-context learning to the prompt, LLM output can be drastically affected by low-quality solutions in the input optimization trajectory, especially at the beginning, when the solution space has not been adequately explored. This sometimes results in optimization instability and large variance. To improve stability, we prompt the LLM to generate multiple solutions at each optimization step, allowing the LLM to simultaneously explore multiple possibilities and quickly discover promising directions to move forward.

Exploration-exploitation trade-off. We tune the LLM sampling temperature to balance between exploration and exploitation. A lower temperature encourages the LLM to exploit the solution space around the previously found solutions and make small adaptations, while a high temperature allows the LLM to more aggressively explore solutions that can be notably different.

# 3 MOTIVATING EXAMPLE: MATHEMATICAL OPTIMIZATION

We first demonstrate the potential of LLMs to serve as optimizers for mathematical optimization. In particular, we present a case study on linear regression as an example of continuous optimization, and on the Traveling Salesman Problem (TSP) as an example of discrete optimization. On both tasks, we see that LLMs properly capture the optimization directions on small-scale problems merely based on the past optimization trajectory provided in the meta-prompt.

3.1 LINEAR REGRESSION

In linear regression problems, the goal is to find the linear coefficients that probabilistically best explain the response from the input variables. We study the setting in which the independent and dependent variables X and y are both one-dimensional and an intercept b is present, so that there are two one-dimensional variables w, b to optimize over. In a synthetic setting, we sample ground-truth values for the one-dimensional variables w_true and b_true, and generate 50 data points by $y = w_{\text{true}} x + b_{\text{true}} + \epsilon$, in which x ranges from 1 to 50 and ε is standard Gaussian noise.
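As a concrete illustration of this setup, here is a small sketch of the data generation and the black-box objective the LLM is asked to minimize. The squared-error objective and the random seed are assumptions made for illustration; the paper's appendix contains the exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)           # assumed seed, for reproducibility
w_true, b_true = 15.0, 14.0              # one ground-truth setting from Table 2
x = np.arange(1, 51, dtype=float)        # x ranges from 1 to 50
y = w_true * x + b_true + rng.standard_normal(x.size)  # standard Gaussian noise

def objective(w: float, b: float) -> float:
    # The score reported next to each (w, b) pair in the meta-prompt;
    # assumed here to be the sum of squared errors.
    return float(np.sum((y - (w * x + b)) ** 2))
```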
Our optimization starts from 5 randomly sampled (w, b) pairs. In each step, we prompt an instruction-tuned LLM with a meta-prompt that includes the best 20 (w, b) pairs in history and their sorted objective values. The meta-prompt then asks for a new (w, b) pair that further decreases the objective value. A sample meta-prompt is shown in Figure 19 of Appendix C.1. We prompt the meta-prompt 8 times to generate at most 8 new (w, b) pairs in each step to improve optimization stability. Then we evaluate the objective value of each proposed pair and add it to the history. We do black-box optimization: the analytic form does not appear in the meta-prompt text, because the LLM can often calculate the solution directly from the analytic form.

Table 2 summarizes the results with one of the following optimizer LLMs: text-bison, gpt-3.5-turbo, and gpt-4. We study three settings of w_true and b_true: within the starting region [10, 20] × [10, 20]; "near outside" (each of w_true and b_true is outside the starting region, but the distance is less than 10); and "far outside" (each of w_true and b_true is outside the starting region, and the distance is greater than 10). We see:

- The number of unique (w, b) pairs explored by each model is fewer than in exhaustive search, indicating these models are able to do black-box optimization: compare the numbers and propose a descent direction.
- The text-bison and gpt-4 models outperform gpt-3.5-turbo in convergence speed: they arrive at the optima in fewer steps. The gpt-4 model also leads in finding the optima with fewer explored unique points. Taking a closer look at the optimization trajectory, we see that gpt-4 is the best at proposing a reasonable next step from the history: for example, when the history shows that the objective values of (w, b) = (8, 7), (w, b) = (8, 6), and (w, b) = (8, 5) are decreasing, it has the highest chance of proposing (w, b) = (8, 4) for evaluation.

Table 2: Linear regression by optimizer LLMs: the mean ± standard deviation of the number of steps and of the number of unique (w, b) pairs explored before reaching the global optima. Both w and b start from 5 random starting points in [10, 20]. We use temperature 1.0 for all models and run each setting 5 times. The starting points are the same across optimizer LLMs but differ across the 5 runs, and the settings are grouped by: within the starting region, outside and close to the starting region, and outside and farther from the starting region. Bold numbers indicate the best among the three LLMs in each setting.

| w_true | b_true | # steps, text-bison | # steps, gpt-3.5-turbo | # steps, gpt-4 | # unique pairs, text-bison | # unique pairs, gpt-3.5-turbo | # unique pairs, gpt-4 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 15 | 14 | 5.8 ± 2.6 | 7.6 ± 4.5 | **4.0 ± 1.5** | 40.0 ± 12.4 | 36.0 ± 15.2 | **17.2 ± 5.1** |
| 17 | 17 | **4.0 ± 1.8** | 12.6 ± 6.0 | 6.0 ± 3.7 | 33.4 ± 11.7 | 53.8 ± 16.9 | **26.0 ± 10.6** |
| 16 | 10 | **3.8 ± 2.2** | 10.4 ± 5.4 | 6.2 ± 3.1 | 30.2 ± 13.4 | 42.8 ± 16.3 | **24.2 ± 8.2** |
| 3 | 5 | **9.8 ± 2.8** | 10.8 ± 2.7 | 12.2 ± 2.0 | 55.8 ± 16.1 | 39.6 ± 10.1 | **33.0 ± 4.0** |
| 25 | 23 | 19.6 ± 11.4 | 26.4 ± 18.3 | **12.2 ± 3.7** | 104.0 ± 52.3 | 78.6 ± 26.2 | **44.2 ± 8.3** |
| 2 | 30 | **31.4 ± 6.3** | 42.8 ± 9.7 | 38.0 ± 15.9 | 126.4 ± 17.7 | 125.6 ± 21.7 | **99.0 ± 24.6** |
| 36 | -1 | **35.8 ± 6.4** | 45.4 ± 16.9 | 50.4 ± 18.8 | 174.0 ± 28.2 | 142.2 ± 31.2 | **116.4 ± 32.7** |
- The problem becomes harder for all models when the ground truth moves farther from the starting region: all models need more exploration and more steps.

3.2 TRAVELING SALESMAN PROBLEM (TSP)

Next, we consider the Traveling Salesman Problem (TSP) (Jünger et al., 1995; Gutin & Punnen, 2006), a classical combinatorial optimization problem with numerous algorithms proposed in the literature, including heuristic algorithms and solvers (Rosenkrantz et al., 1977; Golden et al., 1980; Optimization et al., 2020; Applegate et al., 2006; Helsgaun, 2017), and approaches based on training deep neural networks (Kool et al., 2019; Deudon et al., 2018; Chen & Tian, 2019; Nazari et al., 2018). Specifically, given a set of n nodes with their coordinates, the TSP task is to find the shortest route that traverses all nodes from the starting node and finally returns to the starting node.

Our optimization process with LLMs starts from 5 randomly generated solutions, and each optimization step produces at most 8 new solutions. We present the meta-prompt in Figure 20 of Appendix C.1. We generate the problem instances by sampling n nodes with both x and y coordinates in [-100, 100]. We use the Gurobi solver (Optimization et al., 2020) to construct the oracle solutions and compute the optimality gap for all approaches, where the optimality gap is defined as the difference between the distance of the solution constructed by the evaluated approach and the distance achieved by the oracle solution, divided by the distance of the oracle solution, i.e., gap = (d_method - d_oracle) / d_oracle. Besides evaluating OPRO with different LLMs, including text-bison, gpt-3.5-turbo, and gpt-4, we also compare OPRO to the following heuristics (a code sketch of both appears after the table):

- Nearest Neighbor (NN). Starting from an initial node, the solution is constructed with the nearest neighbor heuristic: at each step, among the remaining nodes that are not included in the current partial solution, NN selects the node with the shortest distance to the end node of the partial solution, and adds it as the new end node. The process finishes when all nodes have been added to the solution.
- Farthest Insertion (FI). One caveat of the nearest neighbor heuristic is that it does not take the distance between the start and end nodes into consideration when constructing partial solutions. To address this issue, FI aims to optimize the cost of inserting new nodes into the partial solution at each step. Define the minimal insertion cost of adding a new node k as $c(k) = \min_{(i,j)} \left( d(i, k) + d(k, j) - d(i, j) \right)$, where i and j are adjacent nodes in the current tour, and d(·, ·) denotes the distance between two nodes. At each step, FI adds the new node that maximizes the minimal insertion cost.

Table 3: Results on the Traveling Salesman Problem (TSP) with different numbers of nodes n, where each n contains 5 problems. "# steps" reports the mean ± standard error of optimization steps for successful runs that find the optimal solution. "# successes" counts the problems for which OPRO finds the optimal solution. When no optimal solution is found for any evaluated problem, the corresponding number of steps is N/A.

| n | Gap (%), NN | Gap (%), FI | Gap (%), text-bison | Gap (%), gpt-3.5-turbo | Gap (%), gpt-4 | # steps (# successes), text-bison | gpt-3.5-turbo | gpt-4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | 13.0 ± 1.3 | 3.2 ± 1.4 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | 40.4 ± 5.6 (5) | 46.8 ± 9.3 (5) | 9.6 ± 3.0 (5) |
| 15 | 9.4 ± 3.7 | 1.2 ± 0.6 | 4.4 ± 1.3 | 1.2 ± 1.1 | 0.2 ± 0.2 | N/A (0) | 202.0 ± 41.1 (4) | 58.5 ± 29.0 (4) |
| 20 | 16.0 ± 3.9 | 0.2 ± 0.1 | 30.4 ± 10.6 | 4.4 ± 2.5 | 1.4 ± 0.6 | N/A (0) | 438.0 ± 0.0 (1) | 195.5 ± 127.6 (2) |
| 50 | 19.7 ± 3.1 | 9.8 ± 1.5 | 219.8 ± 13.7 | 133.0 ± 6.8 | 11.0 ± 2.6 | N/A (0) | N/A (0) | N/A (0) |
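For reference, here is a minimal sketch of the two baseline heuristics, assuming Euclidean distances between 2D coordinates; it is an illustration of the definitions above, not the evaluation code used in the paper.

```python
import math

def dist(a, b):
    # Euclidean distance between two (x, y) coordinates.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor(coords, start=0):
    # NN: repeatedly extend the tour with the closest remaining node.
    tour, remaining = [start], set(range(len(coords))) - {start}
    while remaining:
        nxt = min(remaining, key=lambda k: dist(coords[tour[-1]], coords[k]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def farthest_insertion(coords, start=0):
    # FI: pick the node maximizing the minimal insertion cost
    # c(k) = min over tour edges (i, j) of d(i,k) + d(k,j) - d(i,j),
    # then insert it at the position realizing that minimal cost.
    tour, remaining = [start], set(range(len(coords))) - {start}

    def insertion_cost(k, t):
        i, j = tour[t], tour[(t + 1) % len(tour)]
        return (dist(coords[i], coords[k]) + dist(coords[k], coords[j])
                - dist(coords[i], coords[j]))

    while remaining:
        k = max(remaining,
                key=lambda k: min(insertion_cost(k, t) for t in range(len(tour))))
        t_best = min(range(len(tour)), key=lambda t: insertion_cost(k, t))
        tour.insert(t_best + 1, k)
        remaining.remove(k)
    return tour

def tour_length(coords, tour):
    # Total length of the closed route, returning to the starting node.
    return sum(dist(coords[tour[t]], coords[tour[(t + 1) % len(tour)]])
               for t in range(len(tour)))
```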
We present the results in Table 3. We randomly generate 5 problem instances for each number of nodes n. In addition to measuring the optimality gap, on problems where the LLM finds the optimal solutions, we also show the number of optimization steps taken to reach the global optimum. First, we observe that gpt-4 significantly outperforms gpt-3.5-turbo and text-bison across all problem sizes. Specifically, on smaller-scale problems, gpt-4 reaches the global optimum about 4× faster than the other LLMs. On larger-scale problems, especially with n = 50, gpt-4 still finds solutions of comparable quality to the heuristic algorithms, while both text-bison and gpt-3.5-turbo get stuck at local optima with up to 20× worse optimality gaps.

On the other hand, the performance of OPRO degrades dramatically on problems with larger sizes. When n = 10, all LLMs find the optimal solutions for every evaluated problem; as the problem size gets larger, the OPRO optimality gaps increase quickly, and the farthest insertion heuristic starts to outperform all LLMs in the optimality gap.

Limitations. We would like to note that OPRO is designed neither to outperform state-of-the-art gradient-based optimization algorithms for continuous mathematical optimization, nor to surpass the performance of specialized solvers for classical combinatorial optimization problems such as TSP. Instead, the goal is to demonstrate that LLMs are able to optimize different kinds of objective functions simply through prompting, and to reach the global optimum for some small-scale problems. Our evaluation reveals several limitations of OPRO for mathematical optimization. Specifically, the length limit of the LLM context window makes it hard to fit large-scale optimization problem descriptions in the prompt, e.g., linear regression with high-dimensional data, or traveling salesman problems with a large set of nodes to visit. In addition, the optimization landscapes of some objective functions are too bumpy for the LLM to propose a correct descending direction, causing the optimization to get stuck halfway. We further elaborate on the observed failure cases in Appendix A.
# 4 APPLICATION: PROMPT OPTIMIZATION

Next, we demonstrate the effectiveness of OPRO on prompt optimization, where the objective is to find the prompt that maximizes task accuracy. We first introduce the problem setup, then illustrate the meta-prompt design. Figure 3 shows an example of such a meta-prompt (colors in the original figure: blue for solution-score pairs, purple for the task description and output format, orange for meta-instructions):

    I have some texts along with their corresponding scores. The texts are arranged in ascending order based on their scores, where higher scores indicate better quality.

    text: Let's figure it out!
    score: 61

    text: Let's solve the problem.
    score: 63

    (. . . more instructions and scores . . .)

    The following exemplars show how to apply your text: you replace <INS> in each input with your text, then read the input and give an output. We say your output is wrong if your output is different from the given output, and we say your output is correct if they are the same.

    input: Q: Alannah, Beatrix, and Queen are preparing for the new school year and have been given books by their parents. Alannah has 20 more books than Beatrix. Queen has 1/5 times more books than Alannah. If Beatrix has 30 books, how many books do the three have together?
    A: <INS>
    output: 140

    (. . . more exemplars . . .)

    Write your new text that is different from the old ones and has a score as high as possible. Write the text in square brackets.

Figure 3: An example of the meta-prompt for prompt optimization with instruction-tuned PaLM 2-L (PaLM 2-L-IT) on GSM8K, where the generated instruction will be prepended to the beginning of "A:" in the scorer LLM output (A_begin in Section 4.1). <INS> denotes the position where the generated instruction will be added.

4.1 PROBLEM SETUP

We focus on prompt optimization for natural language tasks, where both the input and output are in text format. The task is represented as a dataset with training and test splits: the training set is used to calculate the training accuracy as the objective value during the optimization process, and we compute the test accuracy on the test set after the optimization finishes. While traditional optimization often requires a decently large training set, our experiments show that a small number or fraction of training samples (e.g., 3.5% of the training set for GSM8K (Cobbe et al., 2021), and 20% for Big-Bench Hard (Suzgun et al., 2022)) is sufficient. The objective function evaluator is an LLM to which the optimized prompt will be applied, and it can be the same as or different from the LLM used for optimization. We denote the LLM for objective function evaluation as the scorer LLM, and the LLM for optimization as the optimizer LLM.
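To pin down the objective, the following sketch shows how a candidate instruction could be scored against the training split. Here `scorer_llm`, `render_prompt`, and `extract_answer` are hypothetical helpers standing in for the scorer API call, the prompt formatting, and the answer parsing; none of them come from the paper's code.

```python
# A sketch of the prompt-optimization objective: the training accuracy of a
# candidate instruction, evaluated with greedy decoding (temperature 0).
def training_accuracy(instruction, train_examples, position="A_begin"):
    correct = 0
    for question, answer in train_examples:
        prompt = render_prompt(question, instruction, position)
        output = scorer_llm(prompt, temperature=0.0)  # greedy decoding
        correct += (extract_answer(output) == answer)
    return 100.0 * correct / len(train_examples)
```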
The output of the optimizer LLM is an instruction, which is concatenated to the question part of every exemplar and prompts the scorer LLM. We consider the following positions for inserting the instruction:

- Q_begin: the instruction is added before the original question.
- Q_end: the instruction is added after the original question.
- A_begin: the instruction is added to the beginning of the scorer LLM output. This is applicable to pretrained LLMs without instruction tuning, where the prompt is formatted as a sequence of QA pairs.

We exemplify these prompting formats in Appendix B; a sketch of the three positions follows.
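As an illustration of the three insertion positions, here is one way the `render_prompt` helper from the sketch above could be written. The exact string formats live in Appendix B of the paper, so the templates below are assumptions.

```python
def render_prompt(question: str, instruction: str, position: str) -> str:
    # Assumed templates illustrating the three instruction positions.
    if position == "Q_begin":    # instruction before the question
        return f"{instruction}\nQ: {question}\nA:"
    if position == "Q_end":     # instruction after the question
        return f"Q: {question}\n{instruction}\nA:"
    if position == "A_begin":   # instruction starts the answer (pretrained LLMs)
        return f"Q: {question}\nA: {instruction}"
    raise ValueError(f"unknown position: {position}")
```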
4.2 META-PROMPT DESIGN

Figure 3 shows an example of the meta-prompt for prompt optimization on GSM8K (Cobbe et al., 2021). More details are as follows.

Optimization problem examples. The problem description includes a few examples taken from the training set to demonstrate the task for the generated instructions. For example, from the input-output pair in Figure 3, we can infer that this is a math word problem. The input-output pair also demonstrates the position where the generated instruction will be added, and this is essential for the optimizer LLM to generate instructions of the same style. In each optimization step, we add several (e.g., three) training examples to the meta-prompt, either by randomly sampling the training set or by choosing the examples the previous instructions fall short on.

Optimization trajectory. The optimization trajectory includes instructions generated in the past optimization steps, along with their scores. The old instructions and scores are sorted by score in ascending order. The score is the training accuracy in prompt optimization. We only keep the instructions with the highest scores in the meta-prompt, in consideration of the LLM context length limit.

Meta-instructions. We also add meta-instructions: the instructions to the optimizer LLM that explain the optimization goal and instruct the model how to use the above information. The meta-instructions may also specify the desired format of the generated instruction for easier parsing.
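Putting the three components together, a meta-prompt in the style of Figure 3 could be assembled as follows. The exact wording of the paper's meta-prompts is given in Appendix C.2, so the strings here are illustrative assumptions drawn from Figure 3.

```python
def build_prompt_opt_meta_prompt(instruction_score_pairs, exemplars):
    # 1) Optimization trajectory: instruction-score pairs in ascending order.
    trajectory = "\n\n".join(
        f"text: {ins}\nscore: {score}"
        for ins, score in sorted(instruction_score_pairs, key=lambda p: p[1]))
    # 2) Optimization problem examples: exemplars with <INS> marking the
    #    position where the generated instruction will be inserted.
    examples = "\n\n".join(f"input: Q: {q}\nA: <INS>\noutput: {a}"
                           for q, a in exemplars)
    # 3) Meta-instructions: the optimization goal and the output format.
    return (
        "I have some texts along with their corresponding scores. The texts "
        "are arranged in ascending order based on their scores, where higher "
        "scores indicate better quality.\n\n"
        f"{trajectory}\n\n"
        "The following exemplars show how to apply your text: you replace "
        "<INS> in each input with your text, then read the input and give "
        "an output.\n\n"
        f"{examples}\n\n"
        "Write your new text that is different from the old ones and has a "
        "score as high as possible. Write the text in square brackets.")
```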
"Let's think step by step." (Kojima et al., 2022) have drastically improved the performance over standard prompting. BBH is a suite of 23 challenging BIG-Bench tasks (Srivastava et al., 2022) that covers a wide range of topics beyond arithmetic reasoning, including symbolic manipulation and commonsense reasoning. Each task contains up to 250 examples in total.

To examine the transferability of the optimized instructions, we also evaluate the instructions optimized for GSM8K on two other mathematical reasoning datasets, i.e., MultiArith (Roy & Roth, 2016) and AQuA (Ling et al., 2017).

Implementation details. We set the temperature to 0 when evaluating the performance of generated instructions, in which case the scorer LLM greedily decodes. Unless otherwise specified, we set the default temperature to 1.0 for optimizer LLMs to generate diverse and creative instructions. At each optimization step, we prompt the optimizer LLM with the meta-prompt 8 times to generate 8 instructions, then we add these instructions with their training scores to the optimization trajectory in the meta-prompt. Our meta-prompt at each step contains the best 20 instructions so far and 3 randomly picked exemplars from the training set. We study the effect of different hyperparameters in ablation studies (Section 5.3). Appendix C.2 presents the full meta-prompts for different optimizer LLMs.

5.2 MAIN RESULTS

We show prompt optimization curves on GSM8K and two BBH tasks in this section. The curves on other BBH tasks are deferred to Appendix D, and the tables containing all accuracy numbers are in Appendix E.

5.2.1 GSM8K

For prompt optimization, we randomly sample 3.5% of the examples from the GSM8K training set. The same subset is used throughout optimization, so that the task accuracies computed at intermediate optimization steps are approximations of the training accuracy on all 7,473 training examples. This balances the evaluation cost with the generalization performance. After the optimization procedure finishes, we evaluate the found instructions on the entire GSM8K test set.
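Combining the setup above, a sketch of the overall optimization loop is given below. It reuses the `Instruction` and `build_meta_prompt` helpers from the Section 4.2 sketch; `scorer_llm`, `optimizer_llm`, `extract_answer`, and `parse_instruction` are hypothetical stand-ins for the model APIs and output parsers, so this illustrates the procedure rather than reproducing the exact code.

```python
def score_instruction(instruction, train_subset, scorer_llm, position="Q_begin"):
    """Approximate training accuracy of one instruction on the sampled subset.

    The scorer LLM decodes greedily (temperature 0); `position` selects among
    the Q_begin / Q_end / A_begin insertion formats.
    """
    correct = 0
    for ex in train_subset:
        if position == "Q_begin":
            prompt = f"Q: {instruction}\n{ex['question']}\nA:"
        elif position == "Q_end":
            prompt = f"Q: {ex['question']}\n{instruction}\nA:"
        else:  # A_begin: the instruction starts the scorer's answer
            prompt = f"Q: {ex['question']}\nA: {instruction}"
        output = scorer_llm(prompt, temperature=0.0)
        # extract_answer is a hypothetical parser for the final numeric answer.
        correct += int(extract_answer(output) == ex["answer"])
    return 100.0 * correct / len(train_subset)

def opro(initial_instructions, train_subset, exemplars,
         optimizer_llm, scorer_llm, num_steps=200, per_step=8):
    """Run prompt optimization and return the best instruction found."""
    trajectory = [
        Instruction(t, score_instruction(t, train_subset, scorer_llm))
        for t in initial_instructions
    ]
    for _ in range(num_steps):
        meta_prompt = build_meta_prompt(trajectory, exemplars)
        # Temperature 1.0 keeps the per-step proposals diverse.
        for _ in range(per_step):
            raw = optimizer_llm(meta_prompt, temperature=1.0)
            candidate = parse_instruction(raw)  # hypothetical bracket parser
            trajectory.append(Instruction(
                candidate,
                score_instruction(candidate, train_subset, scorer_llm)))
    return max(trajectory, key=lambda ins: ins.score)
```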
Figure 1(a) in Section 1 shows prompt optimization curves with pre-trained PaLM 2-L as the scorer and PaLM 2-L-IT as the optimizer, where the initial instruction is "Let's solve the problem" with an (approximated, and same below) training accuracy of 60.5. We observe that the optimization curve shows an overall upward trend with several leaps throughout the optimization process, for example:

• "Let's think carefully about the problem and solve it together."
at Step 2 with the training accuracy 63.2;
• "Let's break it down!" at Step 4 with training accuracy 71.3;
• "Let's calculate our way to the solution!" at Step 5 with training accuracy 73.9;
• "Let's do the math!" at Step 6 with training accuracy 78.2.

The optimization curves also generally show a decrease in the variance among the accuracies of instructions generated at each step, indicating that the optimizer LLM generates distributionally better instructions throughout the optimization.

Next, we present the results of generating Q_begin instructions with the text-bison scorer and the PaLM 2-L-IT optimizer, starting from an empty instruction with a 57.1 training accuracy. The optimization curve in Figure 4(a) shows a similar upward trend, during which a few leaps in the training accuracy include:
• "Solve the following problems using the given information." at Step 2 with training accuracy 59.8;
• "Solve the following problems by applying the given information and using the appropriate mathematical operations." at Step 3 with training accuracy 64.0;
• "Let's read the problem carefully and identify the given information. Then, we can create an equation and solve for the unknown variable." at Step 4 with training accuracy 67.0;
• "I'm always down for solving a math word problem together. Just give me a moment to read and understand the problem. Then, I'll create an equation that models the problem, which I'll solve for the unknown variable. I also may or may not use some helpful diagrams or visuals to understand the problem. Lastly, be sure to allow me some time to carefully check my work before submitting any responses!" at Step 29 with training accuracy 70.1.

Table 4: Test accuracies on GSM8K. We show the instruction with the highest test accuracy for each scorer-optimizer pair.

Baselines:

| Scorer | Source | Position | Instruction | Acc |
|---|---|---|---|---|
| PaLM 2-L | (Kojima et al., 2022) | A_begin | Let's think step by step. | 71.8 |
| PaLM 2-L | (Zhou et al., 2022b) | A_begin | Let's work this out in a step by step way to be sure we have the right answer. | 58.8 |
| PaLM 2-L | | A_begin | Let's solve the problem. | 60.8 |
| PaLM 2-L | | A_begin | (empty string) | 34.0 |
| text-bison | (Kojima et al., 2022) | Q_begin | Let's think step by step. | 64.4 |
| text-bison | (Zhou et al., 2022b) | Q_begin | Let's work this out in a step by step way to be sure we have the right answer. | 65.6 |
| text-bison | | Q_begin | Let's solve the problem. | 59.1 |
| text-bison | | Q_begin | (empty string) | 56.8 |

Ours:

| Scorer | Optimizer | Position | Instruction | Acc |
|---|---|---|---|---|
| PaLM 2-L | PaLM 2-L-IT | A_begin | Take a deep breath and work on this problem step-by-step. | 80.2 |
| PaLM 2-L | PaLM 2-L | A_begin | Break this down. | 79.9 |
| PaLM 2-L | gpt-3.5-turbo | A_begin | A little bit of arithmetic and a logical approach will help us quickly arrive at the solution to this problem. | 78.5 |
| PaLM 2-L | gpt-4 | A_begin | Let's combine our numerical command and clear thinking to quickly and accurately decipher the answer. | 74.5 |
| text-bison | PaLM 2-L-IT | Q_begin | Let's work together to solve math word problems! First, we will read and discuss the problem together to make sure we understand it. Then, we will work together to find the solution. I will give you hints and help you work through the problem if you get stuck. | 64.4 |
| text-bison | text-bison | Q_end | Let's work through this problem step-by-step: | 68.5 |
| text-bison | gpt-3.5-turbo | Q_end | … | 66.5 |
| text-bison | gpt-4 | Q_begin | … | 62.7 |

Note that although our default setting is to run OPRO for 200 steps in prompt optimization, much fewer steps suffice if the goal is just to find some outstanding instructions. An example is that the Figure 1(a) experiment found
"Let's do the math!" at Step 6 with training accuracy 78.2, almost matching the "Take a deep breath and work on this problem step-by-step." found at the 107th step with training accuracy 80.2, at a point where the optimization curve is still trending upwards. This is because a leap in our optimization curve does not always correspond to a much better instruction being discovered; instead, it can be due to a large qualitative improvement of all 8 generated instructions in this step. The latter usually happens several steps after the former: after a much better instruction is discovered in one step, the meta-prompt gradually gets rid of worse instructions in the later steps by generating instructions similar to the much better one. The top instructions kept in the meta-prompt gradually improve in this procedure. At a point when the meta-prompt only triggers higher-quality instructions, the leap happens.

Finally, Figure 4(b) shows that the pre-trained PaLM 2-L can also serve as the optimizer LLM and improve its own prediction performance. Different from other optimizer LLMs that are instruction-tuned, the pre-trained PaLM 2-L performs better when the prompt is formatted in a few-shot manner. Therefore, we include two initial instructions to start the optimization: the empty instruction (with a training accuracy of 32.2) and "The answer is" (with a training accuracy of 33.3). See Figure 21 in Appendix C for the meta-prompt format.
(a) PaLM 2-L-IT optimizer (b) pre-trained PaLM 2-L optimizer

Figure 4: Prompt optimization on GSM8K with (a) the text-bison scorer and the PaLM 2-L-IT optimizer, and (b) pre-trained PaLM 2-L as both scorer and optimizer.

The generated instructions follow the same style as "The answer is": most instructions are also phrases suitable as the prefix of a sentence, like "Here you go:" (generated at Step 11 with training accuracy 61.3) and "Let's do it:"
(generated at Step 13 with training accuracy 75.1).

Table 4 summarizes the top instructions found on GSM8K with different scorer and optimizer LLMs. We observe that:

• The styles of instructions found by different optimizer LLMs vary a lot: PaLM 2-L-IT and text-bison ones are concise, while GPT ones are long and detailed.
• Although some top instructions contain the "step-by-step" phrase, most others achieve a comparable or better accuracy with different semantic meanings.

5.2.2 BBH

On BBH, the optimization starts from an empty string as the initial instruction by default. The instructions are placed at A_begin when the scorer is PaLM 2-L, and at Q_begin when the scorer is text-bison. For each task, we utilize a subset of 20% of the examples for prompt optimization, and the remaining examples are used for testing. We show experimental results on more variants of the instruction position and initialization in Appendix E.

Figure 5 visualizes the per-task accuracy difference on all 23 BBH tasks compared to the instruction
"Let's think step by step." (Kojima et al., 2022) and the empty instruction, and we present the concrete accuracies in Table 7 of Appendix E. We show that the instructions found by OPRO outperform "Let's think step by step." on almost all tasks by a large margin: our instructions outperform by over 5% on 19/23 tasks with the PaLM 2-L scorer, and on 15/23 tasks with the text-bison scorer. Our prompt optimization algorithm also improves instructions over the empty starting point by over 5% on most tasks: 20/23 with the PaLM 2-L scorer and 15/23 with the text-bison scorer.

Similar to GSM8K, we observe upward trends in the optimization curves on almost all BBH tasks, as shown in Figure 6. See Figures 23 and 24 in Appendix D for more curves on other BBH tasks.

We next show some examples of instructions found through the course of optimization. On the task ruin_names, starting from the empty instruction (with 64.0 training accuracy), with the text-bison scorer and the PaLM 2-L-IT optimizer, the following instructions are generated:
• "Consider the following when editing artist or movie names humorously:" at Step 1 with training accuracy 72.0;
• "When making humorous edits of artist or movie names, you can change one or more letters or even create puns by adding new words that sound similar." at Step 18 with training accuracy 80.0;
• "We can make humorous edits of artist/movie names by changing letters to create new words that are similar in sound but have different meanings. For example, The Police can be changed to The Polite, The Abyss can be changed to Toe Abyss, and Schindler's List can be changed to Schindler's Lost."
at Step 38 with training accuracy 82.0.

(a) PaLM 2-L scorer, ours minus "Let's think step by step." (b) PaLM 2-L scorer, ours minus empty starting point (c) text-bison scorer, ours minus "Let's think step by step." (d) text-bison scorer, ours minus empty starting point

Figure 5: On 23 BBH tasks, the accuracy differences among instructions found by prompt optimization (with the PaLM 2-L-IT optimizer), "Let's think step by step.", and the empty string (optimization starting point).
Although the above instructions are semantically similar, a paraphrase by the optimizer LLM offers a notable accuracy improvement. We further highlight this observation in Section 5.2.3.

(a) BBH ruin_names (b) BBH temporal_sequences

Figure 6: Training accuracy curves of prompt optimization on BBH ruin_names and temporal_sequences with the text-bison scorer and the PaLM 2-L-IT optimizer. The optimizations start from the empty string.

Below are some instructions generated when performing prompt optimization on temporal_sequences, starting from the empty instruction (with a training accuracy of 64.0):

• "To solve this problem, we need to first identify the time period when the person was not seen doing anything else. Then, we need to check if the place they went to was open during that time
period. If it was, then that is the time period when they could have gone to that place." at Step 2 with training accuracy 42.0;
• "To find the time period when a person could have gone to a place, identify the time periods when they were not seen doing anything else and the place was open. If there are multiple time periods that match these criteria, then the person could have gone to the place during any of these time periods." at Step 18 with training accuracy 54.0;
• "To determine the possible time period when a person went to a place, first identify all the time periods when the person was not seen doing anything else and the place was open. Then, rule out any time periods during which the person was seen doing something else. The remaining time periods are the possible times when the person could have gone to the place."
at Step 41 with training accuracy 72.0.

Table 5 presents the best instructions generated on the movie_recommendation, ruin_names, and temporal_sequences tasks with different combinations of the optimizer and the scorer LLMs. Again, different optimizer LLMs produce instructions of different styles. See Appendix E for results on more BBH tasks.

5.2.3 SEMANTICALLY SIMILAR INSTRUCTIONS MAY ACHIEVE DRASTICALLY DIFFERENT ACCURACIES

One challenge of prompt optimization is the sensitivity of model performance to subtle changes in the instruction. For example, with the PaLM 2-L scorer on the GSM8K test set,
"Let's think step by step." achieves accuracy 71.8, "Let's solve the problem together." has accuracy 60.5, while the accuracy of "Let's work together to solve this problem step by step." is only 49.4, although it is the semantic combination of the two instructions above. This behavior increases both the variance across single-step instructions and the oscillation during optimization, and motivates us to generate multiple instructions at each step to improve the optimization stability.

5.2.4 TRANSFERABILITY OF FOUND INSTRUCTIONS

We assess the transferability of the found prompts to different datasets of the same domain, where we evaluate the top instructions found for GSM8K on two more math reasoning benchmarks, MultiArith (Roy & Roth, 2016) and AQuA (Ling et al., 2017). Table 6 shows that our optimized prompts also outperform the baseline prompts with different scorer LLMs on these two benchmarks.

5.3 ABLATION STUDIES

We use text-bison as the scorer and PaLM 2-L as the optimizer for all ablation studies. The tasks we evaluate are GSM8K (math reasoning) and BBH sports_understanding (non-math reasoning).

Meta-prompt design. The meta-prompt design is crucial for achieving good prompt optimization performance. We investigate the following core design choices:
Table 5: Top instructions with the highest accuracies found in prompt optimization on BBH movie_recommendation, ruin_names, and temporal_sequences.

movie_recommendation:

| Scorer | Optimizer | Position | Instruction | Acc |
|---|---|---|---|---|
| PaLM 2-L | PaLM 2-L-IT | A_begin | Based on your input, I have analyzed the given movies in terms of genre, plot, tone, audience rating, year of release, director, cast, and reviews. I have also taken into account the given options. The movie that is most similar to the given movies in terms of all these factors is: | 90.8 |
| PaLM 2-L | PaLM 2-L | A_begin | The best film: | 88.4 |
| PaLM 2-L | gpt-3.5-turbo | A_begin | Let's uncover the perfect movie recommendation from the options provided, ensuring an exceptional cinematic experience together as we select the most captivating and satisfying choice that will keep us thoroughly engaged and immersed until the very end. | 88.0 |
| text-bison | PaLM 2-L-IT | Q_begin | What is the highest-rated movie similar to the given movies, with a similar IMDb rating and released in the same year? | 91.6 |
| text-bison | gpt-3.5-turbo | Q_begin | Based on the movie list provided, carefully consider your preferences and make a well-informed decision. | 70.8 |

ruin_names:

| Scorer | Optimizer | Position | Instruction | Acc |
|---|---|---|---|---|
| PaLM 2-L | PaLM 2-L-IT | A_begin | Which is the funniest pun on the artist or movie name? | 88.0 |
| PaLM 2-L | PaLM 2-L | A_begin | Answer for ruin: | 83.6 |
| PaLM 2-L | gpt-3.5-turbo | A_begin | Prepare to have a side-splittingly funny time as we uncover the most clever and hilarious alternatives for these artist or movie names, challenging your wit to guess the correct one with a burst of creativity, humor, and imaginative twists! | 86.8 |
| text-bison | PaLM 2-L-IT | Q_begin | A humorous edit of an artist or movie name can be created by replacing one or more letters to form a new word or phrase that sounds similar but has a different meaning. The new word or phrase should be relevant to the original word, but it should also be a surprise, which makes the edit funny. For example, the artist or movie name "Rocky" can be changed to "Ricky," and "Schindler's List" can be changed to "Schindler's Lift." Be creative and have fun! | 83.6 |
| text-bison | gpt-3.5-turbo | Q_begin | Choose the option that offers the most clever and humorous alteration of the given artist or movie name. Let your creativity shine and select the answer that will undoubtedly bring a smile to your face! Make sure to think outside the box! | 75.2 |

temporal_sequences (no PaLM 2-L scorer results because its training accuracy on the empty string is 100.0):

| Scorer | Optimizer | Position | Instruction | Acc |
|---|---|---|---|---|
| text-bison | PaLM 2-L-IT | Q_begin | To determine the time period when a person went to a place, first identify all the time periods when the person's whereabouts are unknown. Then, rule out any time periods during which the person was seen doing something else or the place was closed. The remaining time periods are the possible times when the person could have gone to the place. | 80.4 |
| text-bison | gpt-3.5-turbo | Q_begin | Identify the optimal time slot for the individual to engage in the mentioned location/activity considering the given sightings and waking up time, taking into account the opening and closing times of the location and the duration of each event. | 53.6 |
Table 6: Transferability across datasets: accuracies of top instructions found for GSM8K on MultiArith and AQuA.

Baselines:

| Scorer | Source | Position | Instruction | MultiArith | AQuA |
|---|---|---|---|---|---|
| PaLM 2-L | (Kojima et al., 2022) | A_begin | Let's think step by step. | 85.7 | 44.9 |
| PaLM 2-L | (Zhou et al., 2022b) | A_begin | Let's work this out in a step by step way to be sure we have the right answer. | 72.8 | 48.4 |
| PaLM 2-L | | A_begin | Let's solve the problem. | 87.5 | 44.1 |
| PaLM 2-L | | A_begin | (empty string) | 69.3 | 37.8 |
| text-bison | (Kojima et al., 2022) | Q_begin | Let's think step by step. | 92.5 | 31.9 |
| text-bison | (Zhou et al., 2022b) | Q_begin | Let's work this out in a step by step way to be sure we have the right answer. | 93.7 | 32.3 |
| text-bison | | Q_begin | Let's solve the problem. | 85.5 | 29.9 |
| text-bison | | Q_begin | (empty string) | 82.2 | 33.5 |

Ours:

| Scorer | Source | Position | Instruction | MultiArith | AQuA |
|---|---|---|---|---|---|
| PaLM 2-L | PaLM 2-L-IT on GSM8K | A_begin | Take a deep breath and work on this problem step-by-step. | 95.3 | 54.3 |
| text-bison | PaLM 2-L-IT on GSM8K | Q_begin | Let's work together to solve math word problems! First, we will read and discuss the problem together to make sure we understand it. Then, we will work together to find the solution. I will give you hints and help you work through the problem if you get stuck. | 96.8 | 37.8 |
• The order of the previous instructions. We compare the following options: (1) from lowest to highest (our default setting); (2) from highest to lowest; (3) random. Figures 7(a) and 7(b) show that the default setting achieves better final accuracies and converges faster. One hypothesis is that the optimizer LLM output is affected more by the past instructions closer to the end of the meta-prompt. This is consistent with the recency bias observed in Zhao et al. (2021), which states that LLMs are more likely to generate tokens similar to the end of the prompt.
• The effect of instruction scores. In terms of how to present the accuracy scores, we compare three options (see the sketch after this list): (1) rounding the accuracies to integers, which is equivalent to bucketizing the accuracy scores into 100 buckets (our default setting); (2) bucketizing the accuracies into 20 buckets; (3) not showing the accuracies, only showing the instructions in ascending order. Figures 7(c) and 7(d) show that the accuracy scores assist the optimizer LLM in better understanding the quality difference among previous instructions, and thus the optimizer LLM proposes better new instructions that are similar to the best ones in the input optimization trajectory.
• The effect of exemplars. We compare three options: (1) showing 3 exemplars from the task (default); (2) showing 10 exemplars from the task; (3) no exemplars. Figures 7(e) and 7(f) show that presenting exemplars in the meta-prompt is critical, as it provides information on what the task looks like and helps the optimizer model phrase new instructions better. However, more exemplars do not necessarily improve the performance, as a few exemplars are usually sufficient to describe the task. In addition, including more exemplars results in a longer meta-prompt with a dominating exemplar part, which may distract the optimizer LLM from other important components like the optimization trajectory.

The number of generated instructions per step. Computing a mini-batch of gradients reduces the variance of a stochastic gradient descent procedure. Similarly, generating multiple instructions in each step improves the optimization stability with LLMs. On the other hand, to achieve better performance with a fixed budget for the number of instructions to evaluate, the number of per-step instructions should not be too large, so as to allow more optimization steps that incorporate richer information about past instructions and their accuracies. Taking both aspects into consideration, Figure 8 compares the optimization performance of sampling 1 / 2 / 4 / 8 (default) / 16 instructions in each step, showing that sampling 8 instructions at each step overall achieves the best performance.
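As referenced in the instruction-scores item above, a minimal sketch of the three score-presentation options (assuming accuracies are percentages in [0, 100]) might look like:

```python
def present_score(accuracy: float, mode: str = "100_buckets") -> str:
    """Render a training accuracy for display in the meta-prompt.

    "100_buckets" rounds to the nearest integer (the default setting);
    "20_buckets" quantizes to 20 levels of width 5; "no_scores" omits the
    number entirely, relying only on the ascending ordering of instructions.
    """
    if mode == "100_buckets":
        return str(round(accuracy))
    if mode == "20_buckets":
        return str(round(accuracy / 5) * 5)
    return ""  # no_scores
```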
(a) instruction ordering (GSM8K) (b) instruction ordering (BBH sports_understanding) (c) instruction scores (GSM8K) (d) instruction scores (BBH sports_understanding) (e) # exemplars (GSM8K) (f) # exemplars (BBH sports_understanding)

Figure 7: Ablation studies: how each part of the meta-prompt matters. The dots are the average values across 3 optimization repetitions, and the shaded regions represent standard deviations.
(a) GSM8K (b) BBH sports_understanding

Figure 8: Ablation studies: the number of generated instructions in each step. The dots are the average values across 3 optimization repetitions, and the shaded regions represent standard deviations. The x-axis represents the total number of evaluated instructions through the optimization; e.g., we run 200 optimization steps when sampling 8 instructions in each step, run 400 steps when sampling 4 instructions in each step, etc.

(a) GSM8K, text-bison scorer, Q_begin (b) GSM8K, PaLM 2-L scorer, A_begin

Figure 9: Ablation studies: the initial instructions for prompt optimization. The dots are the average values across 3 optimization repetitions, and the shaded regions represent standard deviations.

Starting point. We study the effect of different initial instructions for prompt optimization. Our default setting is to start from an empty string when the scorer LLM is the (instruction-tuned) text-bison, and to start from either the empty string (on BBH tasks) or
"Let's solve the problem." (on GSM8K) with instruction position A_begin when the scorer LLM is the (pre-trained) PaLM 2-L.

Figure 9(a) shows the performance of text-bison as the scorer LLM with 3 options of initial instructions: (1) the empty string; (2) "Solve the following problem."; or (3) "Solve the following problem." and "Let's solve the problem.". We observe that the accuracies do not differ much with different starting points. Interestingly, the styles of the generated instructions are also similar. For example, most of the generated instructions starting from (1) and (2) contain the phrase "solve this problem", like "Let's work together to solve this problem." in Step 4 with training accuracy 64.8 from (1), and "Let's solve the following problems using the given information." in Step 3 with training accuracy 62.8 from (2).

(a) GSM8K (b) BBH sports_understanding

Figure 10: Ablation studies: temperature of the optimizer model. The dots are the average values across 3 optimization repetitions, and the shaded regions represent standard deviations.
Figure 9(b) presents the results of PaLM 2-L as the scorer LLM with the following options of initial instructions: (1) "Let's solve the problem."; (2) the empty string; or (3) "Let's think step by step.". We notice that the performance differs much more with different initial instructions, especially at the beginning of the optimization. Specifically, starting from (1) leads to better generated instructions than (2) in the first 30 steps, while the instructions optimized from both (1) and (2) are worse than (3) throughout. A similar observation holds when using PaLM 2-L as the scorer and gpt-3.5-turbo as the optimizer for BBH tasks, by comparing the results starting from the empty string (Appendix E.2) and from
"Let's solve the problem." (Appendix E.3). Taking a closer look into the optimization process of (2), we find that although both "solve the problem" and "step by step" show up in generated instructions at Step 5, it takes the optimizer LLM more steps to get rid of the worse instructions presented in the meta-prompt when starting from instructions with lower accuracies. Therefore, one direction for future work is to accelerate convergence from weaker starting points.

Diversity per step. We evaluate the following temperatures of the optimizer LLM: {0.0, 0.5, 1.0 (default), 1.5, 2.0}. Figure 10 shows that the default temperature 1.0 achieves the best performance. Specifically, optimizations with smaller temperatures (0.0 and 0.5) lack exploration and thus creativity, and the optimizer LLM often gets stuck at the same instruction for tens of steps, resulting in flat optimization curves. On the other hand, with larger temperatures (1.5 and 2.0), the optimizer LLM more often ignores the trajectory of previous instructions presented in the meta-prompt and thus lacks exploitation, so the optimization curve does not have a steady upward trend.

Comparison with one-step instruction generation. Our current iterative procedure runs for multiple steps and generates a new batch of solutions in each step. To validate the importance of leveraging the optimization trajectory for generating new prompts, we compare to a baseline that generates all instructions in a single step without entering the optimization procedure. We compare these two approaches on GSM8K and BBH sports_understanding with the PaLM 2-L-IT optimizer. For GSM8K the scorer LLM is pre-trained PaLM 2-L and the initial instruction is "Let's solve the problem", and for BBH sports_understanding the scorer LLM is text-bison and the initial instruction is the empty string. The baseline generates 50 instructions in a single step, so its meta-prompt only includes the task exemplars, the initial instruction with its accuracy, and the same meta-instructions as our full meta-prompt for performing optimization. All the other hyperparameters remain the same. Our results show that this one-step instruction generation performs much worse than our optimization approach. Specifically:
(1) On GSM8K, the best instruction among all 50 is still "Let's solve the problem", with a 64.4 training accuracy and a 60.8 test accuracy. On the other hand, our approach (corresponding to Figure 1(a) in the main paper) found "Let's do the math!" with a 78.2 training accuracy and a 76.3 test accuracy at the 5th step by generating 8 instructions at each step.
(2) Similarly, on BBH sports_understanding, the best instruction among all 50 achieved an 84.0 training accuracy and an 80.0 test accuracy. This is again worse than the instruction found by our approach at Step 4, which achieved an 88.0 training accuracy and an 84.5 test accuracy.

(a) BBH snarks, PaLM 2-L as scorer, PaLM 2-L-IT as optimizer, starting from "Let's solve the problem." (b) BBH sports_understanding, text-bison as scorer, gpt-3.5-turbo as optimizer, starting from the empty string

Figure 11: Overfitting analysis. The exemplars are split into 1/3 training, 1/3 validation, and 1/3 test. We compute the validation accuracy every 3 steps. The training/validation dots are the average training/validation accuracies across 3 optimization repetitions, respectively, and the shaded regions represent standard deviations.

5.4 OVERFITTING ANALYSIS IN PROMPT OPTIMIZATION

For simplicity, we do not set aside a validation set in our default setting of prompt optimization. We made this decision based on experiments in which a validation set is present. Overfitting may result in the training accuracy being much higher than the validation/test accuracy. It is difficult to avoid overfitting, but overfitting is less harmful when each candidate solution (a natural language instruction in the prompt optimization context) overfits to a similar extent. In this case, a solution with a higher training accuracy still achieves a higher validation/test accuracy, and one can adopt the solutions with the highest training accuracies as the final result.
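A minimal sketch of the validation-tracking setup behind this analysis follows; the 1/3-1/3-1/3 split and the every-3-steps validation scoring mirror Figure 11, while the helper names and the reuse of `score_instruction` from the earlier sketch are our assumptions.

```python
import random

def split_exemplars(exemplars, seed=0):
    """Split the task exemplars into 1/3 training, 1/3 validation, 1/3 test."""
    shuffled = exemplars[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled) // 3
    return shuffled[:n], shuffled[n:2 * n], shuffled[2 * n:]

def overfitting_curves(trajectory_by_step, train_set, val_set, scorer_llm):
    """Score the best-so-far instruction on the training and validation sets
    every 3 steps, producing the paired curves plotted in Figure 11."""
    curves = []
    for step, trajectory in enumerate(trajectory_by_step):
        if step % 3 != 0:
            continue
        best = max(trajectory, key=lambda ins: ins.score)
        curves.append((
            step,
            score_instruction(best.text, train_set, scorer_llm),
            score_instruction(best.text, val_set, scorer_llm),
        ))
    return curves
```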
Figure 11 shows this is the case for OPRO in prompt optimization: when setting aside a validation set with the same size as the training set, the validation accuracy curves trend up and down alongside the training curves in both prompt optimization settings. Of course, overfitting still occurs in the instructions found by our prompt optimization: in Tables 7 and 10, our training accuracies are often 5%-20% higher than our test accuracies, despite that our test and overall accuracies are still mostly higher than those of the human-written counterparts. Setting aside a larger training set and optimizing for fewer steps (early stopping) may help reduce overfitting.

5.5 COMPARISON WITH EVOPROMPT

Some concurrent works on prompt optimization propose meta-prompts that explicitly ask the LLM to perform mutation and crossover of existing prompts (Fernando et al., 2023; Guo et al., 2023). In our evaluation, we compare our approach to the Genetic Algorithm (GA) and Differential Evolution (DE) versions of EvoPrompt (Guo et al., 2023). Specifically, in the GA meta-prompt, given two prompts, the meta-prompt instructs the LLM to cross over the two prompts to generate a new one, then mutate the newly generated prompt to produce the final prompt. DE extends the GA meta-prompt with more detailed instructions, e.g., asking the LLM to identify the differing parts between the two given prompts before performing the mutation. This is in contrast with OPRO, which leverages the optimization trajectory including multiple past prompts, instead of only 2 previous prompts. Meanwhile, OPRO also provides the LLM with richer information to facilitate the understanding of the optimization problem, including exemplars and the task accuracies of different prompts. Figure 12 presents the results on the GSM8K and BBH sports_understanding benchmarks, where we use gpt-3.5-turbo as the optimizer.
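To make the contrast concrete before turning to the results, here is a schematic of the two meta-prompt styles; both functions are loose paraphrases for illustration, not the exact meta-prompts from Guo et al. (2023) or from this paper.

```python
def evoprompt_ga_meta_prompt(parent_a: str, parent_b: str) -> str:
    """GA-style meta-prompt: sees only two parent prompts, with no task
    exemplars and no accuracy scores."""
    return (
        f"Prompt 1: {parent_a}\n"
        f"Prompt 2: {parent_b}\n"
        "Cross over the two prompts above to generate a new prompt, "
        "then mutate it to produce the final prompt."
    )

def opro_meta_prompt(trajectory, exemplars) -> str:
    """OPRO-style meta-prompt: multiple scored past prompts plus task
    exemplars (reusing the builder sketched in Section 4.2)."""
    return build_meta_prompt(trajectory, exemplars)
```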
(a) GSM8K, PaLM 2-L scorer, A_begin (b) BBH sports_understanding, text-bison scorer, Q_begin

Figure 12: Comparison with EvoPrompt in prompt optimization. We use the gpt-3.5-turbo optimizer for both experiments. "EvoPrompt (GA)" uses the meta-prompt from Guo et al. (2023), Figure 1; "EvoPrompt (DE)" uses the meta-prompt from Guo et al. (2023), Figure 2. All optimizations in (a) use the pre-trained PaLM 2-L scorer and start from two simple instructions "Let's solve the problem." and "Here is the answer."; all optimizations in (b) use the text-bison scorer and start from two richer (task-specific) instructions "Solve the sports understanding problem." and "Give me the answer to sports understanding.". The dots are the average values across 3 optimization repetitions, and the shaded regions represent standard deviations. We use temperature 1.0 for OPRO and temperature 0.5 for EvoPrompt, the same as the default settings in the respective works.
On GSM8K, the initial instructions of all approaches are "Let's solve the problem." and "Here is the answer.", which are simple and generic. Again, we observe that OPRO performance steadily improves with more optimization steps. On the other hand, both versions of EvoPrompt even degrade the performance on GSM8K. The main reason is that EvoPrompt does not utilize exemplars for prompt optimization, so it lacks an understanding of the task it is optimizing for. Consequently, EvoPrompt relies on good-quality, task-specific initial prompts to optimize from. Given this observation, we provide more task-specific initial instructions for the experiments on BBH sports_understanding: "Solve the sports understanding problem." and "Give me the answer to sports understanding." In this case, EvoPrompt (DE) is able to find better prompts than the initial ones, but its optimization curve is less stable than OPRO's. This indicates that leveraging the optimization trajectory helps the LLM identify promising directions to improve existing prompts.

# 6 RELATED WORK

Prompt optimization. Prior works have developed soft prompt-tuning methods that optimize the prompt represented as task-specific continuous vectors (Lester et al., 2021; Li & Liang, 2021; Liu et al., 2021; Qin & Eisner, 2021), as well as performing discrete prompt optimization by gradient-guided search (Shin et al., 2020; Wen et al., 2023; Gao et al., 2020; Chen et al., 2023d) and reinforcement learning (Deng et al., 2022; Zhang et al., 2023). These approaches become inapplicable when there is only API access to the LLM. Other works designed edit-based approaches for gradient-free prompt optimization (Xu et al., 2022; Prasad et al., 2022), where the editing can be done with human-defined operations (e.g., swapping two phrases) (Prasad et al., 2022) or language models (e.g., back translation) (Xu et al., 2022). Some recent works investigate LLMs for prompt optimization (Zhou et al., 2022b; Pryzant et al., 2023; Xu et al., 2023). Specifically, APE (Zhou et al., 2022b) first uses the LLM to generate initial instructions. Afterwards, APE selects the top instructions with the highest accuracies, then prompts the LLM with each individual instruction to generate a semantically similar variant of the initial instruction. APO (Pryzant et al., 2023) at each step instructs the LLM to produce text feedback on how to update an old instruction. Different from edit-based approaches, the optimizer
LLM in our work directly generates new instructions at each optimization step, and the optimizer LLM is merely asked to improve the task accuracy without being required to imitate past instructions. Compared to Zhou et al. (2022b) and Pryzant et al. (2023), our optimization process incorporates the past generated instructions with their scores in the meta-prompt, enabling the optimizer LLM to discover common patterns of high-quality instructions.

Prompting with natural language feedback. A recent line of work investigates approaches to improve LLM performance by prompting with natural language feedback to revise the model output, which has shown effectiveness in reducing harmful LLM outputs (Bai et al., 2022; Ganguli et al., 2023), improving reasoning (Shinn et al., 2023; Madaan et al., 2023) and code generation performance (Chen et al., 2023e; Olausson et al., 2023; Shinn et al., 2023; Chen et al., 2023b), dialogue applications (Nair et al., 2023; Madaan et al., 2023; Yuan et al., 2023), and so on (Kim et al., 2023; Wang et al., 2023). Specifically, Yuan et al. (2023) develops a human-in-the-loop framework for deriving system-level feedback from a collection of instance-level feedback, which is then used for refining data. In our work, the optimizer LLM utilizes the optimization trajectory in the prompt, which implicitly requires the LLM to summarize the common characteristics among solutions with similar scores. We consider incorporating explicit natural language feedback on generated solutions for later optimization steps as future work.

Tuning language models for optimization. Some previous works tune or prompt language models to behave as mutation and crossover operators in evolutionary algorithms. Meyerson et al. (2023) utilizes language models with few-shot exemplars to propose evolutionary cross-overs on tasks such as image and code generation. In Lehman et al. (2022), the large language model trained on code diff generation is used as the mutation operator, and they further design a fine-tuning method to improve performance in the Sodarace domain for robot simulation.
EvoPrompting (Chen et al., 2023a) uses large language models to evolve neural network architectures, where they combine evolutionary search with soft prompt tuning. With respect to taking the trajectory as the input for optimization, OptFormer (Chen et al., 2022) trains a transformer model on large collections of hyperparameter optimization data. On the other hand, our work performs optimization solely by prompting without additional training.

# 7 CONCLUSION

We embark on employing LLMs as optimizers, where the LLM progressively generates new solutions to optimize an objective function. We first motivate OPRO with linear regression and traveling salesman problems, then proceed to prompt optimization as a concrete application. Our evaluation demonstrates that LLMs have the capacity to gradually improve the generated solutions based on the past optimization trajectory. Interestingly, on small-scale traveling salesman problems, OPRO performs on par with some hand-crafted heuristic algorithms. For prompt optimization, optimized prompts outperform human-designed prompts on GSM8K and Big-Bench Hard by a significant margin, sometimes by over 50%.
A number of unresolved questions are open for future research on LLMs for optimization. In general, how to reduce the sensitivity to initialization and better balance exploitation with exploration remains a challenge. Specifically, for prompt optimization, one limitation of our current implementation is that the optimizer LLM does not effectively utilize the error cases in the training set to infer promising directions for improving the generated instructions. In our experiments, we tried including error cases in the meta-prompt rather than randomly sampling from the training set at each optimization step, but the results are similar, indicating that the error cases alone are not informative enough for the optimizer LLM to grasp the cause of the wrong predictions. Another limitation is that prompt optimization requires a training set to compute the accuracy that guides the optimization process. Currently, the training set contains at least tens of samples, so that the optimized prompt does not severely overfit to the training samples.
A promising direction is to incorporate richer feedback about the error cases beyond the aggregated accuracy, and to summarize the key features that distinguish between high-quality and low-quality generated prompts in the optimization trajectory. Such information may inform the optimizer LLM of how to more efficiently improve over the past generated instructions, and potentially further reduce the example set size needed for prompt optimization.
# ACKNOWLEDGMENTS

We thank Daiyi Peng, Jerry Wei, Shuo Chen, Tim Rocktäschel, Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, and Simon Osindero for their valuable feedback, and thank several anonymous reviewers for helpful comments.

# REFERENCES

Shun-ichi Amari. Backpropagation and stochastic gradient descent method. Neurocomputing, 5(4-5):185–196, 1993.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. PaLM 2 technical report. arXiv preprint arXiv:2305.10403, 2023.

David Applegate, Robert Bixby, Vasek Chvatal, and William Cook. Concorde TSP solver, 2006.

Thomas Bäck and Hans-Paul Schwefel. An overview of evolutionary algorithms for parameter optimization. Evolutionary Computation, 1(1):1–23, 1993.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022.

Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint arXiv:2305.17126, 2023.

Angelica Chen, David M Dohan, and David R So.
EvoPrompting: Language models for code-level neural architecture search. arXiv preprint arXiv:2302.14838, 2023a.

Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R Bowman, Kyunghyun Cho, and Ethan Perez. Improving code generation by training with natural language feedback. arXiv preprint arXiv:2303.16749, 2023b.

Jiuhai Chen, Lichang Chen, Heng Huang, and Tianyi Zhou. When do you need chain-of-thought prompting for ChatGPT? arXiv preprint arXiv:2304.03262, 2023c.

Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, and Tianyi Zhou. InstructZero: Efficient instruction optimization for black-box large language models. arXiv preprint arXiv:2306.03082, 2023d.

Xinyun Chen and Yuandong Tian. Learning to perform local rewriting for combinatorial optimization. Advances in Neural Information Processing Systems, 32, 2019.

Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023e.

Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Richard Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc'aurelio Ranzato, et al.
Towards learning universal hyperparameter optimizers with transformers. Advances in Neural Information Processing Systems, 35:32053–32068, 2022.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P Xing, and Zhiting Hu.
RLPrompt: Optimizing discrete text prompts with reinforcement learning. arXiv preprint arXiv:2205.12548, 2022.

Michel Deudon, Pierre Cournut, Alexandre Lacoste, Yossiri Adulyasak, and Louis-Martin Rousseau. Learning heuristics for the TSP by policy gradient. In International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pp. 170–181. Springer, 2018.
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv preprint arXiv:2309.16797, 2023.

Deep Ganguli, Amanda Askell, Nicholas Schiefer, Thomas Liao, Kamilė Lukošiūtė, Anna Chen, Anna Goldie, Azalia Mirhoseini, Catherine Olsson, Danny Hernandez, et al.
The capacity for moral self-correction in large language models. arXiv preprint arXiv:2302.07459, 2023.

Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723, 2020.

Bruce Golden, Lawrence Bodin, T Doyle, and W Stewart Jr. Approximate traveling salesman algorithms. Operations Research, 28(3-part-ii):694–711, 1980.
Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. arXiv preprint arXiv:2309.08532, 2023.

Gregory Gutin and Abraham P Punnen. The Traveling Salesman Problem and Its Variations, volume 12. Springer Science & Business Media, 2006.
Keld Helsgaun. An extension of the Lin-Kernighan-Helsgaun TSP solver for constrained traveling salesman and vehicle routing problems. Roskilde: Roskilde University, 12, 2017.

Michael Jünger, Gerhard Reinelt, and Giovanni Rinaldi. The traveling salesman problem. Handbooks in Operations Research and Management Science, 7:225–330, 1995.

Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022.

Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ByxBFsRqYm.

Joel Lehman, Jonathan Gordon, Shawn Jain, Kamal Ndousse, Cathy Yeh, and Kenneth O Stanley. Evolution through large models. arXiv preprint arXiv:2206.08896, 2022.

Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.

Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. arXiv preprint arXiv:1705.04146, 2017.

Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. GPT understands, too. arXiv preprint arXiv:2103.10385, 2021.

Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786, 2021.

Xiao Ma, Swaroop Mishra, Ahmad Beirami, Alex Beutel, and Jilin Chen. Let's do a thought experiment: Using counterfactuals to improve moral reasoning. arXiv preprint arXiv:2306.14308, 2023.
Aman Madaan and Amir Yazdanbakhsh. Text and patterns: For effective chain of thought, it takes two to tango. arXiv preprint arXiv:2209.07686, 2022.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-Refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.
Elliot Meyerson, Mark J Nelson, Herbie Bradley, Arash Moradi, Amy K Hoover, and Joel Lehman. Language model crossover: Variation through few-shot prompting. arXiv preprint arXiv:2302.12170, 2023.

Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, and Andy Zeng. Large language models as general pattern machines. arXiv preprint arXiv:2307.04721, 2023.

Varun Nair, Elliot Schumacher, Geoffrey Tso, and Anitha Kannan.
DERA: Enhancing large language model completions with dialog-enabled resolving agents. arXiv preprint arXiv:2303.17071, 2023.

MohammadReza Nazari, Afshin Oroojlooy, Lawrence Snyder, and Martin Takac. Reinforcement learning for solving the vehicle routing problem. In Advances in Neural Information Processing Systems, pp. 9861–9871, 2018.

Theo X Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, and Armando Solar-Lezama. Demystifying GPT self-repair for code generation. arXiv preprint arXiv:2306.09896, 2023.

Gurobi Optimization et al. Gurobi optimizer reference manual, 2020.

Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. GrIPS: Gradient-free, edit-based instruction search for prompting large language models. arXiv preprint arXiv:2203.07281, 2022.

Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. Automatic prompt optimization with "gradient descent" and beam search. arXiv preprint arXiv:2305.03495, 2023.

Ning Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145–151, 1999.

Guanghui Qin and Jason Eisner.
Learning how to ask: Querying LMs with mixtures of soft prompts. arXiv preprint arXiv:2104.06599, 2021.

Colin R Reeves. Modern Heuristic Techniques for Combinatorial Problems. John Wiley & Sons, Inc., 1993.

Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–7, 2021.
2309.03409#73 | Large Language Models as Optimizers | Learning how to ask: Querying lms with mixtures of soft prompts. arXiv preprint arXiv:2104.06599, 2021. Colin R Reeves. Modern heuristic techniques for combinatorial problems. John Wiley & Sons, Inc., 1993. Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1â 7, 2021. | 2309.03409#72 | 2309.03409#74 | 2309.03409 | [
"2205.12548"
] |