Dataset schema (one record per paper chunk; ranges are the reported minimum and maximum):
doi               string   lengths 10–10
chunk-id          int64    0–936
chunk             string   lengths 401–2.02k
id                string   lengths 12–14
title             string   lengths 8–162
summary           string   lengths 228–1.92k
source            string   lengths 31–31
authors           string   lengths 7–6.97k
categories        string   lengths 5–107
comment           string   lengths 4–398
journal_ref       string   lengths 8–194
primary_category  string   lengths 5–17
published         string   lengths 8–8
updated           string   lengths 8–8
references        list
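To make the schema concrete, here is a minimal sketch of reading and regrouping such records; it assumes the dump is stored as JSON Lines with the field names above, and the file name is a placeholder:

```python
import json
from collections import defaultdict

def load_chunks(path):
    """Yield one record per line from a JSON Lines dump using the schema above."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Group chunk texts by paper (doi) and reassemble them in chunk order.
# "arxiv_chunks.jsonl" is a hypothetical file name.
chunks_by_paper = defaultdict(list)
for rec in load_chunks("arxiv_chunks.jsonl"):
    chunks_by_paper[rec["doi"]].append((rec["chunk-id"], rec["chunk"]))

papers = {
    doi: " ".join(text for _, text in sorted(parts))
    for doi, parts in chunks_by_paper.items()
}
```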
2309.03409
65
5.5 COMPARISON WITH EVOPROMPT Some concurrent works on prompt optimization propose meta-prompts that explicitly ask the LLM to perform mutation and crossover of existing prompts (Fernando et al., 2023; Guo et al., 2023). In our evaluation, we compare our approach to the Genetic Algorithm (GA) and Differential Evolution (DE) versions of EvoPrompt (Guo et al., 2023). Specifically, given two prompts, the GA meta-prompt instructs the LLM to cross over the two prompts to generate a new one, then to mutate the newly generated prompt to produce the final prompt. DE extends the GA meta-prompt with more detailed instructions, e.g., asking the LLM to identify the differing parts of the two given prompts before performing the mutation. This contrasts with OPRO, which leverages the optimization trajectory containing multiple past prompts rather than only the two previous prompts. Meanwhile, OPRO also provides the LLM with richer information to facilitate its understanding of the optimization problem, including exemplars and the task accuracies of different prompts. Figure 12 presents the results on the GSM8K and BBH sports_understanding benchmarks, where we use gpt-3.5-turbo as the optimizer. On GSM8K, the initial instructions of all approaches are “Let’s
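To make the contrast concrete, here is a minimal sketch of the two meta-prompt styles described in the chunk above; the wording and formatting are illustrative placeholders, not the actual meta-prompts from Guo et al. (2023) or from OPRO:

```python
def evoprompt_ga_meta_prompt(prompt_1: str, prompt_2: str) -> str:
    """GA-style meta-prompt sketch: cross over two parent prompts, then mutate."""
    return (
        "Follow the steps to generate a better prompt.\n"
        f"Prompt 1: {prompt_1}\n"
        f"Prompt 2: {prompt_2}\n"
        "Step 1: Cross over Prompt 1 and Prompt 2 to create a new prompt.\n"
        "Step 2: Mutate the new prompt and output the final prompt."
    )

def opro_meta_prompt(trajectory, exemplars) -> str:
    """OPRO-style meta-prompt sketch: show multiple past instructions with their
    task accuracies plus a few task exemplars, then ask for a better instruction."""
    lines = ["Below are previous instructions with their training accuracies."]
    for instruction, accuracy in sorted(trajectory, key=lambda pair: pair[1]):
        lines.append(f"text: {instruction}")
        lines.append(f"score: {accuracy:.1f}")
    lines.append("Below are examples of the task:")
    lines.extend(exemplars)
    lines.append("Write a new instruction that achieves a higher score.")
    return "\n".join(lines)
```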
2309.03409#65
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
65
[6] Cheng Chen, Yichun Yin, Lifeng Shang, Xin Jiang, Yujia Qin, Fengyu Wang, Zhi Wang, Xiao Chen, Zhiyuan Liu, and Qun Liu. bert2bert: Towards reusable pretrained language models. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2134–2148. Association for Computational Linguistics, 2022. [7] Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595, 2023. [8] Tianqi Chen, Ian J. Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. In Yoshua Bengio and Yann LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
2309.03852#65
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
66
[Figure 12 panels: (a) GSM8K, PaLM 2-L scorer, A_begin; (b) BBH sports_understanding, text-bison scorer, Q_begin] Figure 12: Comparison with EvoPrompt in prompt optimization. We use the gpt-3.5-turbo optimizer for both experiments. “EvoPrompt (GA)” uses the meta-prompt from Guo et al. (2023), Figure 1; “EvoPrompt (DE)” uses the meta-prompt from Guo et al. (2023), Figure 2. All optimizations in (a) use the pre-trained PaLM 2-L scorer and start from two simple instructions “Let’s solve the problem.” and “Here is the answer.”; all optimizations in (b) use the text-bison scorer and start from two richer (task-specific) instructions “Solve the sports understanding problem.” and “Give me the answer to sports understanding.”. The dots are the average values across 3 optimization repetitions, and the shaded regions represent standard deviations. We use temperature 1.0 for OPRO and temperature 0.5 for EvoPrompt, same as the default settings in respective works.
2309.03409#66
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
66
[9] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. CoRR, abs/1803.05457, 2018. [10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics, 2019. [11] Interactive information extraction by semantic information graph. In Luc De Raedt, editor, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4100–4106. ijcai.org, 2022.
2309.03852#66
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
67
solve the problem.” and “Here is the answer.”, which are simple and generic. Again, we observe that OPRO performance steadily improves with more optimization steps. On the other hand, both versions of EvoPrompt even degrade performance on GSM8K. The main reason is that EvoPrompt does not utilize exemplars for prompt optimization, so it lacks an understanding of the task it is optimizing for. As a result, EvoPrompt relies on good-quality, task-specific initial prompts to optimize from. Given this observation, we provide more task-specific initial instructions for the experiments on BBH sports_understanding, namely “Solve the sports understanding problem.” and “Give me the answer to sports understanding.” In this case, EvoPrompt (DE) is able to find better prompts than the initial ones, but its optimization curve is less stable than OPRO’s. This indicates that leveraging the optimization trajectory helps the LLM identify promising directions for improving existing prompts. # 6 RELATED WORK
2309.03409#67
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
67
[12] Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, Andy Jones, Sam Bowman, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Danny Hernandez, Tristan Hume, Josh Jacobson, Scott Johnston, Shauna Kravec, Catherine Olsson, Sam Ringer, Eli Tran-Johnson, Dario Amodei, Tom Brown, Nicholas Joseph, Sam McCandlish, Chris Olah, Jared Kaplan, and Jack Clark. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. CoRR, abs/2209.07858, 2022.
2309.03852#67
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
68
Prompt optimization. Prior works have developed soft prompt-tuning methods that optimize the prompt represented as task-specific continuous vectors (Lester et al., 2021; Li & Liang, 2021; Liu et al., 2021; Qin & Eisner, 2021), as well as performing discrete prompt optimization by gradient-guided search (Shin et al., 2020; Wen et al., 2023; Gao et al., 2020; Chen et al., 2023d) and reinforcement learning (Deng et al., 2022; Zhang et al., 2023). These approaches become inapplicable when there is only API access to the LLM. Other works designed edit-based approaches for gradient-free prompt optimization (Xu et al., 2022; Prasad et al., 2022), where the editing can be done with human-defined operations (e.g., swapping two phrases) (Prasad et al., 2022) or language models (e.g., back translation) (Xu et al., 2022). Some recent works investigate LLMs for prompt optimization (Zhou et al., 2022b; Pryzant et al., 2023; Xu et al., 2023). Specifically, APE (Zhou et al., 2022b) first uses the
2309.03409#68
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
68
[13] Amelia Glaese, Nat McAleese, Maja Trebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin J. Chadwick, Phoebe Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, Po-Sen Huang, Ramona Comanescu, Fan Yang, Abigail See, Sumanth Dathathri, Rory Greig, Charlie Chen, Doug Fritz, Jaume Sanchez Elias, Richard Green, Sona Mokrá, Nicholas Fernando, Boxi Wu, Rachel Foley, Susannah Young, Iason Gabriel, William Isaac, John Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, and Geoffrey Irving. Improving alignment of dialogue agents via targeted human judgements. CoRR, abs/2209.14375, 2022. [14] Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, and Tieyan Liu. Efficient training of BERT by progressively stacking. In International Conference on Machine Learning, pages 2337–2346. PMLR, 2019.
2309.03852#68
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
69
al., 2022b; Pryzant et al., 2023; Xu et al., 2023). Specifically, APE (Zhou et al., 2022b) first uses the LLM to generate initial instructions. Afterwards, APE selects the top instructions with the highest accuracies, then prompts the LLM with each individual instruction to generate a semantically similar variant of the initial instruction. In each step, APO (Pryzant et al., 2023) instructs the LLM to produce text feedback on how to update an old instruction. Different from edit-based approaches, the optimizer
2309.03409#69
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
69
[15] Xiaotao Gu, Liyuan Liu, Hongkun Yu, Jing Li, Chen Chen, and Jiawei Han. On the transformer growth for progressive BERT training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5174–5180, 2021. [16] Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. The false promise of imitating proprietary LLMs. CoRR, abs/2305.15717, 2023. [17] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
2309.03852#69
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
70
LLM in our work directly generates new instructions at each optimization step, and the optimizer LLM is merely asked to improve the task accuracy without being required to imitate past instructions. Compared to Zhou et al. (2022b) and Pryzant et al. (2023), our optimization process incorporates the past generated instructions with their scores in the meta-prompt, enabling the optimizer LLM to discover common patterns of high-quality instructions.
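The mechanism described here, keeping past instructions and their scores in the meta-prompt, can be sketched as a simple loop. The function names and the trajectory-truncation choice below are illustrative assumptions, not the paper's exact implementation:

```python
def opro_step(optimizer_llm, score_instruction, trajectory, build_meta_prompt,
              num_candidates=8, max_kept=20):
    """One OPRO-style optimization step (sketch).

    optimizer_llm:     callable, meta-prompt text -> a new candidate instruction
    score_instruction: callable, instruction -> training accuracy on the task
    trajectory:        list of (instruction, accuracy) pairs from past steps
    build_meta_prompt: callable rendering the trajectory (and exemplars) into text
    """
    meta_prompt = build_meta_prompt(trajectory)
    candidates = [optimizer_llm(meta_prompt) for _ in range(num_candidates)]
    for instruction in candidates:
        trajectory.append((instruction, score_instruction(instruction)))
    # Keep only the highest-scoring instructions so the meta-prompt stays short.
    trajectory.sort(key=lambda pair: pair[1])
    return trajectory[-max_kept:]
```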
2309.03409#70
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
70
[18] Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, and Sam McCandlish. Scaling laws for autoregressive generative modeling. CoRR, abs/2010.14701, 2020. [19] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Oriol Vinyals, Jack W. Rae, and Laurent Sifre. An empirical analysis of compute-optimal large language model training. In NeurIPS, 2022.
2309.03852#70
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
71
Prompting with natural language feedback. A recent line of work investigates approaches to improve the LLM performance by prompting with natural language feedback to revise the model output, which has shown effectiveness in reducing harmful LLM outputs (Bai et al., 2022; Ganguli et al., 2023), improving reasoning (Shinn et al., 2023; Madaan et al., 2023) and code generation performance (Chen et al., 2023e; Olausson et al., 2023; Shinn et al., 2023; Chen et al., 2023b), dialogue applications (Nair et al., 2023; Madaan et al., 2023; Yuan et al., 2023), and so on (Kim et al., 2023; Wang et al., 2023). Specifically, Yuan et al. (2023) develops a human-in-the-loop framework for deriving system-level feedback from a collection of instance-level feedback, which is then used for refining data. In our work, the optimizer LLM utilizes the optimization trajectory in the prompt, which implicitly requires the LLM to summarize the common characteristics among solutions with similar scores. We consider incorporating explicit natural language feedback on generated solutions for later optimization steps as future work.
2309.03409#71
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
71
[20] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. CoRR, abs/2305.08322, 2023. [21] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. Spanbert: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguistics, 8:64–77, 2020.
2309.03852#71
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
72
Tuning language models for optimization. Some previous works tune or prompt language models to behave as mutation and crossover operators in evolutionary algorithms. Meyerson et al. (2023) utilizes language models with few-shot exemplars to propose evolutionary cross-overs on tasks such as image and code generation. In Lehman et al. (2022), the large language model trained on code diff generation is used as the mutation operator, and they further design a fine-tuning method to improve performance in the Sodarace domain for robot simulation. EvoPrompting (Chen et al., 2023a) uses large language models to evolve neural network architectures, where they combine evolutionary search with soft prompt tuning. With respect to taking the trajectory as the input for optimization, OptFormer (Chen et al., 2022) trains a transformer model on large collections of hyperparameter optimization data. On the other hand, our work performs optimization solely by prompting without additional training. # 7 CONCLUSION
2309.03409#72
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
72
[22] Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. Language models (mostly) know what they know. CoRR, abs/2207.05221, 2022. [23] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. CoRR, abs/2001.08361, 2020.
2309.03852#72
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
73
# 7 CONCLUSION We embark on employing LLMs as optimizers, where the LLM progressively generates new solutions to optimize an objective function. We first motivate OPRO with linear regression and traveling salesman problems, then proceed to prompt optimization as a concrete application. Our evaluation demonstrates that LLMs have the capacity to gradually improve the generated solutions based on the past optimization trajectory. Interestingly, on small-scale traveling salesman problems, OPRO performs on par with some hand-crafted heuristic algorithms. For prompt optimization, optimized prompts outperform human-designed prompts on GSM8K and Big-Bench Hard by a significant margin, sometimes by over 50%.
2309.03409#73
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
73
[24] Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. Reducing activation recomputation in large transformer models, 2022. [25] Xiang Li, Xin Jiang, Xuying Meng, Aixin Sun, and Yequan Wang. FreeLM: Fine-tuning-free language model. CoRR, abs/2305.01616, 2023. [26] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. CoRR, abs/2305.20050, 2023.
2309.03852#73
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
74
A number of unresolved questions are open for future research on LLMs for optimization. In general, how to reduce the sensitivity to initialization and better balance exploitation with exploration remains a challenge. Specifically, for prompt optimization, one limitation of our current implementation is that the optimizer LLM does not effectively utilize error cases in the training set to infer promising directions for improving the generated instructions. In our experiments, we tried including error cases in the meta-prompt rather than randomly sampling from the training set at each optimization step, but the results are similar, indicating that the error cases alone are not informative enough for the optimizer LLM to grasp the cause of the wrong predictions. Another limitation is that prompt optimization requires a training set to compute the accuracy that guides the optimization process. Currently, the training set contains at least tens of samples so that the optimized prompt does not severely overfit to the training samples. A promising direction is to incorporate richer feedback about the error cases besides the aggregated accuracy, and to summarize the key features that distinguish high-quality from low-quality generated prompts in the optimization trajectory. Such information may inform the optimizer LLM how to more efficiently improve over the past generated instructions, and potentially further reduce the example set size needed for prompt optimization. # ACKNOWLEDGMENTS
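The exemplar-selection comparison mentioned in the chunk above (random training samples versus current error cases in the meta-prompt) can be sketched as follows; the function and variable names are illustrative assumptions, not the paper's code:

```python
import random

def select_exemplars(train_set, predictions, strategy="random", k=3):
    """Sketch of the two exemplar-selection strategies discussed above.

    train_set:   list of (question, answer) pairs
    predictions: dict mapping a question to the model's current answer
    strategy:    "random" (default) or "errors" (pick currently misclassified cases)
    """
    if strategy == "errors":
        pool = [(q, a) for q, a in train_set if predictions.get(q) != a]
        pool = pool or train_set  # fall back to the full set if there are no errors
    else:
        pool = train_set
    return random.sample(pool, min(k, len(pool)))
```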
2309.03409#74
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
74
[27] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 3214–3252. Association for Computational Linguistics, 2022. [28] Etai Littwin and Greg Yang. Adaptive optimization in the ∞-width limit. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [29] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692, 2019.
2309.03852#74
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
75
# ACKNOWLEDGMENTS We thank Daiyi Peng, Jerry Wei, Shuo Chen, Tim Rocktäschel, Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, and Simon Osindero for their valuable feedback, and thank several anonymous reviewers for helpful comments. # REFERENCES Shun-ichi Amari. Backpropagation and stochastic gradient descent method. Neurocomputing, 5(4-5): 185–196, 1993. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023. David Applegate, Robert Bixby, Vasek Chvatal, and William Cook. Concorde tsp solver, 2006. Thomas Bäck and Hans-Paul Schwefel. An overview of evolutionary algorithms for parameter optimization. Evolutionary computation, 1(1):1–23, 1993.
2309.03409#75
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
75
[30] Yiyi Liu, Yequan Wang, Aixin Sun, Xuying Meng, Jing Li, and Jiafeng Guo. A dual-channel framework for sarcasm recognition by detecting sentiment conflict. In Marine Carpuat, Marie-Catherine de Marneffe, and Iván Vladimir Meza Ruíz, editors, Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 1670–1680. Association for Computational Linguistics, 2022. [31] Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. CoRR, abs/1711.05101, 2017. [32] Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pages 142–150, 2011. [33] Xuying Meng, Chungang Lin, Yequan Wang, and Yujun Zhang. Netgpt: Generative pretrained transformer for network traffic. CoRR, abs/2304.09513, 2023.
2309.03852#75
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
76
Thomas Bäck and Hans-Paul Schwefel. An overview of evolutionary algorithms for parameter optimization. Evolutionary computation, 1(1):1–23, 1993. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022. Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint arXiv:2305.17126, 2023. Angelica Chen, David M Dohan, and David R So. Evoprompting: Language models for code-level neural architecture search. arXiv preprint arXiv:2302.14838, 2023a. Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R Bowman, Kyunghyun Cho, and Ethan Perez. Improving code generation by training with natural language feedback. arXiv preprint arXiv:2303.16749, 2023b.
2309.03409#76
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
76
[34] Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Hassan Awadallah. Orca: Progressive learning from complex explanation traces of GPT-4. CoRR, abs/2306.02707, 2023. [35] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, and Matei Zaharia. Efficient large-scale language model training on GPU clusters. CoRR, abs/2104.04473, 2021. [36] OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023.
2309.03852#76
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
77
Jiuhai Chen, Lichang Chen, Heng Huang, and Tianyi Zhou. When do you need chain-of-thought prompting for chatgpt? arXiv preprint arXiv:2304.03262, 2023c. Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, and Tianyi Zhou. Instructzero: Efficient instruction optimization for black-box large language models. arXiv preprint arXiv:2306.03082, 2023d. Xinyun Chen and Yuandong Tian. Learning to perform local rewriting for combinatorial optimization. Advances in Neural Information Processing Systems, 32, 2019. Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023e. Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Richard Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc’aurelio Ranzato, et al. Towards learning universal hyperparameter optimizers with transformers. Advances in Neural Information Process- ing Systems, 35:32053–32068, 2022.
2309.03409#77
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
77
[36] OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. [37] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In NeurIPS, 2022. [38] Yanmin Qian, Chao Weng, Xuankai Chang, Shuai Wang, and Dong Yu. Past review, current progress, and challenges ahead on the cocktail party problem. Frontiers Inf. Technol. Electron. Eng., 19(1):40–63, 2018. [39] Yujia Qin, Jiajie Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. ELLE: Efficient lifelong pre-training for emerging data. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2789–2810, 2022.
2309.03852#77
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
78
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P Xing, and Zhiting Hu. Rlprompt: Optimizing discrete text prompts with reinforcement learning. arXiv preprint arXiv:2205.12548, 2022. Michel Deudon, Pierre Cournut, Alexandre Lacoste, Yossiri Adulyasak, and Louis-Martin Rousseau. Learning heuristics for the tsp by policy gradient. In International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pp. 170–181. Springer, 2018. Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv preprint arXiv:2309.16797, 2023.
2309.03409#78
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
79
Deep Ganguli, Amanda Askell, Nicholas Schiefer, Thomas Liao, Kamilė Lukošiūtė, Anna Chen, Anna Goldie, Azalia Mirhoseini, Catherine Olsson, Danny Hernandez, et al. The capacity for moral self-correction in large language models. arXiv preprint arXiv:2302.07459, 2023. Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723, 2020. Bruce Golden, Lawrence Bodin, T Doyle, and W Stewart Jr. Approximate traveling salesman algorithms. Operations research, 28(3-part-ii):694–711, 1980. Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. arXiv preprint arXiv:2309.08532, 2023. Gregory Gutin and Abraham P Punnen. The traveling salesman problem and its variations, volume 12. Springer Science & Business Media, 2006.
2309.03409#79
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
79
[42] Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, H. Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz,
2309.03852#79
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
80
Gregory Gutin and Abraham P Punnen. The traveling salesman problem and its variations, volume 12. Springer Science & Business Media, 2006. Keld Helsgaun. An extension of the lin-kernighan-helsgaun tsp solver for constrained traveling salesman and vehicle routing problems. Roskilde: Roskilde University, 12, 2017. Michael Jünger, Gerhard Reinelt, and Giovanni Rinaldi. The traveling salesman problem. Handbooks in operations research and management science, 7:225–330, 1995. Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022. Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ByxBFsRqYm.
2309.03409#80
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
81
Joel Lehman, Jonathan Gordon, Shawn Jain, Kamal Ndousse, Cathy Yeh, and Kenneth O Stanley. Evolution through large models. arXiv preprint arXiv:2206.08896, 2022. Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021. Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale genera- tion: Learning to solve and explain algebraic word problems. arXiv preprint arXiv:1705.04146, 2017. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. Gpt understands, too. arXiv preprint arXiv:2103.10385, 2021.
2309.03409#81
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
81
[43] Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. CoRR, abs/2305.18290, 2023. [44] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1–140:67, 2020. [45] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1–140:67, 2020. [46] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimization towards training A trillion parameter models. CoRR, abs/1910.02054, 2019.
2309.03852#81
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
82
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786, 2021. Xiao Ma, Swaroop Mishra, Ahmad Beirami, Alex Beutel, and Jilin Chen. Let’s do a thought experiment: Using counterfactuals to improve moral reasoning. arXiv preprint arXiv:2306.14308, 2023. Aman Madaan and Amir Yazdanbakhsh. Text and patterns: For effective chain of thought, it takes two to tango. arXiv preprint arXiv:2209.07686, 2022. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.
2309.03409#82
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
82
[47] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: memory optimizations toward training trillion parameter models. In Christine Cuicchi, Irene Qualters, and William T. Kramer, editors, Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2020, Virtual Event / Atlanta, Georgia, USA, November 9-19, 2020, page 20. IEEE/ACM, 2020. [48] Xiaozhe Ren, Pingyi Zhou, Xinfan Meng, Xinjing Huang, Yadao Wang, Weichao Wang, Pengfei Li, Xiaoda Zhang, Alexander Podolskiy, Grigory Arshinov, Andrey Bout, Irina Piontkovskaya, Jiansheng Wei, Xin Jiang, Teng Su, Qun Liu, and Jun Yao. Pangu-Σ: Towards trillion parameter language model with sparse heterogeneous computing. CoRR, abs/2303.10845, 2023.
2309.03852#82
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
83
Elliot Meyerson, Mark J Nelson, Herbie Bradley, Arash Moradi, Amy K Hoover, and Joel Lehman. Language model crossover: Variation through few-shot prompting. arXiv preprint arXiv:2302.12170, 2023. Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, and Andy Zeng. Large language models as general pattern machines. arXiv preprint arXiv:2307.04721, 2023. Varun Nair, Elliot Schumacher, Geoffrey Tso, and Anitha Kannan. Dera: Enhancing large language model completions with dialog-enabled resolving agents. arXiv preprint arXiv:2303.17071, 2023. MohammadReza Nazari, Afshin Oroojlooy, Lawrence Snyder, and Martin Takac. Reinforcement learning for solving the vehicle routing problem. In Advances in Neural Information Processing Systems, pp. 9861–9871, 2018.
2309.03409#83
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
83
[49] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100, 2022.
2309.03852#83
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
84
Theo X Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, and Armando Solar-Lezama. Demystifying gpt self-repair for code generation. arXiv preprint arXiv:2306.09896, 2023. Gurobi Optimization et al. Gurobi optimizer reference manual, 2020. Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. Grips: Gradient-free, edit-based instruction search for prompting large language models. arXiv preprint arXiv:2203.07281, 2022. Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. Automatic prompt optimization with "gradient descent" and beam search. arXiv preprint arXiv:2305.03495, 2023. Ning Qian. On the momentum term in gradient descent learning algorithms. Neural networks, 12(1): 145–151, 1999. Guanghui Qin and Jason Eisner. Learning how to ask: Querying lms with mixtures of soft prompts. arXiv preprint arXiv:2104.06599, 2021. Colin R Reeves. Modern heuristic techniques for combinatorial problems. John Wiley & Sons, Inc., 1993.
2309.03409#84
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
84
[50] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. [51] Sheng Shen, Pete Walsh, Kurt Keutzer, Jesse Dodge, Matthew E. Peters, and Iz Beltagy. Staged training for transformer language models. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 19893–19908. PMLR, 2022. [52] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism. CoRR, abs/1909.08053, 2019.
2309.03852#84
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
85
Colin R Reeves. Modern heuristic techniques for combinatorial problems. John Wiley & Sons, Inc., 1993. Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–7, 2021. Luis Miguel Rios and Nikolaos V Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56:1247–1293, 2013. Daniel J Rosenkrantz, Richard E Stearns, and Philip M Lewis, II. An analysis of several heuristics for the traveling salesman problem. SIAM journal on computing, 6(3):563–581, 1977. Subhro Roy and Dan Roth. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413, 2016. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
2309.03409#85
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
85
[53] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023. [54] Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. CoRR, abs/2104.09864, 2021. [55] Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. ERNIE: enhanced representation through knowledge integration. CoRR, abs/1904.09223, 2019.
2309.03852#85
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
86
Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980, 2020. Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022.
2309.03409#86
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
86
[56] Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. A length-extrapolatable transformer. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 14590–14604. Association for Computational Linguistics, 2023. [57] InternLM Team. InternLM: a multilingual language model with progressively enhanced capabilities, 2023. https://github.com/InternLM/InternLM-techreport/blob/main/InternLM.pdf. [58] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023.
2309.03852#86
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
87
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
2309.03409#87
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
87
[59] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan
2309.03852#87
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
88
Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, et al. Larger language models do in-context learning differently. arXiv preprint arXiv:2303.03846, 2023. Yuxin Wen, Neel Jain, John Kirchenbauer, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Hard prompts made easy: Gradient-based discrete optimization for prompt tuning and discovery. arXiv preprint arXiv:2302.03668, 2023. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023. Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, and Zhilin Yang. Gps: Genetic prompt search for efficient few-shot learning. arXiv preprint arXiv:2210.17041, 2022.
2309.03409#88
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
88
Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023.
2309.03852#88
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
89
Weizhe Yuan, Kyunghyun Cho, and Jason Weston. System-level natural language feedback. arXiv preprint arXiv:2306.13588, 2023. Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, and Joseph E Gonzalez. Tempera: Test-time prompt editing via reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pp. 12697–12706. PMLR, 2021. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022a. Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910, 2022b.
2309.03409#89
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
89
[60] Leslie G. Valiant. A bridging model for parallel computation. Commun. ACM, 33(8):103–111, aug 1990. [61] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3261–3275, 2019. [62] Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, Philip Greengard, Leonid Karlinsky, Rogerio Feris, David Daniel Cox, Zhangyang Wang, and Yoon Kim. Learning to grow pretrained models for efficient transformer training. In The Eleventh International Conference on Learning Representations.
2309.03852#89
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
90
A SOME FAILURE CASES Although LLMs show the power of optimizing basic math problems (Section 3) and prompts (Section 4), we see some limitations across all optimizer LLMs that may impede their ability to solve more challenging problems. These limitations include: • Hallucinating the values that need to come from math calculation: The optimizer LLMs often output contents like “the function value at (5, 3) is 15” even though the true value is not 15. The model gets it right if external tools that can reliably calculate the value are triggered. When and how to trigger such tool use remains an interesting topic (see e.g., (Schick et al., 2023; Cai et al., 2023)).
2309.03409#90
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
90
[63] Shuohuan Wang, Yu Sun, Yang Xiang, Zhihua Wu, Siyu Ding, Weibao Gong, Shikun Feng, Junyuan Shang, Yanbin Zhao, Chao Pang, Jiaxiang Liu, Xuyi Chen, Yuxiang Lu, Weixin Liu, Xi Wang, Yangfan Bai, Qiuliang Chen, Li Zhao, Shiyong Li, Peng Sun, Dianhai Yu, Yanjun Ma, Hao Tian, Hua Wu, Tian Wu, Wei Zeng, Ge Li, Wen Gao, and Haifeng Wang. ERNIE 3.0 titan: Exploring larger-scale knowledge enhanced pre-training for language understanding and generation. CoRR, abs/2112.12731, 2021.
2309.03852#90
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
91
• Generating solutions that have already appeared in context, even when we tell the model to "Give me a new (w, b) pair that is different from all pairs above": the optimizer LLMs do not reliably follow this instruction in all cases, even when their own outputs include sentences like "I will provide a new pair that is different", making the output self-contradictory. The output is almost guaranteed to differ from the old in-context solutions when the model output contains an explicit comparison of the new pair against all old pairs, though. Thus (implicitly) triggering such comparison behaviors may be a solution. How to implement this feature without harming the instruction-following performance elsewhere remains an interesting topic to study.
2309.03409#91
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
91
[64] Yequan Wang, Xiang Li, Aixin Sun, Xuying Meng, Huaming Liao, and Jiafeng Guo. Cofenet: Context and former-label enhanced net for complicated quotation extraction. In Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, and Seung-Hoon Na, editors, Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 2438–2449. International Committee on Computational Linguistics, 2022.
2309.03852#91
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
92
• In black-box math optimization, getting stuck at a point that is neither a global nor a local optimum: This often occurs in two linear regression cases: (a) the in-context exemplars all share the same w or b that differs from w_true or b_true; this case is more likely to be avoided when a larger number of past solutions are included in the meta-prompt; (b) one or several of the best previous solutions in the meta-prompt have w and b values in quantitatively opposite directions from the global optimum (w_true, b_true): for example, the w values are all smaller than w_true while the b values are all larger than b_true. Since the optimizer model often proposes to only increase w or decrease b when the past solutions in the meta-prompt share w or b, the optimization will get stuck if either increasing w or decreasing b would increase the objective value. This issue is mitigated by sampling multiple new solutions (thus more exploration) at each step.
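To make this failure mode concrete, here is a minimal sketch of the kind of black-box linear regression objective being optimized, assuming a least-squares loss over synthetic one-dimensional data; the variable names, the data generation, and the loss choice are illustrative assumptions, not the paper's released code.

```python
import random

# Assumed setup: synthetic data from y = w_true * x + b_true + noise.
w_true, b_true = 15, 14  # illustrative ground-truth values
random.seed(0)
data = [(x, w_true * x + b_true + random.gauss(0, 1)) for x in range(50)]

def objective(w: float, b: float) -> float:
    """Sum of squared errors for a candidate (w, b); lower is better.
    The optimizer LLM only ever sees (w, b, value) triples, never this function."""
    return sum((w * x + b - y) ** 2 for x, y in data)

# If every past solution shares b=15, proposals that only move w can stall
# even though jointly adjusting w and b would keep improving the objective.
print(round(objective(18, 15)), round(objective(w_true, b_true)))
```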
2309.03409#92
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
92
[65] Yequan Wang, Hengran Zhang, Aixin Sun, and Xuying Meng. CORT: A new baseline for comparative opinion classification by dual prompts. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 7064–7075. Association for Computational Linguistics, 2022. [66] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 13484–13508. Association for Computational Linguistics, 2023. [67] C Edward Watkins, Vicki L Campbell, Ron Nieberding, and Rebecca Hallmark. Contemporary practice of psychological assessment by clinical psychologists. Professional Psychology: Research and Practice, 26(1):54, 1995.
2309.03852#92
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
93
• Hard to navigate a bumpy loss landscape: Like other optimizers, the optimizer LLM has more difficulty optimizing black-box functions when the loss landscape gets more complicated. For example, when minimizing the Rosenbrock function f(x, y) = (a − x)^2 + b(y − x^2)^2 with a = 20 (whose global optimum is at x = 20, y = 400) from 5 starting points in [10, 20] × [10, 20], the optimization often gets stuck around (0, 0). This is because the optimizer LLM sees a decrease of the objective value when it drastically decreases both x and y to 0. Then, starting from (0, 0), it is hard for the optimizer LLM to further navigate x and y along the narrow valley in the loss landscape towards (20, 400) (Figure 13). Figure 13: A visualization of the landscape of the Rosenbrock function f(x, y) = (a − x)^2 + b(y − x^2)^2 with a = 20 and b = 1. The global optimum is at x = 20, y = 400 with function value 0. The function value at x = 0, y = 0 is 400. The landscape has a narrow valley between (0, 0) and (20, 400).
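The Rosenbrock example above is easy to reproduce; the following sketch simply evaluates the function quoted in the text and confirms the two landmark values mentioned in the Figure 13 caption (plain Python, for illustration only).

```python
def rosenbrock(x: float, y: float, a: float = 20, b: float = 1) -> float:
    """f(x, y) = (a - x)^2 + b * (y - x^2)^2, minimized at x = a, y = a^2."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

print(rosenbrock(20, 400))  # 0.0 at the global optimum (20, 400)
print(rosenbrock(0, 0))     # 400.0 at the point where the optimization tends to get stuck
```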
2309.03409#93
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
93
[68] Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [69] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022, 2022. [70] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS, 2022.
2309.03852#93
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
94
B PROMPTING FORMATS FOR SCORER LLM Figures 14, 15, and 16 show examples of the Q_begin, Q_end, and A_begin prompting formats when the “QA” pattern is present. The “QA” pattern is eliminated when prompting instruction-tuned scorer models like text-bison with the Q_begin and Q_end formats (Figures 17 and 18). Q: {instruction} Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers’ market? A: Figure 14: The Q_begin prompting format on a GSM8K test exemplar with the "QA" pattern. Q: Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers’ market? {instruction} A: Figure 15: The Q_end prompting format on a GSM8K test exemplar with the "QA" pattern.
2309.03409#94
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
94
[71] Jerry W. Wei, Le Hou, Andrew K. Lampinen, Xiangning Chen, Da Huang, Yi Tay, Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, and Quoc V. Le. Symbol tuning improves in-context learning in language models. CoRR, abs/2305.08298, 2023. [72] Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart Van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015. [73] Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. CoRR, abs/2304.12244, 2023.
2309.03852#94
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
95
A: Figure 15: The Q_end prompting format on a GSM8K test exemplar with the "QA" pattern. Q: Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers’ market? A: {instruction} Figure 16: The A_begin prompting format on a GSM8K test exemplar. {instruction} Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers’ market? Figure 17: The Q_begin prompting format on a GSM8K test exemplar without the "QA" pattern. Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers’ market? {instruction}
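Putting the figure descriptions together, here is a hedged sketch of how the three scorer prompting formats could be assembled programmatically; the function name, argument names, and exact whitespace are assumptions made for illustration rather than the paper's released implementation.

```python
def format_scorer_prompt(instruction: str, question: str, style: str, qa_pattern: bool = True) -> str:
    """Place the instruction at the start of Q (Q_begin), the end of Q (Q_end),
    or the start of A (A_begin), optionally wrapping the input in the "Q: ... A:" pattern."""
    if style == "A_begin":
        return f"Q: {question}\n\nA: {instruction}"
    body = f"{instruction}\n{question}" if style == "Q_begin" else f"{question}\n{instruction}"
    return f"Q: {body}\n\nA:" if qa_pattern else body

question = ("Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and "
            "bakes muffins for her friends every day with four. She sells the remainder at the "
            "farmers' market daily for $2 per fresh duck egg. How much in dollars does she make "
            "every day at the farmers' market?")
print(format_scorer_prompt("Let's think step by step.", question, style="Q_end", qa_pattern=False))
```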
2309.03409#95
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
95
[74] Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. CLUE: A chinese language understanding evaluation benchmark. In Donia Scott, Núria Bel, and Chengqing Zong, editors, Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 4762–4772. International Committee on Computational Linguistics, 2020.
2309.03852#95
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
96
Figure 18: The Q_end prompting format on a GSM8K test exemplar without the "QA" pattern. # C META-PROMPTS C.1 META-PROMPT FOR MATH OPTIMIZATION Now you will help me minimize a function with two input variables w, b. I have some (w, b) pairs and the function values at those points. The pairs are arranged in descending order based on their function values, where lower values are better. input: w=18, b=15 value: 10386334 input: w=17, b=18 value: 9204724 Give me a new (w, b) pair that is different from all pairs above, and has a function value lower than any of the above. Do not write code. The output must end with a pair [w, b], where w and b are numerical values. Figure 19: An example of the meta-prompt for linear regression. The blue text contains solution-score pairs; the orange text are meta-instructions.
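As a sketch of how a history like the one above could be serialized programmatically, the snippet below builds a meta-prompt of the same shape from (w, b, value) triples. The least-squares objective and the exact wrapper function are illustrative assumptions, not the paper's exact implementation; the prompt wording mirrors the example shown in Figure 19.

```python
# Illustrative sketch: building a linear-regression meta-prompt like the one in
# Figure 19 from a history of (w, b, value) triples. The least-squares objective
# and the helper names are assumptions for illustration, not the released code.

def squared_error(w: float, b: float, xs, ys) -> float:
    # Objective to minimize: sum of squared residuals of y = w * x + b.
    return sum((y - (w * x + b)) ** 2 for x, y in zip(xs, ys))

def build_regression_meta_prompt(history):
    # Descending order by value, so the best (lowest-value) pairs appear last.
    ordered = sorted(history, key=lambda t: t[2], reverse=True)
    lines = [
        "Now you will help me minimize a function with two input variables w, b. "
        "I have some (w, b) pairs and the function values at those points. "
        "The pairs are arranged in descending order based on their function values, "
        "where lower values are better.",
        "",
    ]
    for w, b, value in ordered:
        lines.append(f"input:\nw={w}, b={b}\nvalue:\n{value}\n")
    lines.append(
        "Give me a new (w, b) pair that is different from all pairs above, and has "
        "a function value lower than any of the above. Do not write code. The output "
        "must end with a pair [w, b], where w and b are numerical values."
    )
    return "\n".join(lines)

print(build_regression_meta_prompt([(18, 15, 10386334), (17, 18, 9204724)]))
```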
2309.03409#96
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
96
[75] Greg Yang and Edward J. Hu. Tensor programs IV: feature learning in infinite-width neural networks. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 11727–11737. PMLR, 2021. [76] Greg Yang, Edward J. Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, and Jianfeng Gao. Tuning large neural networks via zero-shot hyperparameter transfer. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 17084–17097, 2021. [77] Yiqun Yao and Yequan Wang. Research without re-search: Maximal update parametrization yields accurate loss prediction across scales. CoRR, abs/2304.06875, 2023.
2309.03852#96
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
97
Figure 19: An example of the meta-prompt for linear regression. The blue text contains solution-score pairs; the orange text are meta-instructions. You are given a list of points with coordinates below: (0): (-4, 5), (1): (17, 76), (2): (-9, 0), (3): (-31, -86), (4): (53, -35), (5): (26, 91), (6): (65, -33), (7): (26, 86), (8): (-13, -70), (9): (13, 79), (10): (-73, -86), (11): (-45, 93), (12): (74, 24), (13): (67, -42), (14): (87, 51), (15): (83, 94), (16): (-7, 52), (17): (-89, 47), (18): (0, -38), (19): (61, 58). Below are some previous traces and their lengths. The traces are arranged in descending order based on their lengths, where lower values are better. <trace> 0,13,3,16,19,2,17,5,4,7,18,8,1,9,6,14,11,15,10,12 </trace> length: 2254
2309.03409#97
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
97
[78] Yiqun Yao, Zheng Zhang, Jing Li, and Yequan Wang. 2x faster language model pre-training via masked structural growth. CoRR, abs/2305.02869, 2023. [79] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In Anna Korhonen, David R. Traum, and Lluís Màrquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics, 2019.
2309.03852#97
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
98
<trace> 0,18,4,11,9,7,14,17,12,15,10,5,19,3,13,16,1,6,8,2 </trace> length: 2017 <trace> 0,11,4,13,6,10,8,17,12,15,3,5,19,2,1,18,14,7,16,9 </trace> length: 1953 <trace> 0,10,4,18,6,8,7,16,14,11,2,15,9,1,5,19,13,12,17,3 </trace> length: 1840 Give me a new trace that is different from all traces above, and has a length lower than any of the above. The trace should traverse all points exactly once. The trace should start with <trace> and end with </trace>. Figure 20: An example of the meta-prompt for Traveling Salesman Problems with problem size n = 20. The blue text contains solution-score pairs; the orange text are meta-instructions. C.2 META-PROMPT FOR PROMPT OPTIMIZATION
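A candidate trace returned by the optimizer LLM has to be parsed and scored before it can be added back into the meta-prompt. The sketch below is one plausible way to do that: Euclidean distances are used, and treating the tour as closed (returning to the starting point) is an assumption of this sketch rather than something the excerpt above states.

```python
# Illustrative sketch: parsing and scoring a candidate trace for the TSP
# meta-prompt above. Euclidean distance is used; treating the tour as closed
# (returning to the starting point) is an assumption of this sketch.
import math
import re

def trace_length(trace_str: str, coords: dict, closed: bool = True) -> float:
    # Extract the visiting order from a string like "<trace> 0,18,4 </trace>".
    order = [int(i) for i in
             re.search(r"<trace>(.*?)</trace>", trace_str, re.S).group(1).split(",")]
    pts = [coords[i] for i in order]
    if closed:
        pts.append(pts[0])  # return to the starting point
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# First five points from the example problem above, for a quick check.
coords = {0: (-4, 5), 1: (17, 76), 2: (-9, 0), 3: (-31, -86), 4: (53, -35)}
print(trace_length("<trace> 0,1,2,3,4 </trace>", coords))
```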
2309.03409#98
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
98
[80] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Zhiyuan Liu, Peng Zhang, Yuxiao Dong, and Jie Tang. GLM-130B: an open bilingual pre-trained model. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [81] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona T. Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. OPT: open pre-trained transformer language models. CoRR, abs/2205.01068, 2022.
2309.03852#98
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
99
C.2 META-PROMPT FOR PROMPT OPTIMIZATION Different optimizer models work the best on different styles of meta-prompts. Figure 3 in the main paper shows the meta-prompt for PaLM 2-L-IT; Figure 21 shows that for pre-trained PaLM 2-L; Figure 22 shows that for GPT models. Create a piece of text at the beginning of the answer to enhance the precision in solving diverse grade school math problems. Precision: 4 <TEXT>A dime</TEXT> Precision: 17 <TEXT>The answer is a function. It is</TEXT> Precision: 19 <TEXT>So how can we find out what this equation means?</TEXT> Precision: 20 <TEXT>Solutions:</TEXT> Figure 21: An example of the meta-prompt for prompt optimization with pre-trained PaLM 2-L on GSM8K, where the generated instruction will be prepended to the beginning of the scorer LLM output (A_begin in Section 4.1). Your task is to generate the instruction <INS>. Below are some previous instructions with their scores. The score ranges from 0 to 100. text: Let’s figure it out! score: 61 text: Let’s solve the problem. score: 63 (. . . more instructions and scores . . . ) Below are some problems.
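The snippet below sketches how instruction-score pairs and task exemplars could be assembled into a meta-prompt of the shape shown in Figures 21-22. The cap of 20 retained instructions, the helper name, and the exact phrasing are assumptions made for illustration; they are not claimed to match the released implementation.

```python
# Illustrative sketch: assembling the instruction-score part of a prompt-
# optimization meta-prompt in the style of Figure 22. The cap of 20 retained
# instructions and the exact wording are assumptions, not the released code.

def build_opt_meta_prompt(scored_instructions, exemplars, max_pairs=20):
    # Keep the highest-scoring instructions, listed in ascending score order
    # so the best ones appear closest to the end of the prompt.
    top = sorted(scored_instructions, key=lambda p: p[1])[-max_pairs:]
    parts = ["Your task is to generate the instruction <INS>. Below are some previous "
             "instructions with their scores. The score ranges from 0 to 100.\n"]
    for text, score in top:
        parts.append(f"text:\n{text}\nscore:\n{score}\n")
    parts.append("Below are some problems.\n")
    for question, answer in exemplars:
        parts.append(f"Problem:\nQ: {question}\nA: <INS>\nGround truth answer: {answer}\n")
    parts.append("Generate an instruction that is different from all the instructions <INS> "
                 "above, and has a higher score than all the instructions <INS> above. "
                 "The instruction should begin with <INS> and end with </INS>.")
    return "\n".join(parts)
```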
2309.03409#99
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
99
[82] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. A survey of large language models. CoRR, abs/2303.18223, 2023. [83] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena. CoRR, abs/2306.05685, 2023.
2309.03852#99
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
100
text: Let’s solve the problem. score: 63 (. . . more instructions and scores . . . ) Below are some problems. Problem: Q: Alannah, Beatrix, and Queen are preparing for the new school year and have been given books by their parents. Alannah has 20 more books than Beatrix. Queen has 1/5 times more books than Alannah. If Beatrix has 30 books, how many books do the three have together? A: <INS> # Ground truth answer: 140 (. . . more exemplars . . . ) Generate an instruction that is different from all the instructions <INS> above, and has a higher score than all the instructions <INS> above. The instruction should begin with <INS> and end with </INS>. The instruction should be concise, effective, and generally applicable to all problems above. Figure 22: An example of the meta-prompt for prompt optimization with GPT models (gpt-3.5-turbo or gpt-4) on GSM8K, where the generated instruction will be prepended to the beginning of the scorer LLM output (A_begin in Section 4.1). The blue text contains solution-score pairs; the purple text describes the optimization task and output format; the orange text are meta-instructions.
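Each generated instruction is scored by the task accuracy it yields on the training exemplars. Below is a hedged sketch of that evaluation in the A_begin setting described in the caption above; `scorer_llm` is a hypothetical stand-in for the scorer model's API, and extracting the final number from the output is a simplified answer-matching heuristic rather than the paper's exact procedure.

```python
# Illustrative sketch of scoring a generated instruction in the A_begin setting,
# where the instruction is prepended to the beginning of the scorer LLM's output.
# `scorer_llm` is a hypothetical stand-in; answer extraction is a simple heuristic.
import re

def scorer_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real scorer-LLM call")

def extract_final_number(text: str) -> str:
    numbers = re.findall(r"-?\d+\.?\d*", text.replace(",", ""))
    return numbers[-1] if numbers else ""

def score_instruction(instruction: str, training_set) -> float:
    # training_set: iterable of (question, ground_truth_answer) pairs.
    correct = 0
    for question, answer in training_set:
        # A_begin: the instruction starts the answer, and the model continues it.
        output = scorer_llm(f"Q: {question}\nA: {instruction}")
        if extract_final_number(output) == str(answer):
            correct += 1
    return 100.0 * correct / len(training_set)
```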
2309.03409#100
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
100
[84] Terry Yue Zhuo, Zhuang Li, Yujin Huang, Fatemeh Shiri, Weiqing Wang, Gholamreza Haffari, and Yuan-Fang Li. On robustness of prompt-based semantic parsing with large pre-trained language model: An empirical study on codex. In Andreas Vlachos and Isabelle Augenstein, editors, Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2023, Dubrovnik, Croatia, May 2-6, 2023, pages 1090–1102. Association for Computational Linguistics, 2023.
2309.03852#100
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
101
# D PROMPT OPTIMIZATION CURVES ON THE REMAINING BBH TASKS [Figure panels showing task accuracy (y-axis) vs. number of optimization steps (x-axis): (a) BBH boolean_expressions, (b) BBH causal_judgement, (c) BBH date_understanding, (d) BBH disambiguation_qa, (e) BBH dyck_languages, (g) BBH geometric_shapes, (h) BBH hyperbaton, (j) BBH movie_recommendation, (k) BBH multistep_arithmetic_two, (l) BBH navigate.]
2309.03409#101
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
102
[Figure panels showing task accuracy (y-axis) vs. number of optimization steps (x-axis): (f) BBH formal_fallacies, (i) BBH logical_deduction_seven_objects, (m) BBH object_counting, (n) BBH penguins_in_a_table, (o) BBH reasoning_about_colored_objects.] Figure 23: Prompt optimization on 21 BBH tasks (except ruin_names and temporal_sequences already shown in Figure 6) with the text-bison scorer and the PaLM 2-L-IT optimizer, Part I. Most curves have upward trends.
2309.03409#102
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
103
(a) BBH salient_translation_error_detection (b) BBH snarks (c) BBH sports_understanding (d) BBH tracking_shuffled_objects_seven_objects (e) BBH web_of_lies (f) BBH word_sorting Figure 24: Prompt optimization on 21 BBH tasks (except ruin_names and temporal_sequences in Figure 6) with the text-bison scorer and the PaLM 2-L-IT optimizer, Part II. All curves have upward trends. E PROMPT OPTIMIZATION ON BBH TASKS – TABULATED ACCURACIES AND FOUND INSTRUCTIONS # E.1 PALM 2-L-IT AS OPTIMIZER, OPTIMIZATION STARTING FROM THE EMPTY STRING Tables 8 and 9 show the instructions found by prompt optimization. A comparison of their accuracies with the baselines “Let’s think step by step.” (Kojima et al., 2022), “Let’s work this out in a step by step way to be sure we have the right answer.” (Zhou et al., 2022b), and the empty string is in Table 7; a visualization is in Section 5.2 Figure 5.
2309.03409#103
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
104
Table 7: Accuracies on BBH tasks: our found instructions with the PaLM 2-L-IT optimizer vs baseline. The optimization starts from the empty string. Because of the 20-80 train-test split, we show accuracies with the format “training / test / overall (training + test)”. The PaLM 2-L scores are from A_begin instructions; the text-bison scores are from Q_begin instructions. Bold numbers indicate the best for the corresponding task. empty string “” Acc
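For reference, the "overall (training + test)" value in this reporting format can be recomputed from the training and test accuracies. The sketch below assumes the combination weights are exactly the 20% / 80% split fractions, which is an assumption of this illustration.

```python
# Illustrative: with a 20-80 train-test split, "overall (training + test)" is the
# accuracy over both subsets combined. Exact 0.2 / 0.8 weights are an assumption.
def overall_accuracy(train_acc: float, test_acc: float, train_frac: float = 0.2) -> float:
    return train_frac * train_acc + (1.0 - train_frac) * test_acc

print(round(overall_accuracy(90.0, 83.5), 1))  # 84.8, matching a "90.0 / 83.5 / 84.8" triple
```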
2309.03409#104
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
105
Task Scorer Our Acc “Let’s think step by step.” Acc “Let’s work this out in a step by step way to be sure we have the right answer.” Acc training / test / overall training / test / overall training / test / overall 90.0 / 83.5 / 84.8 84.8 / 58.0 / 63.1 86.0 / 84.5 / 84.8 80.0 / 69.0 / 71.2 100.0 / 100.0 / 100.0 84.0 / 64.0 / 68.4 76.0 / 57.0 / 60.8 100.0 / 96.0 / 96.8 74.0 / 57.0 / 60.4 92.0 / 90.5 / 90.8 72.0 / 55.5 / 58.8 92.0 / 75.0 / 78.4 84.0 / 86.5 / 86.0 86.2 / 71.8 / 74.7 98.0 / 85.5 / 88.0 88.0 / 88.0 / 88.0 62.0 / 67.0 / 66.0 85.7 / 83.2 / 83.7 98.0 / 88.0 / 90.0 100.0 / 100.0 / 100.0 32.0 / 16.5 / 19.6 62.0 /
2309.03409#105
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
108
Task Our Instruction boolean_expressions A Boolean expression is a well-formed expression consisting of variables, values, and logical operators. The expression must evaluate to a single True or False value. The order of precedence of the logical operators is as follows: NOT, AND, OR, XOR, IMP. Parentheses can be used to group subexpressions and to control the order of evaluation. causal_judgement When considering questions about causation, a typical person would consider the following factors: whether the action or event was a necessary condition for the outcome to occur, a sufficient condition, a proximate cause, or a foreseeable cause. date_understanding To find the date X time ago from today, first find today’s date. Then subtract X time from today’s date. If the current date is the last day of a month, then the date a month ago is the last day of the previous month. If the current date is not the last day of a month, then the date a month ago is the same day of the previous month. For example, if today is March 31, 2023, then the date a month ago is February 28, 2023. If today is April 1, 2023, then the date a month ago is March 1, 2023. disambiguation_qa
2309.03409#108
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
109
the date a month ago is February 28, 2023. If today is April 1, 2023, then the date a month ago is March 1, 2023. disambiguation_qa Identifying Antecedents of Pronouns: A Comprehensive Guide dyck_languages First, look for the opening parentheses. Then, count the number of opening parentheses. Finally, close the parentheses in the reverse order that they were opened. formal_fallacies A deductive argument is one where the conclusion follows necessarily from the premises. If the premises are true, then the conclusion must also be true. An invalid argument is one where it is possible for the premises to be true and the conclusion to be false. geometric_shapes A closed polygonal chain is a series of connected line segments. The line segments can be straight or curved. The first and last line segments are connected. The line segments do not intersect each other except at their endpoints. A closed polygon can be described by an SVG path element, which starts at a given point, goes to one or more additional points, and then ends at the starting point. The path element can consist of straight line segments, curved segments, or a mixture of both. hyperbaton The correct adjective order in English is opinion, size,
2309.03409#109
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
110
path element can consist of straight line segments, curved segments, or a mixture of both. hyperbaton The correct adjective order in English is opinion, size, shape, age, color, origin, material, and purpose. If you have more than one adjective of the same type, they are usually placed in order of importance. For example, you would say "a large, old, Pakistani ship" rather than "an old, large, Pakistani ship." There are a few exceptions to these rules, but they are generally followed in most cases. logical_deduction _seven_objects The following questions will test your ability to use deductive reasoning. You will be given a set of statements about a group of objects. You will then be asked to answer questions about the objects based on the statements. The statements in the questions are logically consistent, so you can use them to deduce the order of the objects. For each question, you must choose the option that is logically consistent with the information in the questions. movie_recommendation Based on your input, I have analyzed the given movies in terms of genre, plot, tone, audience rating, year of release, director, cast, and reviews. I have also taken into account the given options. The movie that is most similar to the given
2309.03409#110
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
111
plot, tone, audience rating, year of release, director, cast, and reviews. I have also taken into account the given options. The movie that is most similar to the given movies in terms of all these factors is: multistep_arithmetic _two The order of operations in mathematics is PEMDAS, which stands for Parentheses, Exponents, Multiplication, Division, Addition, and Subtraction. When there are multiple operations of the same precedence, they must be performed from left to right. Note that multiplication and division have the same precedence, as do addition and subtraction. navigate You will return to the starting point if and only if (1) the total number of steps you take forward is equal to the total number of steps you take back, and (2) the total number of turns you make is a multiple of 180 degrees. object_counting Here is a list of the objects you mentioned and their corresponding counts: penguins_in_a_table Here is my new text: reasoning_about _colored_objects Starting from the leftmost object in the row, I observe the following objects arranged in this order: ruin_names Which is the funniest pun on the artist or movie name? salient_translation _error_detection
Instructions: Read the German sentence and its English translation carefully, then identify the type of error in the translation and select the correct option. There are six possible types of errors: Named Entities, Numerical Values, Modifiers or Adjectives, Negation or Antonyms, Facts, and Dropped Content.

# snarks
Identify the sarcastic statement by considering the following factors: incongruity, exaggeration, understatement, context, speaker’s intent, and audience’s reaction. I will also consider the speaker’s tone of voice, facial expressions, and body language.

# sports_understanding
I will determine if a sentence about an athlete is plausible by first checking if it is grammatically correct. If it is, I will then check if it is consistent with the athlete’s sport, position, and real-world statistics. I will also check if it is consistent with the rules of the athlete’s sport. If the sentence is consistent with all of these things, I will answer "yes", otherwise I will answer "no".

# temporal_sequences
The answer is the time that is not mentioned in the given statements.

# tracking_shuffled_objects_seven_objects
Claire has the blue ball, Gertrude has the black ball, and Dave has the green ball. They are all happy with their new balls.

# web_of_lies
The answer to a question is yes if there are an odd number of liars before the current speaker, and no if there are an even number of liars before the current speaker. If the current speaker is a truth-teller, they will say the opposite of what the previous person said, while a liar will say the same thing as the previous person said.

# word_sorting
Alphabetical order of given words:

Table 9: BBH task-wise instructions found by prompt optimization with the text-bison scorer and the PaLM 2-L-IT optimizer. The optimization starts from the empty string.

Task | Our Instruction
# boolean_expressions
Not (not False) and not not False is False

# causal_judgement
A typical person would likely answer the questions about causation as follows:

# date_understanding
Today is February 28, 2023. It is a Tuesday. Yesterday was Monday, February 27, 2023. Tomorrow will be Wednesday, March 1, 2023. A week ago, it was February 21, 2023, and a month ago, it was January 28, 2023. A year from now, it will be February 28, 2024. The day of the week is important to note because it will help us to correctly answer the questions below. Not all years are leap years that contain February 29.

# disambiguation_qa
A pronoun is a word that stands in for a noun. The noun that a pronoun refers to is called its antecedent. To identify the antecedent of a pronoun, look for the noun that the pronoun could be referring to. If there is only one possible noun, then that is the antecedent. If there are two or more possible nouns, then the antecedent is ambiguous. Use the context of the sentence to help you determine the correct antecedent.

# dyck_languages
{ }

# formal_fallacies
How to Evaluate Deductive Validity of an Argument

# geometric_shapes
What shape is this SVG code drawing, and how many sides does it have?

# hyperbaton
In English, adjectives are typically placed before nouns in a specific order. The order is: opinion, size, shape, age, color, origin, material, purpose, noun. For example, the sentence "the big, old, red barn" would be considered grammatically correct, while the sentence "the old, big, red barn" would not. Adjectives that come before nouns are called attributive adjectives, while adjectives that come after nouns are called predicative adjectives.

# logical_deduction_seven_objects
In this logical reasoning task, you will be given a series of paragraphs, each of which describes a set of objects arranged in a fixed order. The statements in each paragraph are logically consistent. You must read each paragraph carefully and use the information given to determine the logical relationships between the objects. You will then be asked a question about the order of the objects.
Read each question carefully and choose the option that answers the question correctly.

# movie_recommendation
What is the highest-rated movie similar to the given movies, with a similar IMDb rating and released in the same year?

# multistep_arithmetic_two
Let’s solve these equations using PEMDAS order of operations. Remember that PEMDAS stands for parentheses, exponents, multiplication and division, and addition and subtraction.

# navigate
Starting at the origin, facing north, follow the instructions. If your displacement from the origin is zero and your direction is unchanged, then your answer is Yes. Otherwise, your answer is No.

# object_counting
Let me help you count the items you have. Just list them one by one, separated by commas. I will then count each item and tell you how many items there are in total.

# penguins_in_a_table
This table shows information about penguins. The columns show the penguin’s name, age, height (in cm), and weight (in kg). The penguins are listed in order of their age, from youngest to oldest.

# reasoning_about_colored_objects
First, read the input carefully. Then, identify all the objects mentioned, their colors, and their positions. Next, visualize the objects and their positions in your mind. Finally, answer the questions accurately based on the information given. Make sure to pay attention to the order of the objects.

# ruin_names
A humorous edit of an artist or movie name can be created by replacing one or more letters to form a new word or phrase that sounds similar but has a different meaning. The new word or phrase should be relevant to the original word, but it should also be a surprise, which makes the edit funny. For example, the artist or movie name "Rocky" can be changed to "Ricky," and "Schindler’s List" can be changed to "Schindler’s Lift." Be creative and have fun!

# salient_translation_error_detection
The following translations from German to English contain a particular error. The error may be one of the following types: Named Entities, Numerical Values, Modifiers or Adjectives, Negation or Antonyms, Facts, or Dropped Content.
Please identify the error.

# snarks
The statement

# sports_understanding
To determine the plausibility of a sports sentence, I will first identify the sport, athletes, teams, and events mentioned in the sentence. Then, I will use my knowledge of the rules of the sport, the context of the sentence, common sense, and my knowledge of the world to determine whether the sentence is plausible. I will also consider the time period and location, as well as any other relevant information. Finally, I will return a score of 1 for plausible sentences and 0 for implausible ones.

# temporal_sequences
To determine the time period when a person went to a place, first identify all the time periods when the person’s whereabouts are unknown. Then, rule out any time periods during which the person was seen doing something else or the place was closed. The remaining time periods are the possible times when the person could have gone to the place.
# tracking_shuffled_objects_seven_objects
At the start of the game, Claire has a blue ball. Throughout the game, pairs of people swap balls. Claire ends up with the yellow ball.

# web_of_lies
People in a group either tell the truth or lie. The truthfulness of a person’s statement is determined by the statement of the previous person. If the previous person told the truth, then the current person who says the opposite is lying. If the previous person lied, then the current person who says the opposite is telling the truth. This rule applies to all subsequent statements.

# word_sorting
Sort the following words alphabetically, ignoring case and punctuation. Print the sorted list.

E.2 GPT-3.5-TURBO AS OPTIMIZER, OPTIMIZATION STARTING FROM THE EMPTY STRING

Table 11, 12 and 13 show the instructions found by prompt optimization. Their accuracies are listed in Table 10. Figure 25 visualizes the difference between their accuracies and those of the baselines “Let’s think step by step.” and the empty string. The optimizations find instructions better than the empty starting point, and most of the found instructions are better than “Let’s think step by step”.
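To make the comparison in Figure 25 concrete, the sketch below shows one way such an accuracy gap could be computed. This is a minimal illustration, not the paper's evaluation code: `call_scorer` is a hypothetical stand-in for the scorer LLM API, the prompt template is assumed, and exact string matching is a simplification of answer checking.

```python
# Minimal sketch of comparing an optimized instruction against a baseline instruction
# on a set of (question, answer) exemplars. All names here are illustrative assumptions.

def task_accuracy(instruction, examples, call_scorer):
    """Fraction of exemplars the scorer answers correctly when given this instruction."""
    correct = 0
    for question, answer in examples:
        prediction = call_scorer(f"{instruction}\nQ: {question}\nA:")
        correct += int(prediction.strip() == answer.strip())
    return correct / len(examples)

def accuracy_gap(found_instruction, baseline_instruction, examples, call_scorer):
    """Positive values mean the optimized instruction beats the baseline."""
    return (task_accuracy(found_instruction, examples, call_scorer)
            - task_accuracy(baseline_instruction, examples, call_scorer))
```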
# £s One caveat in the A_begin instructions (Table 11) is that a lot of the found instructions are imperative or interrogative sentences that are more suitable to be put into “Q:” rather than “A:”, like “Solve the sequence by properly closing the parentheses.” for dyck_languages and “Which movie option from the given choices ...?” for movie_recommendation. Such styles appear more often here than the PaLM 2-L-IT optimizer results (Table 8), showing PaLM 2-L-IT understands the needed style better. In Section E.3, we show the A_begin optimization results with the non-empty starting point “Let’s solve the problem.”. Most results there are declarative sentences – more suitable for A_begin. (a) PaLM 2-L, ours minus “Let’s think step by step.” (b) PaLM 2-L, ours minus empty starting point (c) text-bison, ours minus “Let’s think step by step.” (d) text-bison, ours minus empty starting point
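The caveat above comes down to where the instruction is inserted in the scorer prompt. The sketch below illustrates the three positions named in this section (Q_begin, Q_end, A_begin), assuming a simple "Q: ... A: ..." template; the exact templates used in the paper may differ, and the example question is illustrative.

```python
# Minimal sketch of the three instruction positions discussed here (assumed template).

def build_scorer_prompt(instruction: str, question: str, position: str) -> str:
    if position == "Q_begin":   # instruction placed before the question
        return f"{instruction}\nQ: {question}\nA:"
    if position == "Q_end":     # instruction placed after the question
        return f"Q: {question}\n{instruction}\nA:"
    if position == "A_begin":   # instruction starts the answer (for pretrained scorers)
        return f"Q: {question}\nA: {instruction}"
    raise ValueError(f"unknown position: {position}")

# An imperative instruction reads naturally at Q_begin but awkwardly as the start of
# an answer, which is the style mismatch noted above for the A_begin results.
print(build_scorer_prompt(
    "Solve the sequence by properly closing the parentheses.",
    "Complete the rest of the sequence, making sure the parentheses are closed: ( [ [ ]",
    "A_begin"))
```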
Figure 25: On 23 BBH tasks, the accuracy differences among instructions found by prompt optimization (with the gpt-3.5-turbo optimizer), “Let’s think step by step.”, and the empty string (optimization starting point). Panels: (a) PaLM 2-L, ours minus “Let’s think step by step.”; (b) PaLM 2-L, ours minus empty starting point; (c) text-bison, ours minus “Let’s think step by step.”; (d) text-bison, ours minus empty starting point.

Table 10: Accuracies on BBH tasks with the gpt-3.5-turbo optimizer that starts from the empty string. The PaLM 2-L scores are from A_begin (left) instructions; the text-bison scores include Q_begin (left) and Q_end (right) instructions.

Task | Scorer | training / test / overall | training / test / overall

Table 11: BBH task-wise instructions found by prompt optimization with the PaLM 2-L scorer and the gpt-3.5-turbo optimizer. The optimizations start from the empty string.
Task | Our Instruction

# boolean_expressions
An accurate evaluation of logical expressions involves correctly applying Boolean operators, considering the order of operations, and analyzing the truth values of the operands in accordance with Boolean logic principles.

# causal_judgement
Understanding causality is critical for accurately assessing cause and effect relationships in various scenarios, leading to well-informed judgments, precise conclusions, and definitive answers to questions about the outcomes involved.

# date_understanding
What is the specific date mentioned or required in each given problem or question, taking into account all relevant information, available options, and the provided context? Please provide the accurate answer in the format MM/DD/YYYY.

# disambiguation_qa
Accurately analyze and clarify the pronoun-antecedent relationship in the given sentences, identifying the appropriate referent to eliminate any potential confusion or ambiguity and ensure a precise understanding of the intended meaning.

# dyck_languages
Solve the sequence by properly closing the parentheses.

# formal_fallacies
In determining the deductive validity of arguments based on explicit premises, a meticulous analysis of the logical relationships and implications is essential for definitively establishing their soundness, confirming their validity or invalidity, and ensuring a reliable and robust assessment of the arguments at hand.
# geometric_shapes
The SVG path element with the "d" attribute plays a crucial role in web development, allowing for the precise definition and rendering of various shapes on a webpage.

# hyperbaton
Understanding the correct order of adjectives is crucial for constructing grammatically accurate and coherent sentences that effectively convey the intended meaning in diverse contexts while ensuring clarity, cohesion, and consistency throughout consistently and effortlessly.

# logical_deduction_seven_objects
By conducting a meticulous analysis of the given information and ensuring logical consistency within each paragraph, we can accurately determine the precise order or ranking of the mentioned objects, allowing us to confidently and consistently identify the correct answer in every presented scenario with utmost precision and confidence.
# movie_recommendation
Which movie option from the given choices closely matches the mentioned films in terms of themes, storylines, and characteristics, guaranteeing the highest possible similarity score among them all?

# multistep_arithmetic_two
Evaluate the given mathematical expressions step by step to determine the correct solutions accurately.

# navigate
Is it possible to determine, with absolute certainty, whether strictly adhering to the given instructions will unfailingly bring you back to the original starting point without any exceptions, errors, or deviations?

# object_counting
Determine the total number of objects or entities mentioned in the given list, covering various categories and types, to accurately calculate the overall count.

# penguins_in_a_table
From the given table, what information can we gather about the mentioned animals and their respective attributes, including names, ages, heights, and weights?

# reasoning_about_colored_objects
By thoroughly examining the given information, accurately determine the answers for each question by considering the specific characteristics, colors, and positions of the mentioned objects.
# ruin_names
Select the most amusing and clever alteration from the options provided for the given artist, movie, or title name, and accurately choose the correct answer to test your wit and creativity.

# salient_translation_error_detection
Thoroughly examine the given translations from German to English and accurately identify any errors by carefully analyzing the text and selecting the appropriate option with meticulous attention to detail, precision, utmost accuracy, and comprehensive understanding of the language for precise evaluation and categorization.

# snarks
Which option delivers the most devastatingly sarcastic response, brilliantly exposing the sheer absurdity and leaving absolutely no doubt whatsoever in all the given situations?

# sports_understanding
Maintaining the accuracy, reliability, and integrity of sports event representation is essential for upholding the highest standards of credibility, trustworthiness, and overall quality in conveying information, without any compromise, misrepresentation, or distortion, thereby ensuring the factual accuracy of sports journalism.

# temporal_sequences
Based on the provided timeline and observed activities, we can accurately determine the possible time range when each individual could have visited their intended destinations and answer questions about their visitation time.

# tracking_shuffled_objects_seven_objects
An important point to note is that each person in the group starts with one specific book at the beginning of the semester.

# web_of_lies
Analyzing the consistency and accuracy of statements provided by each person is crucial for determining the truthfulness of individuals in every scenario.

# word_sorting
Please sort the given words in alphabetical order: The list of words to be sorted contains

Table 12: BBH task-wise Q_begin instructions found by prompt optimization with the text-bison scorer and the gpt-3.5-turbo optimizer. The optimizations start from the empty string.

Task | Our Instruction
boolean_expressions: Group sub-expressions with parentheses to accurately evaluate logical operations: not, and, and finally or. Determine the resulting value as either True or False.
causal_judgement: Consider the intentions and actions of the individuals involved.
date_understanding: Determine the one-day difference in the given date and express it in the format MM/DD/YYYY.
disambiguation_qa: Determine the precise antecedent of the pronoun in the given sentence and select the correct option or state if it is ambiguous.
dyck_languages: Ensure that all opening brackets have a corresponding closing bracket, and that the closing brackets are in the correct order.
formal_fallacies: Thoroughly analyze the explicitly provided premises and determine the deductive validity of the argument based on all necessary conditions, implications, exclusions, and dependencies given.
geometric_shapes: Analyze the given SVG path element carefully and confidently select the correct option from the provided choices to accurately determine the corresponding shape. Pay close attention to the specific path details and confidently make the most suitable choice.
2309.03409#128
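The boolean_expressions instruction above encodes the usual precedence: not binds tightest, then and, then or. A small Python check of that reading, using an invented expression rather than a benchmark one:

```python
# Illustrative check of the precedence described in the boolean_expressions
# instruction: `not` binds tighter than `and`, which binds tighter than `or`.
# The expression below is invented for illustration.
expr = "not True and False or True"
explicit = "((not True) and False) or True"  # fully parenthesized reading

# Python's own grammar uses the same precedence, so both strings agree.
assert eval(expr) == eval(explicit)
print(eval(expr))  # True
```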
hyperbaton: Select the sentence that strictly adheres to the standard order of adjectives: opinion, size, age, shape, color, origin, material, and purpose. Ensure there are no deviations or alterations in the adjective order. Choose the option without any changes.
logical_deduction_seven_objects: Analyze the given information to accurately determine the precise order and ranking of the mentioned objects/people, considering their relationships, positions, and any provided comparisons, for a definitive and logical progression with maximum accuracy and efficiency.
movie_recommendation: Based on the movie list provided, carefully consider your preferences and make a well-informed decision.
multistep_arithmetic_two: First, simplify any expressions within parentheses following the correct order of operations to accurately evaluate the final answer with efficiency and precision.
navigate: Always face forward. Take 10 steps forward. Turn left. Take 5 steps forward. Take 3 steps backward. Finally, take 7 steps forward. Turn around and take 1 step forward. Repeat the previous sequence three times. Follow the given path precisely without any deviations. At the end, turn right and take 11 steps forward. If you follow these instructions, will you return to the starting point? Options: - Yes - No
2309.03409#129
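The navigate instruction above embeds a step-following question. One way to sanity-check answers to such questions is to simulate the walk; the sketch below is a generic position-and-heading tracker under an assumed step encoding, not code from the paper:

```python
# Illustrative walker for navigate-style questions: track heading and
# position, then test whether the walk ends at the origin. The step
# encoding ("forward", "backward", "turn") is an assumption made here.
import math

def returns_to_start(steps):
    x = y = 0.0
    heading = 90.0  # degrees; start facing "forward"
    for kind, value in steps:
        if kind == "turn":        # value: degrees, counter-clockwise
            heading += value
        elif kind == "forward":
            x += value * math.cos(math.radians(heading))
            y += value * math.sin(math.radians(heading))
        elif kind == "backward":
            x -= value * math.cos(math.radians(heading))
            y -= value * math.sin(math.radians(heading))
    return math.isclose(x, 0.0, abs_tol=1e-9) and math.isclose(y, 0.0, abs_tol=1e-9)

# An invented walk that does return to its starting point.
print(returns_to_start([("forward", 5), ("turn", 180), ("forward", 5)]))  # True
```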
object_counting: Determine the total count of mentioned vegetables accurately and state the final count as the answer.
penguins_in_a_table: Analyze the given table to accurately determine the required information based on the provided criteria and attributes of the penguins and giraffes. Utilize efficient problem-solving strategies to arrive at the correct answer.
reasoning_about_colored_objects: State the color of the object mentioned in the given arrangement with utmost accuracy.
ruin_names: Choose the option that offers the most clever and humorous alteration of the given artist or movie name. Let your creativity shine and select the answer that will undoubtedly bring a smile to your face! Make sure to think outside the box!
salient_translation_error_detection: Analyze the translation and accurately identify the specific error type based on the source text, providing the most appropriate corresponding option.
snarks: Choose the option that wickedly embodies sarcasm.
2309.03409#130
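The object_counting instruction above reduces to tallying items of one category. A minimal sketch of that tally; both the item list and the vegetable set are invented for this example:

```python
# Illustrative tally for the object_counting task: count only the vegetables.
# The item counts and the vegetable set below are invented, not benchmark data.
items = {"carrot": 2, "cauliflower": 1, "banana": 3, "onion": 4}
vegetables = {"carrot", "cauliflower", "onion", "cabbage", "potato"}

total = sum(n for name, n in items.items() if name in vegetables)
print(total)  # 7
```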
sports_understanding: Determine the plausibility of the given statement by evaluating factual accuracy, logical consistency, and contextual relevance, then provide a succinct and well-justified response.
temporal_sequences: Identify the optimal time slot for the individual to engage in the mentioned location/activity considering the given sightings and waking up time, taking into account the opening and closing times of the location and the duration of each event.
tracking_shuffled_objects_seven_objects: Pay attention to the given information and track the swaps/exchanges carefully to accurately determine the final possession/position/outcome for the specified individual.
web_of_lies: To determine the truthfulness of the last person mentioned, analyze the consistency of each statement and count the number of individuals accusing the previous person of lying. If the count of accusers is even, that person tells the truth; if it is odd, that person lies.
word_sorting: Alphabetically sort the given list of words, ensuring all words are included and in ascending order.
2309.03409#131
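The web_of_lies instruction above states a parity rule: each accusation of lying flips the running truth value along the chain, so an even number of accusations preserves it. A small sketch of that rule, phrased as repeated negation rather than an explicit count; the chain below is invented, not a benchmark instance:

```python
# Illustrative implementation of the flip/parity rule quoted in the
# web_of_lies instruction. accusations[i] is True when speaker i+1 says
# the previous person lies; a claim of truth-telling leaves the value as is.
def last_person_tells_truth(first_tells_truth, accusations):
    truthful = first_tells_truth
    for says_previous_lies in accusations:
        # Each accusation of lying negates the running truth value,
        # so an even number of accusations preserves it (the parity rule).
        truthful = (not truthful) if says_previous_lies else truthful
    return truthful

# Invented chain: person 1 tells the truth; persons 2 and 3 accuse their
# predecessor of lying; person 4 says person 3 tells the truth.
print(last_person_tells_truth(True, [True, True, False]))  # True
```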
Task | Our Instruction
boolean_expressions: Accurately use order of operations and parentheses to evaluate logical expressions and determine truth values efficiently.
causal_judgement: Consider all relevant factors, prioritize overall well-being and ethical considerations, make well-informed decisions while foreseeing potential consequences efficiently, and consistently strive for optimal outcomes with empathy and adaptability in a thoughtful and comprehensive manner.
date_understanding: Subtract the specified number of days from the given date and format the outcome as MM/DD/YYYY to accurately determine the desired result in an efficient manner.
disambiguation_qa: Clearly identify and select the unambiguous antecedent for the pronoun or designate it as "Ambiguous" if it is unclear.
dyck_languages: Add the missing closing parentheses.
formal_fallacies: Determine the deductive validity of the argument presented based on the explicitly stated premises and reach a definitive conclusion.
geometric_shapes: Analyzing the given SVG path element, accurately determine its shape by closely examining its curves and coordinates, then select the correct option.
hyperbaton: Choose the option with the correct adjective order in each sentence,
2309.03409#133
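The date_understanding entry above describes subtracting a number of days from a given date and printing the result as MM/DD/YYYY. A minimal sketch of that arithmetic, using an invented date and offset:

```python
# Illustrative date arithmetic for the date_understanding task.
# The starting date and the one-day offset are invented for this example.
from datetime import date, timedelta

given = date(2021, 3, 1)
result = given - timedelta(days=1)
print(result.strftime("%m/%d/%Y"))  # 02/28/2021
```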