Dataset columns (value types and observed ranges):

- doi: string (length 10)
- chunk-id: int64 (0 to 936)
- chunk: string (401 to 2.02k characters)
- id: string (12 to 14 characters)
- title: string (8 to 162 characters)
- summary: string (228 to 1.92k characters)
- source: string (length 31)
- authors: string (7 to 6.97k characters)
- categories: string (5 to 107 characters)
- comment: string (4 to 398 characters, nullable)
- journal_ref: string (8 to 194 characters, nullable)
- primary_category: string (5 to 17 characters)
- published: string (length 8)
- updated: string (length 8)
- references: list
2309.03852 | 13 | The original FreeLM incorporates two training objectives: a language modeling objective guided by language signals and a binary classification objective guided by teacher signals. In FLM-101B, we unify the two objectives by using a masking strategy and two specialized tokens. These tokens facilitate the transformation of the binary classification objective into the unified language modeling format. The unified training objective leads to training stability when the model becomes much larger in scale. Hence, for eFLM-16B, we transform this binary classification into the format of causal language modeling. Specifically, we employ two emojis (one of them U+1F608)3 from the vocabulary to replace the original binary labels of 1 and 0. We apply zero-masking to the loss for tokens in the propositions and predict one of these two special tokens at the end of each proposition. By this method, we unify the teacher objective and language modeling. Moreover, we discard the original Iterative Training approach [25] and completely mix the samples from both signals in every batch. This strategy enhances the consistency of the data sampling distribution as well as training stability.
# 2.3 Growth Strategy | 2309.03852#13 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 |
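The eFLM-16B passage above turns the teacher's binary labels into ordinary next-token prediction by appending one of two special tokens to each proposition and zero-masking the loss on the proposition tokens. Below is a minimal sketch of that loss, assuming hypothetical special-token ids and generic causal-LM outputs (illustrative only, not the eFLM-16B implementation):

```python
import torch
import torch.nn.functional as F

# Hypothetical ids for the two special "label" emojis replacing the old 0/1 labels.
GOOD_TOKEN, BAD_TOKEN = 100254, 100255  # placeholder vocabulary positions

def teacher_signal_loss(logits: torch.Tensor, input_ids: torch.Tensor, proposition_len: int):
    """Causal-LM loss for one sample: proposition tokens are zero-masked and only
    the special label token appended at the end contributes to the loss.

    logits: (seq_len, vocab) language-model outputs
    input_ids: (seq_len,) proposition tokens followed by GOOD_TOKEN or BAD_TOKEN
    proposition_len: number of proposition tokens before the label token
    """
    shift_logits = logits[:-1]            # position t predicts token t+1
    shift_labels = input_ids[1:].clone()
    # Zero-mask the proposition: every target except the final label token is ignored.
    shift_labels[: proposition_len - 1] = -100
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)
```

Samples carrying this teacher signal can then be mixed freely with ordinary language-modeling samples in every batch, which is what the fully mixed sampling described above amounts to.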
2309.03409 | 14 |
Table 2: Linear regression by optimizer LLMs: the mean ± standard deviation of the number of steps and the number of unique (w, b) pairs explored before reaching the global optima. Both w and b start from 5 random starting points in [10, 20]. We use temperature 1.0 for all models. We run each setting 5 times. The starting points are the same across optimizer LLMs but are different across 5 runs, and are grouped by: within the starting region, outside and close to the starting region, and outside and farther from the starting region. Bold numbers indicate the best among three LLMs in each setting. | 2309.03409#14 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 |
2309.03852 | 14 | # 2.3 Growth Strategy
The low cost of scaling FLM-101B up stems from the growth strategy used in model training. Specifically, we train three models, with 16B, 51B, and 101B parameters respectively, in a sequential manner, where each model inherits knowledge from its predecessor. This is contrary to the common practice in which models of different sizes are trained independently [58; 59].
Function-preserving Growth. Function preservation means that before and after growth, the models yield consistent outputs given the same arbitrary inputs. This property has proven beneficial for both knowledge inheritance [8; 6; 51] and training stability [78]. The growth operators used in FLM-101B training originate from [78], with improvement. Specifically, to adapt these operators to the multi-node 3D parallel framework, we implement them by extending the model structures offline and reloading the checkpoint when the next stage starts. | 2309.03852#14 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
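Function preservation can be pictured with a toy width-growth operator. The sketch below (simple zero-padding of one linear layer, not the MSG operators of [78]) widens the layer and checks that the original outputs are unchanged:

```python
import torch
import torch.nn as nn

def widen_linear(layer: nn.Linear, new_in: int, new_out: int) -> nn.Linear:
    """Return a wider Linear layer that computes the same function on the original
    dimensions; the newly added inputs/outputs are zero-initialized."""
    wider = nn.Linear(new_in, new_out)
    with torch.no_grad():
        wider.weight.zero_()
        wider.bias.zero_()
        wider.weight[: layer.out_features, : layer.in_features] = layer.weight
        wider.bias[: layer.out_features] = layer.bias
    return wider

small = nn.Linear(4, 4)
big = widen_linear(small, 8, 8)

x = torch.randn(2, 4)
x_padded = torch.cat([x, torch.zeros(2, 4)], dim=-1)  # new input dims start at zero

# The first 4 output features of the grown layer match the original layer exactly.
assert torch.allclose(small(x), big(x_padded)[:, :4], atol=1e-6)
```

The actual growth operators additionally cover depth, attention heads, and normalization layers; this sketch only shows the width case for a single layer.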
2309.03409 | 15 |
| wtrue | btrue | # steps: text-bison | # steps: gpt-3.5-turbo | # steps: gpt-4 | # unique (w, b) pairs: text-bison | # unique (w, b) pairs: gpt-3.5-turbo | # unique (w, b) pairs: gpt-4 |
|---|---|---|---|---|---|---|---|
| 15 | 14 | 5.8 ± 2.6 | 7.6 ± 4.5 | 4.0 ± 1.5 | 40.0 ± 12.4 | 36.0 ± 15.2 | … |
| 17 | 17 | 4.0 ± 1.8 | 12.6 ± 6.0 | 6.0 ± 3.7 | 33.4 ± 11.7 | 53.8 ± 16.9 | … |
| 16 | 10 | 3.8 ± 2.2 | 10.4 ± 5.4 | 6.2 ± 3.1 | 30.2 ± 13.4 | 42.8 ± 16.3 | … |
| 3 | 5 | 9.8 ± 2.8 | 10.8 ± 2.7 | 12.2 ± 2.0 | 55.8 ± 16.1 | 39.6 ± 10.1 | … |
| 25 | 23 | 19.6 ± 11.4 | 26.4 ± 18.3 | 12.2 ± 3.7 | 104.0 ± 52.3 | 78.6 ± 26.2 | … |
| 2 | 30 | 31.4 ± 6.3 | 42.8 ± 9.7 | 38.0 ± 15.9 | 126.4 ± 17.7 | … | … |
| 36 | -1 | 35.8 ± 6.4 | 45.4 ± 16.9 | 50.4 ± 18.8 | 174.0 ± 28.2 | … | … |

| 2309.03409#15 | Large Language Models as Optimizers |
2309.03852 | 15 | Schedules and Cost-Effectiveness. Model growth scheduling is a trade-off between the pros and cons inherent to models of different sizes [78]: a smaller model is faster in computing each training step, enabling more rapid consumption of training data for broader commonsense knowledge; conversely, a larger model is better at reducing the loss per step, indicating a deeper understanding of nuanced linguistic patterns. We train the 16B model with 245.37B tokens, the 51B model with 39.64B tokens, and the 101B model with 26.54B tokens. The billions of tokens processed per day for each model size are listed in Table 1. Under this growth schedule, the total time cost for our 101B model is 21.54 days, a 72% time saving (or a 3.56x speedup) compared to training a 101B model from scratch (76.74 days). This is consistent with our motivations depicted in Figure 1.
# 2.4 The Parallelism Setup and Model Configurations
FLM-101B is trained on a cluster of 24 DGX-A800 GPU (8×80G) servers. Following the growth strategy, we sequentially complete the model training for sizes 16B, 51B, and 101B on this cluster. | 2309.03852#15 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
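The quoted saving follows directly from the two wall-clock figures above; the snippet below only re-derives numbers already stated in the text:

```python
# Wall-clock figures quoted above for consuming the same token budget.
days_with_growth = 21.54     # 16B -> 51B -> 101B growth schedule
days_from_scratch = 76.74    # training a 101B model from scratch

speedup = days_from_scratch / days_with_growth           # ~3.56x
time_saving = 1 - days_with_growth / days_from_scratch   # ~0.72, i.e. ~72%
print(f"speedup: {speedup:.2f}x, time saving: {time_saving:.0%}")
```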
2309.03852 | 16 | The Parallel Strategies. Data parallelism [60] and tensor model parallelism [52] have become the standard approaches for training models at the billion scale. Nevertheless, an excessive amount of tensor parallelism may escalate GPU communication overheads, hampering training efficiency. To tackle this problem, we integrate pipeline model parallelism [35] and employ a 3D parallel strategy for optimal throughput. Moreover, by employing sequence parallelism [24], we slice the inputs to the
3 https://apps.timwhitlock.info/emoji/tables/unicode
Table 2: Parallel strategies and throughput for different growth stages. For NVIDIA A800 GPUs, the peak theoretical FLOPs per second is 312 teraFLOPs/sec. Gradient accumulation is applied for the large global batch size.
| Params (billion) | Tensor Parallel Size | Pipeline Parallel Size | Data Parallel Size | Number of GPUs | Batch Size | teraFLOP/s per GPU | FLOPs Utilization |
|---|---|---|---|---|---|---|---|
| 16 | 2 | 1 | 96 | 192 | 2304 | 162 | 51.90% |
| 51 | 4 | 2 | 24 | 192 | 2304 | 160 | 51.30% |
| 101 | 4 | 4 | 12 | 192 | 2160 | 165 | 52.88% |

| 2309.03852#16 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
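One way to read Table 2 is that the product of the three parallel sizes equals the GPU count, and the utilization figures are the achieved throughput divided by the 312 teraFLOP/s peak stated in the caption. The short check below only re-derives (up to rounding) numbers already in the table:

```python
A800_PEAK_TFLOPS = 312  # peak theoretical teraFLOP/s per GPU, as stated above

stages = [
    # (params_B, tensor_parallel, pipeline_parallel, data_parallel, achieved_tflops_per_gpu)
    (16, 2, 1, 96, 162),
    (51, 4, 2, 24, 160),
    (101, 4, 4, 12, 165),
]

for params, tp, pp, dp, tflops in stages:
    gpus = tp * pp * dp                       # the 3D parallel layout spans all GPUs
    utilization = tflops / A800_PEAK_TFLOPS   # FLOPs utilization
    print(f"{params}B stage: {gpus} GPUs, utilization {utilization:.2%}")
```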
2309.03409 | 17 | optimization starts from 5 randomly sampled (w, b) pairs. In each step, we prompt an instruction-tuned LLM with a meta-prompt that includes the best 20 (w, b) pairs in history and their sorted objective values. The meta-prompt then asks for a new (w, b) pair that further decreases the objective value. A sample meta-prompt is shown in Figure 19 of Appendix C.1. We prompt the LLM with the meta-prompt 8 times to generate at most 8 new (w, b) pairs in each step, which improves optimization stability. Then we evaluate the objective value of each proposed pair and add it to the history. We do black-box optimization: the analytic form does not appear in the meta-prompt text, because the LLM can often calculate the solution directly from the analytic form.
Table 2 summarizes the results with one of the following optimizer LLMs: text-bison, gpt-3.5-turbo, and gpt-4. We study three settings of wtrue and btrue: within the starting region [10, 20] × [10, 20], "near outside" (each of wtrue and btrue is outside the starting region but the distance is less than 10), and "far outside" (each of wtrue and btrue is outside the starting region and the distance is greater than 10). We see: | 2309.03409#17 | Large Language Models as Optimizers |
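Putting the procedure above into code, a minimal OPRO-style loop for this task could look like the sketch below. The data, the prompt wording, and the `sample_llm` stand-in (which simply perturbs the current best pair instead of calling text-bison, gpt-3.5-turbo, or gpt-4) are illustrative assumptions, not the paper's implementation:

```python
import random

# Hypothetical 1-D linear regression data: y = w_true * x + b_true + noise.
w_true, b_true = 15, 14
xs = [random.uniform(-5, 5) for _ in range(50)]
ys = [w_true * x + b_true + random.gauss(0, 1) for x in xs]

def objective(w, b):
    """Sum of squared errors: the score the optimizer LLM tries to decrease."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys))

def build_meta_prompt(history):
    """Show the best 20 pairs with their objective values (best last) and ask for a
    better pair; the analytic form of the objective is never revealed (black box)."""
    best = sorted(history, key=lambda p: objective(*p))[:20]
    lines = [f"w={w}, b={b}, value={objective(w, b):.1f}"
             for w, b in sorted(best, key=lambda p: -objective(*p))]
    lines.append("Propose a new (w, b) pair, different from all pairs above, with a lower value.")
    return "\n".join(lines)

def sample_llm(meta_prompt, history):
    """Stand-in for the actual LLM call; here it perturbs the best pair so the sketch runs."""
    w, b = min(history, key=lambda p: objective(*p))
    return w + random.randint(-2, 2), b + random.randint(-2, 2)

history = [(random.randint(10, 20), random.randint(10, 20)) for _ in range(5)]
for step in range(20):
    prompt = build_meta_prompt(history)
    # Sampling up to 8 new pairs per step improves optimization stability.
    history.extend(sample_llm(prompt, history) for _ in range(8))

print("best found:", min(history, key=lambda p: objective(*p)))
```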
Transformer core's LayerNorm and Dropout layers along the sequence length dimension, leading to additional savings in GPU computational resources and memory utilization. We also utilize the Megatron-LM4 implementation of the distributed optimizer [46] to further reduce GPU memory consumption; this technique evenly distributes the optimizer states across data-parallel ranks.
Table 2 shows the parallelism configurations and training throughput in each stage of FLM-101B training under our growth strategy. In different stages, we configure different Tensor Parallel × Pipeline Parallel sizes to achieve higher throughput. The single-GPU throughput for all three training stages consistently exceeds 160 teraFLOPs/sec with a utilization rate of at least 51.3%. For comparison, GLM-130B achieves 135 teraFLOPs/sec [80] with a 42.27% utilization rate. We can also find that FLM-101B has a higher FLOP utilization rate than Megatron-LM [24] under a similar model size. | 2309.03852#17 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
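Conceptually, the distributed optimizer partitions optimizer state across the data-parallel group instead of replicating it on every rank. The toy sketch below shows only this sharding idea, not Megatron-LM's actual implementation:

```python
# Conceptual sketch: split a flattened parameter buffer so each data-parallel rank
# owns the optimizer state (momentum, variance, master weights) of one shard only.

def shard_for_rank(flat_params: list, dp_rank: int, dp_world_size: int) -> list:
    shard_size = (len(flat_params) + dp_world_size - 1) // dp_world_size
    start = dp_rank * shard_size
    return flat_params[start : start + shard_size]

params = list(range(10))   # stand-in for a flattened parameter buffer
for rank in range(4):      # e.g. a data-parallel group of 4 ranks
    print(rank, shard_for_rank(params, rank, 4))
# Each rank updates only its shard, then the updated parameters are all-gathered.
```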
2309.03409 | 18 | • The number of unique (w, b) pairs explored by each model is smaller than that of an exhaustive search, indicating that these models are able to do black-box optimization: compare the numbers and propose a descent direction.
• The text-bison and gpt-4 models outperform gpt-3.5-turbo in convergence speed: they arrive at the optima in fewer steps. The gpt-4 model also finds the optima with fewer explored unique points. Taking a closer look at the optimization trajectory, we see that gpt-4 is the best at proposing a reasonable next step from the history: for example, when the history shows that the objective values of (w, b) = (8, 7), (w, b) = (8, 6), and (w, b) = (8, 5) are decreasing, it has the highest chance of proposing (w, b) = (8, 4) for evaluation.
• The problem becomes harder for all models when the ground truth moves farther from the starting region: all models need more exploration and more steps.
3.2 TRAVELING SALESMAN PROBLEM (TSP) | 2309.03409#18 | Large Language Models as Optimizers |
2309.03852 | 18 | FLM-101B Configurations. The FLM-101B model is structured with a hidden state dimension of 10,240, a layer number of 80, a context window of 2,048 tokens, 80 attention heads, and a vocabulary size of 100,256. FLM-101B uses the AdamW optimizer [31] with β1 = 0.9 and β2 = 0.95. A cosine learning rate schedule is employed, leading to a final learning rate of 6e-6. We use a weight decay of 0.1 and gradient clipping of 1.0.
Table 1 presents part of the hyperparameters used in different growth stages. In each growth stage, we approximately inherit the previous learning rate and adhere to the same schedule. The learning rate at the beginning of each stage is reported in the table. In the 16B stage, 4,608k samples are used for learning rate warmup, while in later growth stages we use fewer samples (230.4k). Note that we do not apply batch size warmup because we address the stability issue in a different manner, detailed in Section 3.
The training duration and token consumption for each stage are also outlined in Table 1. In total, FLM-101B training is accomplished within 22 days using 311.54B tokens.
# 3 Training Stability of FLM-101B | 2309.03852#18 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
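For reference, these hyperparameters can be collected in one place, and the headline parameter count can be sanity-checked with the common 12 * layers * hidden^2 transformer approximation (a rough estimate that ignores embeddings and biases, not the authors' exact accounting):

```python
flm_101b_config = {
    "hidden_size": 10_240,
    "num_layers": 80,
    "context_window": 2_048,
    "num_attention_heads": 80,
    "vocab_size": 100_256,
    "optimizer": "AdamW",
    "betas": (0.9, 0.95),
    "lr_schedule": "cosine",
    "final_lr": 6e-6,
    "weight_decay": 0.1,
    "gradient_clipping": 1.0,
}

# Rough parameter count for a standard transformer: ~12 * L * d^2 (attention + MLP).
d, L = flm_101b_config["hidden_size"], flm_101b_config["num_layers"]
print(f"~{12 * L * d * d / 1e9:.1f}B parameters")  # ~100.7B, consistent with "101B"
```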
2309.03409 | 19 | 3.2 TRAVELING SALESMAN PROBLEM (TSP)
Next, we consider the Traveling Salesman Problem (TSP) (Jünger et al., 1995; Gutin & Punnen, 2006), a classical combinatorial optimization problem with numerous algorithms proposed in literature, including heuristic algorithms and solvers (Rosenkrantz et al., 1977; Golden et al., 1980; Optimization et al., 2020; Applegate et al., 2006; Helsgaun, 2017), and approaches based on training deep neural networks (Kool et al., 2019; Deudon et al., 2018; Chen & Tian, 2019; Nazari et al., 2018). Specifically, given a set of n nodes with their coordinates, the TSP task is to find the shortest route that traverses all nodes from the starting node and finally returns to the starting node.
Our optimization process with LLMs starts from 5 randomly generated solutions, and each optimization step produces at most 8 new solutions. We present the meta-prompt in Figure 20 of Appendix C.1. We generate the problem instances by sampling n nodes with both x and y coordinates in [-100, 100]. We use the Gurobi solver (Optimization et al., 2020) to construct the oracle solutions and compute the optimality gap for all approaches, where the optimality gap is defined as the difference between the
| 2309.03409#19 | Large Language Models as Optimizers |
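Once tour lengths are available, the optimality gap (completed in a later chunk: the difference between the candidate tour length and the oracle length, divided by the oracle length) is one line of arithmetic. The sketch below uses hypothetical coordinates and a placeholder oracle length rather than the Gurobi solution:

```python
import math
import random

# Hypothetical TSP instance: n nodes with x and y coordinates sampled in [-100, 100].
random.seed(0)
n = 10
nodes = [(random.uniform(-100, 100), random.uniform(-100, 100)) for _ in range(n)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    """Length of the closed tour that visits the nodes in the given order."""
    return sum(dist(nodes[order[i]], nodes[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def optimality_gap(candidate_order, oracle_length):
    """(candidate length - oracle length) / oracle length."""
    return (tour_length(candidate_order) - oracle_length) / oracle_length

candidate = list(range(n))              # e.g. a tour proposed by the optimizer LLM
oracle_length = tour_length(candidate)  # placeholder; the paper obtains this from Gurobi
print(f"gap: {optimality_gap(candidate, oracle_length):.2%}")
```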
2309.03852 | 19 | # 3 Training Stability of FLM-101B
Models beyond 100B parameters [49; 80] usually suffer from a number of notorious stability issues, including loss divergence, gradient explosion, and numerical overflow/underflow. This not only inflates the cost of searching for feasible hyperparameters such as optimal learning rates, but also intensifies ongoing maintenance during training, such as babysitting, issue resolution, data adjustment, and rebooting. Moreover, it makes the budget of the whole project unpredictable. We have undertaken the following efforts to mitigate these issues.
Loss Prediction. The Tensor Programs theories [75; 28] unveil universal relations across the training dynamics of a series of models as the model width tends to infinity. For certain classes of hyperparameters, this results in a parameterized mapping of their optimal values between a small model and its larger counterparts, which is termed µP [76]. Two important insights are:
• The wider, the better: theoretically, under µP transfer, a wider model will always yield lower loss than its narrower counterparts when exposed to identical data [76]. As a direct corollary, if a narrow model converges, its wider counterparts will always converge.
4 https://github.com/NVIDIA/Megatron-LM
[Figure: training-loss curves over processed tokens (billions), spanning the 16B, 51B, and 101B stages] | 2309.03852#19 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
2309.03409 | 20 |
Table 3: Results of the Traveling Salesman Problem (TSP) with different numbers of nodes n, where each n contains 5 problems. "# steps" reports the mean ± standard error of optimization steps for successful runs that find the optimal solution. "# successes" counts the number of problems for which OPRO finds the optimal solution. When no optimal solution is found for any evaluated problem, the corresponding number of steps is N/A. | 2309.03409#20 | Large Language Models as Optimizers |
2309.03852 | 20 |
Figure 2: Training loss for FLM-101B models.
• Loss prediction: the loss value of a large model is predictable using the loss of its smaller counterparts, as claimed in the GPT-4 technical report [36]. For the first time in the open-source world, µScaling [77] provides evidence that loss prediction can be achieved by combining µP [76] and (a modified) scaling law [23; 18; 19]. | 2309.03852#20 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
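As a toy illustration of loss prediction (not the µScaling procedure itself), one can fit a saturating power law to proxy-model losses and extrapolate it to a larger width. The data points and functional form below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (width, loss) pairs from small proxy runs at the same training step.
widths = np.array([256.0, 512.0, 1024.0, 2048.0])
losses = np.array([4.10, 3.72, 3.45, 3.26])

def scaling_law(width, a, b, c):
    """Simple saturating power law: loss = a * width**(-b) + c."""
    return a * width ** (-b) + c

params, _ = curve_fit(scaling_law, widths, losses, p0=[10.0, 0.3, 2.0], maxfev=10_000)
print(f"predicted loss at width 10240: {scaling_law(10_240.0, *params):.2f}")
```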
2309.03409 | 21 |
| n | optimality gap (%): NN | optimality gap (%): FI | optimality gap (%): text-bison | optimality gap (%): gpt-3.5-turbo | optimality gap (%): gpt-4 | # steps (# successes): text-bison | # steps (# successes): gpt-3.5-turbo | # steps (# successes): gpt-4 |
|---|---|---|---|---|---|---|---|---|
| 10 | 13.0 ± 1.3 | 3.2 ± 1.4 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | 40.4 ± 5.6 (5) | 46.8 ± 9.3 (5) | 9.6 ± 3.0 (5) |
| 15 | 9.4 ± 3.7 | 1.2 ± 0.6 | 4.4 ± 1.3 | 1.2 ± 1.1 | 0.2 ± 0.2 | N/A (0) | 202.0 ± 41.1 (4) | 58.5 ± 29.0 (4) |
| 20 | 16.0 ± 3.9 | 0.2 ± 0.1 | 30.4 ± 10.6 | 4.4 ± 2.5 | 1.4 ± 0.6 | N/A (0) | 438.0 ± 0.0 (1) | 195.5 ± 127.6 (2) |
| 50 | 19.7 ± 3.1 | 9.8 ± 1.5 | 219.8 ± 13.7 | 133.0 ± 6.8 | 11.0 ± 2.6 | N/A (0) | N/A (0) | N/A (0) |

| 2309.03409#21 | Large Language Models as Optimizers |
2309.03852 | 21 | Based on these findings, our method for ensuring training stability is as follows: we first determine the data distribution before the FLM-16B training starts. Next, we perform a grid search on three hyperparameters: the learning rate, the initialization standard deviation, and the softmax temperature in the output layer. This grid search is performed by running a proxy model (fewer than 100M parameters) with a hidden state dimension ("model width") of 256 and a head number of 2. All the other structural hyperparameters and training data of the proxy model are identical to those of FLM-16B. A single run of grid search takes 24.6 hours with data parallelism on 6 nodes, which is equivalent to 6 hours per run given our 24-node infrastructure. Through this grid search, we find a group of well-performing hyperparameters: learning rate = 4e-4, standard deviation = 1.6e-2, and softmax temperature = 2.0. Transferring these hyperparameters to the 16B model via µP [76] led to a seamless training experience devoid of instabilities. Combined with MSG [78], we also witness no post-growth divergence in FLM-51B and FLM-101B. | 2309.03852#21 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
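A minimal version of this proxy-model grid search might look as follows; the candidate grids and the `train_proxy_and_get_loss` stand-in (which returns a synthetic score so the sketch runs end to end) are assumptions, not the paper's actual search space:

```python
import itertools

# Illustrative candidate grids, not the grids used in the paper.
learning_rates = [1e-4, 2e-4, 4e-4, 8e-4]
init_stds = [4e-3, 8e-3, 1.6e-2, 3.2e-2]
softmax_temps = [1.0, 2.0, 4.0]

def train_proxy_and_get_loss(lr, init_std, temp):
    """Placeholder for training the <100M proxy model (width 256, 2 heads) on the
    same data and returning its loss; a synthetic score keeps the sketch runnable."""
    return abs(lr - 4e-4) * 1e3 + abs(init_std - 1.6e-2) * 1e2 + abs(temp - 2.0)

best = min(itertools.product(learning_rates, init_stds, softmax_temps),
           key=lambda cfg: train_proxy_and_get_loss(*cfg))
print("best (lr, init_std, softmax_temp):", best)
# The winning values are then transferred to the 16B model via muP-style scaling.
```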
2309.03409 | 22 | distance in the solution constructed by the evaluated approach and the distance achieved by the oracle solution, divided by the distance of the oracle solution. Besides evaluating OPRO with different LLMs including text-bison, gpt-3.5-turbo and gpt-4, we also compare OPRO to the following heuristics:
• Nearest Neighbor (NN). Starting from an initial node, the solution is constructed with the nearest neighbor heuristic: at each step, among the remaining nodes that are not included in the current partial solution, NN selects the node with the shortest distance to the end node of the partial solution and adds it as the new end node. The process finishes when all nodes have been added to the solution (both heuristics are sketched in code after the Farthest Insertion description below). | 2309.03409#22 | Large Language Models as Optimizers |
2309.03852 | 22 | The full training loss curve is presented in Figure 2. The first stage (16B) stably goes through 246B tokens. Immediately afterwards, FLM grows from 16B to 51B. As expected, the training is stable. More importantly, we observe that the loss curve becomes steeper. This matches the intuition that a larger model reduces the loss faster per step. Subsequently, FLM grows to 101B. Although the training data for the 51B stage are only 40B tokens, the 101B training remains stable, and the loss curve becomes slightly steeper again. This loss curve demonstrates the effectiveness of the growth strategy.
Our implementations of µP are largely consistent with those in µScaling [77], with modifications to handle the rotary embedding. Thus, the intermediate loss ranges for FLM-16B are also predictable with the results from multiple proxy widths at the same steps. | 2309.03852#22 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
2309.03409 | 23 | • Farthest Insertion (FI). One caveat of the nearest neighbor heuristic is that it does not take the distance between the start and end node into consideration when constructing partial solutions. To address this issue, FI aims to optimize the cost of inserting new nodes into the partial solution at each step. Define the minimal insertion cost of adding a new node k as c(k) = min_{(i, j)} [d(i, k) + d(k, j) - d(i, j)], where i and j are adjacent nodes in the current tour, and d(·, ·) represents the distance between two nodes. At each step, FI adds the new node that maximizes the minimal insertion cost. | 2309.03409#23 | Large Language Models as Optimizers |
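For concreteness, both baseline heuristics described in the two bullets above can be implemented in a few lines. This is an illustrative sketch assuming Euclidean distances, not the authors' evaluation code:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor(nodes, start=0):
    """NN: repeatedly append the unvisited node closest to the current end node."""
    tour, remaining = [start], set(range(len(nodes))) - {start}
    while remaining:
        nxt = min(remaining, key=lambda k: dist(nodes[tour[-1]], nodes[k]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def farthest_insertion(nodes, start=0):
    """FI: c(k) = min over tour edges (i, j) of d(i,k) + d(k,j) - d(i,j); add the
    node with the largest c(k) and insert it at its cheapest position."""
    tour, remaining = [start], set(range(len(nodes))) - {start}

    def insertion_cost(k):
        if len(tour) == 1:
            return dist(nodes[tour[0]], nodes[k]), 1
        return min(
            (dist(nodes[tour[i]], nodes[k])
             + dist(nodes[k], nodes[tour[(i + 1) % len(tour)]])
             - dist(nodes[tour[i]], nodes[tour[(i + 1) % len(tour)]]), i + 1)
            for i in range(len(tour))
        )

    while remaining:
        k = max(remaining, key=lambda m: insertion_cost(m)[0])
        _, pos = insertion_cost(k)
        tour.insert(pos, k)
        remaining.remove(k)
    return tour

nodes = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 20)]
print(nearest_neighbor(nodes), farthest_insertion(nodes))
```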
2309.03852 | 23 | Mixed Precision with Bfloat16. We apply mixed-precision training to save run-time memory and reduce time costs. Specifically, we choose Bfloat16 instead of FP16 due to its superior precision for values approaching zero, making it more suitable for µP. As a result, we do not encounter the FP16 underflow issue reported by [76]. To our knowledge, the FLM models are currently the largest ones successfully trained with mixed precision + µP. Moreover, Bfloat16 negates the need for loss scale adjustments, making our training procedure more promising and reproducible.
# 4 Benchmark Evaluation
Many existing benchmarks (e.g., Open LLM) focus on assessing the knowledgeability of LLMs. In this section, we discuss the results of FLM on these benchmarks. We argue that knowledge alone might not comprehensively reflect an LLM's capability (see Section 4.2 for more details). Thus, in addition to the common benchmark evaluation, we borrow the concept of IQ tests and evaluate LLMs with some specific tasks in Section 5.
Cost Estimation Method. Due to the considerable computational expense of LLMs, we also emphasize their associated costs in our experimental results. However, it is hard to directly compare
| 2309.03852#23 | FLM-101B: An Open LLM and How to Train It with $100K Budget |
2309.03409 | 24 | We present the results in Table 3. We randomly generate 5 problem instances for each number of nodes n. In addition to measuring the optimality gap, on problems where the LLM finds the optimal solutions, we also show the number of optimization steps taken to reach the global optimum. First, we observe that gpt-4 significantly outperforms gpt-3.5-turbo and text-bison across all problem sizes. Specifically, on smaller-scale problems, gpt-4 reaches the global optimum about 4× faster than other LLMs. On larger-scale problems, especially with n = 50, gpt-4 still finds solutions with a comparable quality to heuristic algorithms, while both text-bison and gpt-3.5-turbo get stuck at local optima with up to 20× worse optimality gaps.
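For concreteness, the optimality gap reported here can be computed as the relative excess tour length over the optimal tour; a small, self-contained sketch (not the paper's evaluation code):

```python
import math

def tour_length(points, tour):
    """Length of the closed tour that visits `points` in the order given by `tour`."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def optimality_gap(points, tour, optimal_length):
    """Relative gap (%) between a proposed tour and the optimal tour length."""
    return 100.0 * (tour_length(points, tour) - optimal_length) / optimal_length

# Hypothetical 5-node instance purely for illustration.
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 2)]
best = tour_length(pts, [0, 1, 4, 2, 3])        # treat this tour as the optimum
print(f"{optimality_gap(pts, [0, 2, 4, 1, 3], best):.1f}%")
```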
On the other hand, the performance of OPRO degrades dramatically on problems with larger sizes. When n = 10, all LLMs find the optimal solutions for every evaluated problem; as the problem size gets larger, the OPRO optimality gaps increase quickly, and the farthest insertion heuristic starts to outperform all LLMs in the optimality gap. | 2309.03409#24 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 24 |
the actual cost of LLMs due to their different infrastructures, and the different costs incurred on different hardware. To objectively compare training costs, we use the number of floating-point operations for training as the cost estimation index, which can be estimated from the model's hyperparameters, configuration, and training data [35]. Since many models do not release the complete training configuration (e.g., GPT-3, LLAMA series), we estimate FLOPs within a range⁵. | 2309.03852#24 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 25 | Limitations. We would like to note that OPRO is designed neither to outperform the state-of-the-art gradient-based optimization algorithms for continuous mathematical optimization, nor to surpass the performance of specialized solvers for classical combinatorial optimization problems such as TSP. Instead, the goal is to demonstrate that LLMs are able to optimize different kinds of objective functions simply through prompting, and reach the global optimum for some small-scale problems. Our evaluation reveals several limitations of OPRO for mathematical optimization. Specifically, the length limit of the LLM context window makes it hard to fit large-scale optimization problem descriptions in the prompt, e.g., linear regression with high-dimensional data, and traveling salesman problems with a large set of nodes to visit. In addition, the optimization landscape of some objective functions is too bumpy for the LLM to propose a correct descending direction, causing the optimization to get stuck halfway. We further elaborate on the observed failure cases in Appendix A.
I have some texts along with their corresponding scores. The texts are arranged in ascending order based on their scores, where higher scores indicate better quality.
text: Let's figure it out! score: 61
text: Let's solve the problem. score: 63
(. . . more instructions and scores . . . ) | 2309.03409#25 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 25 | For monolingual LLMs, e.g., GPT-3, the cost from monolingual data is equal to the total cost. The computational cost of GPT-3 is calculated as 376.41 (±53.77) zettaFLOPs, and LLAMA-2 (13B) as 210.37 (±28.77) zettaFLOPs. Because the cost is linear in both model parameters and training data [19], the cost of the remaining LLAMA models can be derived easily. For bilingual or multilingual models, it is necessary to estimate based on the amount of data in the corresponding language. The total cost of GLM-130B is 421.60 zettaFLOPs. We know that the data ratio of English and Chinese is 1:1. Hence, the cost of GLM-130B for English is 210.80 zettaFLOPs, and the same for Chinese. The data ratio of FLM-101B is 53.5% : 46.5% for English and Chinese. The total cost of FLM-101B is 52.76 zettaFLOPs. According to the data ratio, the cost for English and Chinese is 28.22 zettaFLOPs and 24.54 zettaFLOPs, respectively.
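As a back-of-the-envelope illustration of this kind of estimate (not the exact procedure of [35], which uses the full model configuration and accounts for checkpoint activation, hence the reported ranges), the common C ≈ 6·N·D rule of thumb and the language-ratio split can be sketched as:

```python
def train_flops_zetta(n_params: float, n_tokens: float) -> float:
    """Rough training cost via C ~= 6 * N * D, in zettaFLOPs (1 zetta = 1e21)."""
    return 6.0 * n_params * n_tokens / 1e21

def split_by_language(total_zetta: float, en_ratio: float) -> tuple[float, float]:
    """Attribute a total cost to English / Chinese by the training-data ratio."""
    return en_ratio * total_zetta, (1.0 - en_ratio) * total_zetta

# ~315 zettaFLOPs for a GPT-3-sized run (175B parameters, 300B tokens); the reported
# 376.41 (±53.77) is higher because its range also covers checkpoint activation.
print(train_flops_zetta(175e9, 300e9))

# FLM-101B: splitting the reported 52.76 zettaFLOPs by the 53.5% : 46.5% data ratio
# gives roughly (28.2, 24.5), matching the numbers quoted above.
print(split_by_language(52.76, 0.535))
```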
# 4.1 Open LLM Evaluation | 2309.03852#25 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 26 | text: Let's figure it out! score: 61
text: Let's solve the problem. score: 63
(. . . more instructions and scores . . . )
The following exemplars show how to apply your text: you replace <INS> in each input with your text, then read the input and give an output. We say your output is wrong if your output is different from the given output, and we say your output is correct if they are the same.
input: Q: Alannah, Beatrix, and Queen are preparing for the new school year and have been given books by their parents. Alannah has 20 more books than Beatrix. Queen has 1/5 times more books than Alannah. If Beatrix has 30 books, how many books do the three have together? A: <INS> output: 140
(. . . more exemplars . . . )
Write your new text that is different from the old ones and has a score as high as possible. Write the text in square brackets. | 2309.03409#26 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 26 | # 4.1 Open LLM Evaluation
Open LLM is an open-source project⁶. Its goal is to track and evaluate open-sourced LLMs and chatbots. Open LLM contains four tasks: ARC-Challenge (ARC for short), HellaSwag, MMLU, and TruthfulQA. The Open LLM Leaderboard uses the average score over these four tasks as its metric.
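As a quick sanity check of this metric, the leaderboard average is simply the arithmetic mean of the four task scores; using FLM-101B's scores from Table 3:

```python
# Open LLM average = mean of the ARC, HellaSwag, MMLU, and TruthfulQA scores.
scores = {"ARC": 39.76, "HellaSwag": 66.23, "MMLU": 28.30, "TruthfulQA": 41.47}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 43.94, the FLM-101B entry reported in Table 3
```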
ARC: The ARC [9] dataset is proposed for grade-school level, closed-book science question-answering tasks. Most problems in ARC are solvable with everyday life experience and Wikipedia searches. Thus, a model is expected to perform better if it has been exposed to more commonsense and factual data.
HellaSwag: This is a sentence-completion task emphasizing commonsense inference [79]. We observe that the increase in HellaSwag performance is highly correlated with the reduction of training loss. This is intuitive because the training data is usually rich in common sense.
MMLU: MMLU includes 57 multiple-choice tasks covering subjects spanning STEM to social science [17]. The tasks differ significantly in complexity, with many STEM-oriented questions demanding domain-specific professional knowledge and intricate reasoning to be solved. | 2309.03852#26 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 27 | (. . . more exemplars . . . )
Write your new text that is different from the old ones and has a score as high as possible. Write the text in square brackets.
Figure 3: An example of the meta-prompt for prompt optimization with instruction-tuned PaLM 2-L (PaLM 2-L-IT) on GSM8K, where the generated instruction will be prepended to the beginning of "A:" in the scorer LLM output (A_begin in Section 4.1). <INS> denotes the position where the generated instruction will be added. The blue text contains solution-score pairs; the purple text describes the optimization task and output format; the orange text contains the meta-instructions.
# 4 APPLICATION: PROMPT OPTIMIZATION
Next, we demonstrate the effectiveness of OPRO on prompt optimization, where the objective is to find the prompt that maximizes task accuracy. We first introduce the problem setup, then illustrate the meta-prompt design.
4.1 PROBLEM SETUP | 2309.03409#27 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 27 | TruthfulQA: TruthfulQA contains 817 factual questions to detect model falsehoods caused by naively mimicking human language patterns [27]. The solutions to these questions are closely associated with English Wikipedia sources. The task probes a model's factual knowledge and resistance to popular misconceptions.
Table 3: Performance of FLM-101B and baselines including LLAMA series and GLM-130B. In order to visually compare the performance and cost, we estimate the floating-point operations (zetta = 10^21) of the training process. | 2309.03852#27 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 28 | 4.1 PROBLEM SETUP
We focus on prompt optimization for natural language tasks, where both the input and output are in the text format. The task is represented as a dataset with training and test splits, where the training set is used to calculate the training accuracy as the objective value during the optimization process, and we compute the test accuracy on the test set after the optimization finishes. While traditional optimization often requires a decently large training set, our experiment shows that a small number or fraction of training samples (e.g., 3.5% of the training set for GSM8K (Cobbe et al., 2021), 20% for Big-Bench Hard (Suzgun et al., 2022)) is sufficient. The objective function evaluator is an LLM to which the optimized prompt will be applied, and it can be the same or different from the LLM for optimization. We denote the LLM for objective function evaluation as the scorer LLM, and the LLM for optimization as the optimizer LLM.
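A minimal sketch of this objective evaluation, where `scorer_llm`, `build_prompt`, and `is_correct` are hypothetical stand-ins for the scorer model call, the prompt formatting, and the answer check (the released pipeline differs in its details):

```python
def score_instruction(instruction, examples, scorer_llm, build_prompt, is_correct):
    """Training accuracy (%) of `instruction`: the objective value OPRO maximizes."""
    hits = sum(
        is_correct(scorer_llm(build_prompt(instruction, q)), a) for q, a in examples
    )
    return 100.0 * hits / len(examples)

# Tiny fake scorer for illustration; a real run would call PaLM 2-L or text-bison.
fake_examples = [("2 + 3", "5"), ("10 - 4", "6")]
fake_scorer = lambda prompt: str(eval(prompt.splitlines()[-1]))  # toy "model"
build_prompt = lambda ins, q: f"{ins}\n{q}"                      # Q_begin-style formatting
is_correct = lambda out, ans: out.strip() == ans
print(score_instruction("Solve the problem.", fake_examples,
                        fake_scorer, build_prompt, is_correct))   # 100.0
```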
The output of the optimizer LLM is an instruction, which is concatenated to the question part of every exemplar and prompts the scorer LLM. We consider the following positions to insert the instruction: | 2309.03409#28 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 28 |
| Model | Cost (zettaFLOPs) | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|---|---|---|---|---|---|---|
| LLAMA-2 (13B) | 201.37 (±28.77) | 58.66 | 59.39 | 82.13 | 55.77 | 37.38 |
| LLAMA-2 (7B) | 106.60 (±15.23) | 54.32 | 53.07 | 78.59 | 46.87 | 38.76 |
| LLAMA (13B) | 94.81 (±13.54) | 56.08 | 56.23 | 80.93 | 47.67 | 39.48 |
| LLAMA (7B) | 49.54 (±7.08) | 49.72 | 51.02 | 77.82 | 35.71 | 34.33 |
| GLM-130B | 210.80 | 48.11 | 42.15 | 67.91 | 42.59 | 39.80 |
| FLM-101B | 28.22 | 43.94 | 39.76 | 66.23 | 28.30† | 41.47 |

† 44.50 for a knowledge-enhanced eFLM-16B (Section 2.2, 4.2). | 2309.03852#28 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 29 | • Q_begin: the instruction is added before the original question.
• Q_end: the instruction is added after the original question.
• A_begin: the instruction is added to the beginning of the scorer LLM output. This is applicable to pretrained LLMs without instruction tuning, where the prompt is formatted as a sequence of QA pairs.
We exemplify these prompting formats in Appendix B.
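A sketch of how these three placements could be realized in code (illustrative formatting only; the exact templates are given in Appendix B):

```python
def format_prompt(instruction: str, question: str, position: str) -> str:
    """Place the optimized instruction relative to the question / answer."""
    if position == "Q_begin":   # instruction before the question
        return f"{instruction}\n{question}\nA:"
    if position == "Q_end":     # instruction after the question
        return f"{question}\n{instruction}\nA:"
    if position == "A_begin":   # instruction starts the answer (pretrained, non-instruction-tuned scorer)
        return f"Q: {question}\nA: {instruction}"
    raise ValueError(f"unknown position: {position}")

print(format_prompt("Let's think step by step.", "What is 12 * 7?", "A_begin"))
```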
4.2 META-PROMPT DESIGN
Figure 3 shows an example of the meta-prompt for prompt optimization on GSM8K (Cobbe et al., 2021). More details are as follows.
Optimization problem examples. The problem description includes a few examples taken from the training set to demonstrate the task for the generated instructions. For example, from the input-output pair in Figure 3, we can infer this is a math word problem. The input-output pair also demonstrates the position where the generated instruction will be added, and this is essential for the optimizer LLM to generate instructions of the same style. In each optimization step, we add several (e.g., three) training examples to the meta-prompt, either by randomly sampling the training set or by choosing the ones that the previous instructions fall short on. | 2309.03409#29 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 29 | (Table 3 footnote) † 44.50 for a knowledge-enhanced eFLM-16B (Section 2.2, 4.2).
Table 3 details the performance of FLM-101B and strong baselines, including the LLAMA series and GLM-130B. Because GPT-3 is closed-source, we could not obtain the token probabilities needed for a fair comparison, so GPT-3 is not listed here. The GLM-130B results are obtained from our own run on its open-sourced checkpoint.
⁵ This range originates from the use of checkpoint activation. Please check [35] for more details.
⁶ https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
Results. Among all the compared models, FLM-101B ranks last with an average of 43.94. However, a closer look at the nature of these tasks shows that this does not necessarily indicate an inferior model or training procedure. | 2309.03852#29 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 30 | Optimization trajectory. The optimization trajectory includes instructions generated from the past optimization steps, along with their scores. The old instructions and scores are sorted by the score in ascending order. The score is the training accuracy in prompt optimization. We only keep instructions with the highest scores in the meta-prompt in consideration of the LLM context length limit.
Meta-instructions. We also add meta-instructions: the instructions to the optimizer LLM that explain the optimization goal and instruct the model how to use the above information. The meta-instructions may also specify the desired generated instruction format for easier parsing.
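Putting the three components above together, a minimal sketch of how such a meta-prompt could be assembled (the wording is paraphrased from Figure 3; the exact templates are in Appendix C.2):

```python
def build_meta_prompt(trajectory, exemplars, max_kept=20):
    """Assemble a meta-prompt from (instruction, score) pairs and task exemplars.

    `trajectory` holds (instruction, training accuracy) pairs; only the best
    `max_kept` are shown, sorted in ascending order of score.
    """
    kept = sorted(trajectory, key=lambda pair: pair[1])[-max_kept:]
    lines = ["I have some texts along with their corresponding scores. The texts are",
             "arranged in ascending order based on their scores.\n"]
    lines += [f"text: {ins}\nscore: {score}\n" for ins, score in kept]
    lines.append("The following exemplars show how to apply your text:")
    lines += [f"input: Q: {q} A: <INS>\noutput: {a}\n" for q, a in exemplars]
    lines.append("Write your new text that is different from the old ones and has "
                 "a score as high as possible. Write the text in square brackets.")
    return "\n".join(lines)

print(build_meta_prompt(
    trajectory=[("Let's figure it out!", 61), ("Let's solve the problem.", 63)],
    exemplars=[("If Beatrix has 30 books ..., how many books do the three have together?", 140)],
))
```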
# 5 PROMPT OPTIMIZATION EXPERIMENTS
We present the evaluation results for prompt optimization in this section. Our experiments demonstrate that OPRO brings a significant performance gain across the board, with different combinations of LLMs as the optimizer and the scorer.
5.1 EVALUATION SETUP
Models. The LLMs we use as the optimizer and the scorer are:
⢠Optimizer LLM: Pre-trained PaLM 2-L (Anil et al., 2023), instruction-tuned PaLM 2-L (denoted PaLM 2-L-IT), text-bison, gpt-3.5-turbo, and gpt-4.
Scorer LLM: Pre-trained PaLM 2-L and text-bison. | 2309.03409#30 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 30 | (i) MMLU typically requires domain knowledge to solve. In our training of FLM-101B, no English textbook or sample exam questions are intentionally used. Nevertheless, in an FLM variant that incorporates this knowledge with FreeLM objectives (eFLM-16B, Section 2.2), even a 16B FLM model can outperform GLM-130B, supporting our claims here.
(ii) As aforementioned, TruthfulQA, ARC, and HellaSwag focus more on common sense and Wiki-level knowledge, and performance on them improves as the amount of training data grows and the training loss decreases. With less than 0.16T English data (about one-tenth of LLAMA-2), FLM-101B already achieves the best accuracy of 41.47 among all the baselines on TruthfulQA. On ARC and HellaSwag, FLM-101B is comparable to GLM-130B with a similar amount of English data (approximately 0.2T). Also, the training data of GLM-130B includes ARC and HellaSwag, as expressly claimed in [80]. In our understanding, superior performance of FLM-101B can be expected on these three tasks if it were exposed to more training data.
# 4.2 Evaluation on the Professional Knowledge-Enhanced Version | 2309.03852#30 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 31 | • Scorer LLM: Pre-trained PaLM 2-L and text-bison.
With pre-trained PaLM 2-L as the scorer, the optimizer LLM generates A_begin instructions. Since text-bison has been instruction-tuned, the optimizer LLM generates Q_begin and Q_end instructions when text-bison is used as the scorer.
Benchmarks. Our primary evaluation benchmarks are GSM8K (Cobbe et al., 2021) and Big-Bench Hard (BBH) (Suzgun et al., 2022). GSM8K is a benchmark of grade school math word problems with 7,473 training samples and 1,319 test samples, where chain-of-thought prompting (Wei et al., 2022) and the zero-shot instruction "Let's think step by step." (Kojima et al., 2022) have drastically improved the performance over the standard prompting. BBH is a suite of 23 challenging BIG-Bench tasks (Srivastava et al., 2022) that covers a wide range of topics beyond arithmetic reasoning, including symbolic manipulation and commonsense reasoning. Each task contains up to 250 examples in total.
| 2309.03409#31 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 31 | # 4.2 Evaluation on the Professional Knowledge-Enhanced Version
We have also conducted experiments on a knowledge-enhanced version (eFLM-16B, detailed in Section 2.2) of FLM to validate the effect of using domain-specific knowledge data. To reduce the training cost, we continue to train the smallest FLM-16B with teacher signals from a combination of (i) part of the auxiliary training data of MMLU [17], (ii) exam questions in domains and formats similar to C-Eval [20]⁷, and (iii) other domain knowledge data. Note that eFLM-16B is not a typical fine-tuning with additional data, which may affect the language capability of an LLM. Recall that the FLM series uses FreeLM as its backbone, which can learn from both language and teacher signals. In this training, we preserve the language signal. Table 4 lists the results of eFLM-16B and baselines on C-Eval.
Table 4: Performance of eFLM-16B and baselines on C-Eval. In this table, eFLM-16B refers to the professional-knowledge-enhanced FLM-16B. Note that the C-Eval leaderboard only keeps one decimal place for the evaluation results. | 2309.03852#31 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 32 |
To examine the transferability of the optimized instructions, we also evaluate the instructions optimized for GSM8K on two other mathematical reasoning datasets, i.e., MultiArith (Roy & Roth, 2016) and AQuA (Ling et al., 2017).
Implementation details. We set the temperature to be 0 when evaluating the performance of generated instructions, in which case the scorer LLM greedily decodes. Unless otherwise specified, we set the default temperature to be 1.0 for optimizer LLMs to generate diverse and creative instructions. At each optimization step, we prompt the optimizer LLM with the meta-prompt 8 times to generate 8 instructions, then we add these instructions with their training scores to the optimization trajectory in the meta-prompt. Our meta-prompt at each step contains the best 20 instructions so far and 3 randomly picked exemplars from the training set. We study the effect of different hyperparameters in ablation studies (Section 5.3). Appendix C.2 presents the full meta-prompts for different optimizer LLMs.
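A compact sketch of the outer loop with these settings, where `optimizer_llm`, `score_instruction`, `sample_exemplars`, and `build_meta_prompt` are assumed stand-ins rather than the released implementation:

```python
def opro(optimizer_llm, score_instruction, sample_exemplars, build_meta_prompt,
         n_steps, per_step=8, max_kept=20, n_exemplars=3):
    """One possible OPRO outer loop: at every step, sample 8 candidate instructions
    at temperature 1.0, score each on the training subset, and keep the best 20
    (instruction, score) pairs in the meta-prompt for the next step."""
    trajectory = []                                   # (instruction, training accuracy)
    for _ in range(n_steps):                          # n_steps is a free budget here
        meta = build_meta_prompt(trajectory[-max_kept:], sample_exemplars(n_exemplars))
        for _ in range(per_step):
            candidate = optimizer_llm(meta, temperature=1.0)
            trajectory.append((candidate, score_instruction(candidate)))
        trajectory.sort(key=lambda pair: pair[1])     # ascending by score
    return trajectory[-1]                             # best instruction found
```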
5.2 MAIN RESULTS
We show prompt optimization curves on GSM8K and two BBH tasks in this section. The curves on other BBH tasks are deferred to Appendix D, and the tables containing all accuracy numbers are in Appendix E.
# 5.2.1 GSM8K | 2309.03409#32 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03409 | 33 | # 5.2.1 GSM8K
For prompt optimization, we randomly sample 3.5% of the examples from the GSM8K training set. The same subset is used throughout optimization, so that the task accuracies computed at intermediate optimization steps approximate the training accuracy on all 7,473 training examples. This balances the evaluation cost against generalization performance.
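For concreteness, a tiny sketch of such a fixed-subset evaluation (an assumption-level illustration, not the released code):

```python
import random

def fixed_training_subset(train_set, fraction=0.035, seed=0):
    """Draw one fixed ~3.5% subset and reuse it at every optimization step, so the
    intermediate scores approximate accuracy on the full 7,473-example training set."""
    rng = random.Random(seed)
    k = max(1, int(fraction * len(train_set)))
    return rng.sample(train_set, k)

subset = fixed_training_subset(list(range(7473)))
print(len(subset))  # 261 examples, reused throughout the optimization
```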
Figure 1(a) in Section 1 shows prompt optimization curves with pre-trained PaLM 2-L as the scorer and PaLM 2-L-IT as the optimizer, where the initial instruction is "Let's solve the problem" with an (approximated, and same below) training accuracy of 60.5. We observe that the optimization curve shows an overall upward trend with several leaps throughout the optimization process, for example:
• "Let's think carefully about the problem and solve it together." at Step 2 with the training accuracy 63.2;
• "Let's break it down!" at Step 4 with training accuracy 71.3;
• "Let's calculate our way to the solution!" at Step 5 with training accuracy 73.9;
• "Let's do the math!" at Step 6 with training accuracy 78.2. | 2309.03409#33 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 33 | Results. Enhanced with professional knowledge, eFLM-16B shows significant improvements. On the MMLU task, incorporating the teacher signals with professional knowledge data results in a score of 44.50 for eFLM-16B (see Table 3), which surpasses GLM-130B (42.59), a model that also uses multi-task data in the related domain [80]. For comparison, the MMLU score of the unenhanced FLM-16B is 27.02. On C-Eval tasks⁸, we observe that eFLM-16B performs better than GLM-130B by about 2 points, while the average C-Eval score of the vanilla FLM-16B is 27.0, which underperforms GLM-130B. These results suggest that evaluation with professional knowledge may not fully reflect the capability of LLMs, particularly when different LLMs are trained on different data collections, some of which may not come with a clear list.
# 4.3 Evaluation of the Growth Strategy | 2309.03852#33 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 34 | The optimization curves also generally show a decrease of the variance among the accuracies of instructions generated at each step, indicating that the optimizer LLM generates distributionally better instructions throughout the optimization.
Next, we present the results of generating Q_begin instructions with the text-bison scorer and the PaLM 2-L-IT optimizer, starting from an empty instruction with a 57.1 training accuracy. The optimization curve in Figure 4(a) shows a similar upward trend, during which a few leaps in the training accuracy include:
• "Solve the following problems using the given information." at Step 2 with training accuracy 59.8;
• "Solve the following problems by applying the given information and using the appropriate mathematical operations." at Step 3 with training accuracy 64.0;
• "Let's read the problem carefully and identify the given information. Then, we can create an equation and solve for the unknown variable." at Step 4 with training accuracy 67.0; | 2309.03409#34 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
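The instruction trajectories reported for GSM8K come out of a single generate-and-score loop. The sketch below is a minimal, illustrative OPRO-style loop; `llm_generate` and `score_instruction` are hypothetical callables standing in for the optimizer-LLM call and the training-accuracy evaluation, and the meta-prompt wording here is not the paper's exact template.

```python
# Minimal sketch of an OPRO-style prompt-optimization loop (illustrative only).
# `llm_generate(meta_prompt, n)` and `score_instruction(instr)` are hypothetical
# callables supplied by the caller: the first queries the optimizer LLM, the
# second returns an instruction's training accuracy on the task.

def build_meta_prompt(scored, exemplars, top_k=20):
    # keep the top-k (instruction, accuracy) pairs, shown in ascending score order
    top = sorted(scored.items(), key=lambda kv: kv[1])[-top_k:]
    parts = ["Below are instructions with their training accuracies:"]
    parts += [f"text: {instr}\nscore: {acc:.1f}" for instr, acc in top]
    parts.append("Here are some task examples:")
    parts += list(exemplars)
    parts.append("Write a new instruction that achieves a higher score.")
    return "\n\n".join(parts)

def opro(initial, exemplars, llm_generate, score_instruction, steps=200, per_step=8):
    scored = {instr: score_instruction(instr) for instr in initial}
    for _ in range(steps):
        meta_prompt = build_meta_prompt(scored, exemplars)
        for candidate in llm_generate(meta_prompt, per_step):
            if candidate not in scored:          # skip already-evaluated duplicates
                scored[candidate] = score_instruction(candidate)
    return max(scored, key=scored.get)           # best instruction found so far
```

The key design choice mirrored here is that the optimizer only ever sees previously generated solutions together with their scores, so it can propose better instructions without any gradient information.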
2309.03852 | 34 | # 4.3 Evaluation of the Growth Strategy
Our core method for reducing computational cost is the growth strategy. We would like to answer the question of whether our growth strategy is effective in knowledge inheritance, and the trajectory of how model capabilities grow with size. Hence, we evaluate the performance of FLM on all the stages: 16B, 51B, and 101B. The training data for each stage is 0.245T, 0.04T, and 0.027T, respectively, in an accumulative manner according to the growth setting. Table 5 shows the performance of FLM models at each stage.
Footnotes: 7. C-Eval can be considered as a Chinese version of MMLU. 8. The scores are achieved on the test set by submitting to the C-Eval platform.
Table 5: Performance of the three stages of FLM on Open LLM. To reduce the computational cost during evaluation, we sample 20% and 30% items for HellaSwag and MMLU tasks, respectively. Parameters Training Data Average ARC Hellaswag MMLU TruthfulQA | 2309.03852#34 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 35 | • "I'm always down for solving a math word problem together. Just give me a moment to read and understand the problem. Then, I'll create an equation that models the problem, which I'll solve for the unknown variable. I also may or may not use some helpful diagrams or visuals to understand the problem. Lastly, be sure to allow me some time to carefully check my work before submitting any responses!" at Step 29 with training accuracy 70.1.
Table 4: Test accuracies on GSM8K. We show the instruction with the highest test accuracy for each scorer-optimizer pair. | 2309.03409#35 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 35 | Parameters  Training Data  Average  ARC    HellaSwag  MMLU   TruthfulQA
16B         245.37B        39.19    32.25  58.57      27.02  38.92
51B         39.64B         41.79    35.32  64.04      27.66  40.12
101B        26.54B         44.41    39.76  67.88      28.54  41.47
Results. As expected, the performance of FLM improves with the increase in model size. FLM-101B achieves the best performance on almost all tasks. This means that our model inherits knowledge from the previous stage after each growth. We also observe that the 101B model improves the performance scores more significantly than the 51B model, with less data. This indicates that the models are successfully incorporating new weights in training after growth, and taking advantage of larger model sizes when the loss is low. Interestingly, the performance on ARC and HellaSwag increases steadily and significantly. This corresponds exactly to the steady decline of the model loss. Again, as we claimed in Section 4.1, when more training data is processed, FLMâs performance on Open LLM becomes better.
The above experiments evaluate the knowledge-related ability of FLM and how the performances depend on the amount and domain of training data. We also conduct an additional range of evaluations inspired by IQ tests in the following section.
# 5 Evaluations Inspired by IQ Tests | 2309.03852#35 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
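The growth results above hinge on the new, larger model inheriting the function of the smaller one at the moment of growth. The snippet below is only a generic, Net2Net-style illustration of function-preserving width growth, written as an assumed stand-in for the idea; it is not the specific growth operator used to train FLM-101B.

```python
import numpy as np

def widen_pair(W1, b1, W2, new_hidden, rng):
    """Illustrative width growth for a two-layer MLP y = W2 @ relu(W1 @ x + b1).
    The hidden layer grows from W1.shape[0] to `new_hidden` units while the
    network's function is preserved exactly."""
    old_hidden = W1.shape[0]
    # each new hidden unit copies a randomly chosen existing unit
    mapping = np.concatenate([np.arange(old_hidden),
                              rng.integers(0, old_hidden, new_hidden - old_hidden)])
    counts = np.bincount(mapping, minlength=old_hidden)   # copies per original unit
    W1_new = W1[mapping]                                  # duplicate incoming weights
    b1_new = b1[mapping]
    W2_new = W2[:, mapping] / counts[mapping]             # split outgoing weights across copies
    return W1_new, b1_new, W2_new

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=(2, 4))
x = rng.normal(size=3)
y_old = W2 @ np.maximum(W1 @ x + b1, 0)
W1n, b1n, W2n = widen_pair(W1, b1, W2, new_hidden=7, rng=rng)
y_new = W2n @ np.maximum(W1n @ x + b1n, 0)
assert np.allclose(y_old, y_new)   # the grown layer starts from the same function
```

Under this kind of scheme, training after growth continues from an unchanged loss value, which is consistent with the observation that each FLM stage builds on the knowledge of the previous one.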
2309.03409 | 36 | Baselines PaLM 2-L PaLM 2-L PaLM 2-L (Kojima et al., 2022) (Zhou et al., 2022b) A_begin A_begin A_begin Let's think step by step. Let's work this out in a step by step way to be sure we have the right answer. Let's solve the problem. PaLM 2-L A_begin (empty string) text-bison text-bison text-bison (Kojima et al., 2022) (Zhou et al., 2022b) Q_begin Q_begin Q_begin Let's think step by step. Let's work this out in a step by step way to be sure we have the right answer. Let's solve the problem. text-bison Q_begin (empty string) Ours PaLM 2-L PaLM 2-L PaLM 2-L PaLM 2-L text-bison PaLM 2-L-IT PaLM 2-L A_begin A_begin gpt-3.5-turbo A_begin gpt-4 A_begin PaLM 2-L-IT Q_begin Take a deep breath and work on this problem step-by-step. Break this down. A little bit of arithmetic and a logical approach will help us | 2309.03409#36 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 36 | # 5 Evaluations Inspired by IQ Tests
Section 4 details the evaluation of existing benchmarks, focusing on knowledge. As we discussed in Section 1, knowledge could not fully reflect the Intelligence Quotient (IQ) of LLMs. To this end, we use existing IQ-related datasets [71; 72; 53] and make necessary modifications or generate new synthetic datasets where necessary.
Specifically, the IQ test mainly considers four aspects: symbolic mapping, rule understanding, pattern mining, and anti-interference. A common key property of these tasks is that they are dependent on the inference and generalization in a new context, instead of the previously-learned knowledge. We re-organize the modified existing datasets and our newly generated datasets under these four aspects, and introduce the motivation for each aspect, as well as the detailed execution methods. | 2309.03852#36 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 37 | Q_begin Take a deep breath and work on this problem step-by-step. Break this down. A little bit of arithmetic and a logical approach will help us quickly arrive at the solution to this problem. Let's combine our numerical command and clear thinking to quickly and accurately decipher the answer. Let's work together to solve math word problems! First, we will read and discuss the problem together to make sure we understand it. Then, we will work together to find the solution. I will give you hints and help you work through the problem if you get stuck. Let's work through this problem step-by-step: text-bison text-bison text-bison text-bison Q_end gpt-3.5-turbo Q_end gpt-4 Q_begin 71.8 58.8 60.8 34.0 64.4 65.6 59.1 56.8 80.2 79.9 78.5 74.5 64.4 68.5 66.5 62.7 | 2309.03409#37 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 37 | Compared Methods. Borrowing psychological ideas that the measurement of IQ is dependent on age (see footnote 9), we mainly consider models trained with similar amounts of data to FLM-101B. As a milestone of LLM development, GPT-3 (175B) [3] proposed in-context learning for the first time. GLM-130B [80] is the first open English-Chinese bilingual LLM. Hence, we select them as baseline models. Both models are trained with 300~400 billion tokens, which is in the same range as ours. GPT-3 focuses on English, so it is not included in the Chinese-related evaluation (i.e., CLUE-IQ).
# 5.1 Symbolic Mapping Evaluation | 2309.03852#37 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 38 | Note that although our default setting is to run OPRO for 200 steps in prompt optimization, far fewer steps are needed if the goal is to find some outstanding instructions. An example is that the Figure 1(a) experiment found "Let's do the math!" at Step 6 with training accuracy 78.2, almost matching the "Take a deep breath and work on this problem step-by-step." found at the 107th step with training accuracy 80.2, at a point where the optimization curve is still trending upwards. This is because a leap in our optimization curve does not always correspond to a much better instruction being discovered; instead, it can be due to a large qualitative improvement of all 8 generated instructions in this step. The latter usually happens several steps after the former: after a much better instruction is discovered in one step, the meta-prompt gradually gets rid of worse instructions in later steps by generating instructions similar to the much-better one. The top instructions kept in the meta-prompt gradually improve in this procedure. At a point when the meta-prompt only triggers higher-quality instructions, the leap happens. | 2309.03409#38 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
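The optimization-curve behavior described in the preceding excerpt (a rising best-so-far accuracy, leaps, and shrinking spread among the instructions generated at each step) can be summarized from per-step scores. This is a small illustrative sketch with invented toy numbers, not the paper's evaluation code.

```python
from statistics import mean, pstdev

def step_summaries(step_scores, best_so_far):
    """Summarize each optimization step from the accuracies of the instructions
    generated at that step; these are the quantities the optimization curves plot."""
    summaries = []
    for step, scores in enumerate(step_scores, start=1):
        best_so_far = max(best_so_far, max(scores))
        summaries.append({
            "step": step,
            "best_so_far": best_so_far,      # the curve's upward trend and leaps
            "step_mean": mean(scores),
            "step_std": pstdev(scores),      # spread typically shrinks over time
        })
    return summaries

# toy example: 8 instruction accuracies generated at each of two steps
history = [[61.0, 58.2, 64.0, 59.5, 62.1, 60.0, 63.3, 57.8],
           [66.5, 65.0, 67.0, 64.2, 66.8, 65.5, 66.0, 64.9]]
print(step_summaries(history, best_so_far=57.1))
```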
2309.03852 | 38 | # 5.1 Symbolic Mapping Evaluation
An existing study [71] points out that classification tasks (e.g., document classification, sentiment classification) in textual forms often lack generalization. This is because they often come with very indicative and meaningful category labels. Such labels may literally appear in the raw training data or on popular websites, e.g., SemEval, IMDB [32], and Yelp 10, among others. This leads a model to over-fit the semantics of the labels instead of inferring them from the new context, while the latter is critical for measuring intelligence as well. Considering this, we use a symbolic mapping method to replace the original category labels with symbols that are unlikely to be seen in the training data. Hence, we can evaluate the LLMs' language understanding ability as well as the generalization abilities to a new context. Because the labels are from a given scope, we form our evaluation task as in-context learning with few-shot examples for each label.
Footnotes: 9. https://ocw.mit.edu/ans7870/9/9.00SC/MIT9_00SCF11_text.pdf, page 367. 10. https://www.yelp.com/dataset/documentation/main | 2309.03852#38 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
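As a concrete illustration of the symbolic mapping setup described above, the sketch below replaces category names with random symbol strings (in the spirit of labels such as <30mFC%4Z>) and assembles a few-shot prompt. The function names, the symbol alphabet, and the prompt layout are my own assumptions, not the authors' released evaluation pipeline.

```python
import random
import string

def random_symbol(rng, length=8):
    # a label token unlikely to appear in pre-training text
    alphabet = string.ascii_letters + string.digits + "%?@#$&"
    return "<" + "".join(rng.choice(alphabet) for _ in range(length)) + ">"

def symbolize(examples, labels, seed=0):
    """Replace category names with random symbols and build a few-shot prompt prefix.
    `examples` is a list of (text, label) pairs. Illustrative sketch only."""
    rng = random.Random(seed)
    mapping = {lab: random_symbol(rng) for lab in labels}
    shots = [f"{text}\nAnswer: {mapping[lab]}" for text, lab in examples]
    return mapping, "\n\n".join(shots) + "\n\n"

labels = ["entailment", "not entailment"]
examples = [("Premise: ... Hypothesis: ...", "entailment"),
            ("Premise: ... Hypothesis: ...", "not entailment")]
mapping, prompt_prefix = symbolize(examples, labels)
print(mapping)   # e.g. a random symbol per category, different on every seed
```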
2309.03409 | 39 | Finally, Figure 4(b) shows that the pre-trained PaLM 2-L can also serve as the optimizer LLM and improve its own prediction performance. Different from other optimizer LLMs that are instruction-tuned, the pre-trained PaLM 2-L performs better when the prompt is formatted in a few-shot manner. Therefore, we include two initial instructions to start the optimization: the empty instruction (with a training accuracy of 32.2) and "The answer is" (with a training accuracy of 33.3). See Figure 21 in Appendix C for the meta-prompt format. The generated instructions follow the same style as "The answer is": most instructions are also phrases suitable as the prefix of a sentence, like "Here you go:" (generated at Step 11 with training accuracy 61.3) and "Let's do it:" (generated at Step 13 with training accuracy 75.1).
Figure 4: Prompt optimization on GSM8K with (a) the text-bison scorer and the PaLM 2-L-IT optimizer, and (b) pre-trained PaLM 2-L as both scorer and optimizer. | 2309.03409#39 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
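For a pre-trained, non-instruction-tuned optimizer such as the setup just described, the meta-prompt is formatted as few-shot text/score pairs rather than as a natural-language task description. The sketch below assumes a very simple template of my own; the paper's exact format is given in its Appendix C.

```python
def fewshot_meta_prompt(scored_instructions):
    """Few-shot style meta-prompt: text/score pairs in ascending score order,
    ending with an open 'text:' slot for the optimizer LLM to continue."""
    pairs = sorted(scored_instructions, key=lambda p: p[1])
    body = "\n".join(f"text:\n{t}\nscore:\n{s:.1f}\n" for t, s in pairs)
    return body + "text:\n"   # the model completes the next candidate instruction

# the two starting points mentioned above: the empty string and "The answer is"
print(fewshot_meta_prompt([("", 32.2), ("The answer is", 33.3)]))
```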
2309.03852 | 39 | Symbolic Mapping Method
Instruction: Given the premise and hypothesis, determine the relationship between the two sentences.
Examples: Premise: Kozlowski and the company's former chief financial officer, Mark Swartz, were sentenced, on Monday, to up to 25 years in prison. Hypothesis: Kozlowski was sentenced, Monday, to serve up to 25 years in prison. Answer: <30mFC%4Z> ...... Premise: Note that SBB, CFF and FFS stand out for the main railway company, in German, French and Italian. Hypothesis: The French railway company is called SNCF. Answer: <?V9qP@Rx>
Prompt: Premise: Pibul Songgram was the pro-Japanese military dictator of Thailand during World War 2. Hypothesis: Pibul was the dictator of Thailand. Answer:
Traditional Direct Method
Instruction: Given the premise and hypothesis, determine the relationship between the two sentences.
Examples: Premise: Kozlowski and the company's former chief financial officer, Mark Swartz, were sentenced, on Monday, to up to 25 years in prison. Hypothesis: Kozlowski was sentenced, Monday, to serve up to 25 years in prison. Answer: entailment ...... Premise: Note that SBB, | 2309.03852#39 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 40 | Table 4 summarizes top instructions found on GSM8K with different scorer and optimizer LLMs. We observe that:
• The styles of instructions found by different optimizer LLMs vary a lot: PaLM 2-L-IT and text-bison ones are concise, while GPT ones are long and detailed.
• Although some top instructions contain the "step-by-step" phrase, most others achieve a comparable or better accuracy with different semantic meanings.
5.2.2 BBH
On BBH, the optimization starts from an empty string as the initial instruction by default. The instructions are placed at A_begin when the scorer is PaLM 2-L, and at Q_begin when the scorer is text-bison. For each task, we utilize a subset of 20% examples for prompt optimization, and the rest examples are for testing. We show experimental results on more variants of the instruction position and initialization in Appendix E. | 2309.03409#40 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
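The A_begin / Q_begin / Q_end placements referred to above determine where the instruction is inserted when a task example is scored. This is a minimal sketch that assumes simple "Q:/A:" templates of my own choosing; the exact templates used in the paper may differ.

```python
def build_scorer_prompt(question, instruction, position):
    """Assemble the prompt fed to the scorer LLM for one task example.
    Q_begin / Q_end place the instruction before or after the question;
    A_begin starts the answer with it."""
    if position == "Q_begin":
        return f"Q: {instruction}\n{question}\nA:"
    if position == "Q_end":
        return f"Q: {question}\n{instruction}\nA:"
    if position == "A_begin":
        return f"Q: {question}\nA: {instruction}"
    raise ValueError(f"unknown position: {position}")

print(build_scorer_prompt("What is 12 * 7?", "Let's think step by step.", "Q_begin"))
```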
2309.03852 | 40 | Hypothesis: Kozlowski was sentenced, Monday, to serve up to 25 years in prison. Answer: entailment
Examples: ...... Premise: Note that SBB, CFF and FFS stand out for the main railway company, in German, French and Italian. Hypothesis: The French railway company is called SNCF. Answer: not entailment
Prompt: Premise: Pibul Songgram was the pro-Japanese military dictator of Thailand during World War 2. Hypothesis: Pibul was the dictator of Thailand. Answer: | 2309.03852#40 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 41 | Figure 5 visualizes the per-task accuracy difference on all 23 BBH tasks compared to the instruction "Let's think step by step." (Kojima et al., 2022) and the empty instruction, and we present the concrete accuracies in Table 7 of Appendix E. We show that the instructions found by OPRO outperform "Let's think step by step." on almost all tasks by a large margin: our instructions outperform by over 5% on 19/23 tasks with the PaLM 2-L scorer, and on 15/23 tasks with the text-bison scorer. Our prompt optimization algorithm also improves instructions from the empty starting point by over 5% on most tasks: 20/23 with the PaLM 2-L scorer and 15/23 with the text-bison scorer.
Similar to GSM8K, we observe upward trends in optimization curves on almost all BBH tasks, as shown in Figure 6. See Figure 23 and 24 in Appendix D for more curves on other BBH tasks.
We next show some examples of instructions found through the course of optimization. On the task ruin_names, starting from the empty instruction (with 64.0 training accuracy), with the text-bison scorer and the PaLM 2-L-IT optimizer, the following instructions are generated: | 2309.03409#41 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
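The per-task comparison described above reduces to holding out a fraction of each task's examples for prompt optimization and plotting accuracy deltas against a baseline instruction. A small sketch with made-up task names and numbers, purely for illustration:

```python
import random

def split_task(examples, train_frac=0.2, seed=0):
    """Hold out a fraction of a task's examples for prompt optimization; test on the rest."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def accuracy_difference(acc_ours, acc_baseline):
    """Per-task accuracy deltas (ours minus baseline), as shown in the bar plots."""
    return {task: acc_ours[task] - acc_baseline[task] for task in acc_ours}

# toy numbers only, not results from the paper
print(accuracy_difference({"task_a": 80.0, "task_b": 75.0},
                          {"task_a": 70.0, "task_b": 60.0}))
```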
2309.03852 | 41 | Figure 3: An example of symbolic mapping. The main difference is that the symbolic mapping method replaces the original label with random strings. In this example, we use <30mFC%4Z> and <?V9qP@Rx> to replace entailment and not entailment, respectively.
# 5.1.1 Data Collection
We use the existing benchmark datasets (e.g., SuperGLUE [61], CLUE [74]) as the source and sample up to 300 instances. Then, we replace the original category labels with random strings. Figure 3 shows an example. In this case, the entailment category is replaced by random string <30mFC%4Z> while the not entailment category is replaced by <?V9qP@Rx>. This processing also mitigates the problem that these datasets may contaminate the LLM pre-training data, since both benchmarks are public with lots of reproductions. Table 6 presents the statistics and task types of the rebuilt datasets.
Table 6: Statistics for SuperGLUE-IQ and CLUE-IQ datasets. "WSD" stands for "Word Sense Disambiguation"; "SS" stands for "Sentence Similarity"; "KR" stands for "Keyword Recognition"; "coref." stands for "coreference resolution". | 2309.03852#41 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 42 | • "Consider the following when editing artist or movie names humorously:" at Step 1 with training accuracy 72.0;
• "When making humorous edits of artist or movie names, you can change one or more letters or even create puns by adding new words that sound similar." at Step 18 with training accuracy 80.0;
• "We can make humorous edits of artist/movie names by changing letters to create new words that are similar in sound but have different meanings. For example, The Police can be changed to The Polite, The Abyss can be changed to Toe Abyss, and Schindler's List can be changed to Schindler's Lost." at Step 38 with training accuracy 82.0.
[Figure 5 plot panels omitted; y-axis: accuracy difference. Panels: (a) PaLM 2-L scorer, ours minus "Let's think step by step."; (b) PaLM 2-L scorer, ours minus empty starting point; (c) text-bison scorer, ours minus "Let's think step by step."; (d) text-bison scorer, ours minus empty starting point.] | 2309.03409#42 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 42 |          BoolQ      WiC        RTE        WSC        AFQMC  CSL   OCNLI  CLUEWSC2020
Source             SuperGLUE  SuperGLUE  SuperGLUE  SuperGLUE  CLUE   CLUE  CLUE   CLUE
Samples            299        277        300        103        300    208   300    300
Task               QA         WSD        NLI        coref.     SS     KR    NLI    coref.
# 5.1.2 SuperGLUE-IQ
SuperGLUE is a benchmark dataset used in evaluating the classification ability of various models including LLMs. However, the data is publicly available and many websites have reproduced this dataset. As a result, it is inevitable that the models might have already been trained on it. Thus, we build a new dataset named SuperGLUE-IQ based on the original dataset. Since the answers for the test set of SuperGLUE are not publicly available, we use a validation set here. There are two rules for selecting the sub-tasks: (i) the number of instances exceeds 100; (ii) the classification categories are fixed sets. The building process is detailed in Section 5.1.1. Table 7 lists the performance of FLM-101B and the baselines.
Results. On the BoolQ, WiC, and RTE tasks, FLM-101B and GPT-3 perform at the same level, and both outperform GLM-130B. Specifically, GPT-3 and FLM-101B are more than 9 points better than GLM-130B | 2309.03852#42 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
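The two sub-task selection rules stated above (more than 100 instances, a fixed label set), together with the up-to-300-instance sampling from Section 5.1.1, can be written out directly. The benchmark layout assumed below is my own simplification, not the authors' data format.

```python
def select_subtasks(benchmark, cap=300):
    """Apply the stated rules for building the -IQ variants:
    (i) more than 100 instances, (ii) a fixed set of labels.
    `benchmark` maps task name -> {"instances": [...], "labels": set or None}."""
    selected = {}
    for name, task in benchmark.items():
        fixed_labels = bool(task["labels"])
        if len(task["instances"]) > 100 and fixed_labels:
            selected[name] = task["instances"][:cap]   # stand-in for sampling up to 300
    return selected

demo = {"RTE":  {"instances": list(range(300)), "labels": {"entailment", "not entailment"}},
        "tiny": {"instances": list(range(50)),  "labels": {"a", "b"}}}
print(list(select_subtasks(demo)))   # -> ['RTE']
```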
2309.03409 | 43 | Figure 5 panels (c) text-bison scorer, ours minus "Let's think step by step."; (d) text-bison scorer, ours minus empty starting point.
Figure 5: On 23 BBH tasks, the accuracy differences among instructions found by prompt optimization (with the PaLM 2-L-IT optimizer), "Let's think step by step.", and the empty string (optimization starting point).
Although the above instructions are semantically similar, a paraphrase by the optimizer LLM offers a notable accuracy improvement. We further highlight this observation in Section 5.2.3.
Below are some instructions generated when performing prompt optimization on temporal_sequences, starting from the empty instruction (with the training accuracy of 64.0):
• "To solve this problem, we need to first identify the time period when the person was not seen doing anything else. Then, we need to check if the place they went to was open during that time
[Figure panels omitted: (a) BBH ruin_names, (b) BBH temporal_sequences.] | 2309.03409#43 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 43 | Table 7: Performance on SuperGLUE-IQ of GPT-3, GLM-130B, and FLM-101B. The result of GPT-3 is evaluated by API. GLM-130B is evaluated with its open-sourced checkpoint.
Model     Cost (zettaFLOPs)  Average  BoolQ  WiC    RTE    WSC
GPT-3     376.41 (±53.77)    47.60    50.84  53.33  48.38  37.86
GLM-130B  210.80             48.19    40.13  48.67  47.65  56.31
FLM-101B  28.22              46.76    49.50  50.33  48.38  38.83
2309.03409 | 44 | Figure 6: Training accuracy curves of prompt optimization on BBH ruin_names and temporal_sequences with the text-bison scorer and the PaLM 2-L-IT optimizer. The optimizations start from the empty string. Panels: (a) BBH ruin_names; (b) BBH temporal_sequences.
period. If it was, then that is the time period when they could have gone to that place." at Step 2 with training accuracy 42.0;
• "To find the time period when a person could have gone to a place, identify the time periods when they were not seen doing anything else and the place was open. If there are multiple time periods that match these criteria, then the person could have gone to the place during any of these time periods." at Step 18 with training accuracy 54.0;
• "To determine the possible time period when a person went to a place, first identify all the time periods when the person was not seen doing anything else and the place was open. Then, rule out any time periods during which the person was seen doing something else. The remaining time periods are the possible times when the person could have gone to the place." at Step 41 with training accuracy 72.0.
2309.03852 | 44 | 130B on BoolQ. On WSC task, FLM-101B and GPT-3 perform comparably while both perform worse than GLM-130B with about an 18 points gap. The technical report of GLM-130B [80] shows that they use both the WSC and RTE datasets in training. It is interesting to observe that the performance of GLM-130B on the two tasks has such a difference. Since the original label is replaced by a random string, overfitting can be ruled out to a certain extent. We believe that the main reason lies in the structure of language models: GLM-130B contains a bidirectional encoder while FLM-101B and GPT-3 are uni-directional. This feature potentially makes GLM-130B perform better in English coreference resolution tasks, while poor in reasoning-related tasks (e.g., BoolQ). More importantly, the costs of the three models are very different. FLM-101B achieves a comparable performance with GPT-3 under about 1/13 of its computational cost.
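As a concrete illustration of the label-replacement idea mentioned above, the following minimal Python sketch maps original class labels to random strings before building few-shot prompts. This is an illustration only, not the authors' evaluation code; the label names and the random-string format are assumptions.

```python
import random
import string

def random_symbol(length: int = 6) -> str:
    # Draw a random string such as "XqT7ab" to stand in for a class label.
    return "".join(random.choices(string.ascii_letters + string.digits, k=length))

def symbolize_labels(examples, labels):
    # Map each original label (e.g., "entailment") to a fresh random string,
    # so a model cannot rely on memorized label words from pretraining data.
    mapping = {label: random_symbol() for label in labels}
    remapped = [(text, mapping[label]) for text, label in examples]
    return remapped, mapping

if __name__ == "__main__":
    # Hypothetical RTE-style examples used purely for demonstration.
    rte_like = [
        ("Premise: ... Hypothesis: ...", "entailment"),
        ("Premise: ... Hypothesis: ...", "not_entailment"),
    ]
    remapped, mapping = symbolize_labels(rte_like, ["entailment", "not_entailment"])
    print(mapping)
    for text, symbol in remapped:
        print(f"{text} -> {symbol}")
```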
# 5.1.3 CLUE-IQ
2309.03409 | 45 | Table 5 presents the best instructions generated on movie_recommendation, ruin_names, and temporal_sequences tasks with different combinations of the optimizer and the scorer LLMs. Again, different optimizer LLMs produce instructions of different styles. See Appendix E for results on more BBH tasks.
5.2.3 SEMANTICALLY SIMILAR INSTRUCTIONS MAY ACHIEVE DRASTICALLY DIFFERENT ACCURACIES
One challenge of prompt optimization is the sensitivity of model performance to subtle changes in the instruction. For example, with the PaLM 2-L scorer on the GSM8K test set, "Let's think step by step." achieves accuracy 71.8, "Let's solve the problem together." has accuracy 60.5, while the accuracy of "Let's work together to solve this problem step by step." is only 49.4, although it is the semantic combination of the two instructions above. This behavior increases both the variance across single-step instructions and the oscillation during optimization, and motivates us to generate multiple instructions at each step to improve the optimization stability.
5.2.4 TRANSFERABILITY OF FOUND INSTRUCTIONS
2309.03852 | 45 | # 5.1.3 CLUE-IQ
CLUE [74] is an open benchmark for Chinese NLP tasks. Similar to SuperGLUE-IQ, we build CLUE-IQ based on the CLUE dataset. Because GPT-3 is unable to handle Chinese well, here we compare FLM-101B with GLM-130B only. There are four tasks to be evaluated, including AFQMC, CSL, OCNLI, and CLUEWSC2020.11 Similar to SuperGLUE-IQ, we follow the same two rules to filter the original CLUE. Table 8 lists the performances of FLM-101B and GLM-130B.
Table 8: Performance on CLUE-IQ for GLM-130B and FLM-101B.
Model     Cost (zettaFLOPs)  Average  AFQMC  CSL    OCNLI  CLUEWSC2020
GLM-130B  210.80             39.96    33.33  53.85  34.0   38.67
FLM-101B  24.54              42.07    38.33  55.29  27.33  47.33
2309.03409 | 46 | 5.2.4 TRANSFERABILITY OF FOUND INSTRUCTIONS
We assess the transferability of found prompts to different datasets of the same domain, where we evaluate the top instructions found for GSM8K on two more math reasoning benchmarks MultiArith (Roy & Roth, 2016) and AQuA (Ling et al., 2017). Table 6 shows that our optimized prompts also outperform baseline prompts with different scorer LLMs on these two benchmarks.
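The transfer setup can be summarized with a small sketch: the instruction found on GSM8K is inserted at the Q_begin position of each test question, and exact-match accuracy is computed. This is a hedged illustration; the prompt template and `answer_fn` (the scorer-LLM call) are placeholders, not the paper's actual implementation.

```python
def q_begin_prompt(instruction: str, question: str) -> str:
    # Q_begin: the instruction is placed at the beginning of the question
    # (the exact template here is an assumption for illustration).
    return f"Q: {instruction}\n{question}\nA:"

def exact_match_accuracy(instruction, dataset, answer_fn):
    # dataset: iterable of (question, gold_answer) pairs, e.g. drawn from MultiArith or AQuA.
    # answer_fn: any callable that sends a prompt to the scorer LLM and returns its text output.
    correct, total = 0, 0
    for question, gold in dataset:
        prediction = answer_fn(q_begin_prompt(instruction, question))
        correct += int(prediction.strip() == str(gold).strip())
        total += 1
    return correct / total if total else 0.0

if __name__ == "__main__":
    toy = [("What is 3 + 4?", "7")]          # tiny stand-in dataset
    echo = lambda prompt: "7"                 # stand-in for a real scorer-LLM call
    print(exact_match_accuracy("Take a deep breath and work on this problem step-by-step.", toy, echo))
```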
5.3 ABLATION STUDIES
We use text-bison as the scorer and PaLM 2-L as the optimizer for all ablation studies. The tasks we evaluate are GSM8K (math reasoning) and BBH sports_understanding (non-math reasoning).
Meta-prompt design. The meta-prompt design is crucial in achieving good prompt optimization performance. We investigate the following core design choices:
Table 5: Top instructions with the highest accuracies found in prompt optimization on BBH movie_recommendation, ruin_names, and temporal_sequences.
2309.03852 | 46 | Results. On CLUE-IQ, our proposed FLM-101B achieves the best average performance of 42.07. Among the evaluated tasks, FLM-101B outperforms GLM-130B on AFQMC, CSL, and CLUEWSC2020. The results show that FLM-101B has good Chinese ability at the level of 100B parameters. Interestingly, FLM-101B performs better than GLM-130B on Chinese WSC, while worse than GLM-130B on English WSC. In addition, FLM-101B performs worse than GLM-130B on OCNLI. These results suggest that Chinese and English are different in nature and a model excelling in one language may not be good at both. Finally, from a cost-effectiveness perspective, FLM-101B achieves better performance in Chinese at about 12% of the training cost of the counterpart.
# 5.2 Rule Understanding Evaluation
2309.03409 | 47 | Scorer Optimizer Instruction position Instruction movie_recommendation PaLM 2-L PaLM 2-L-IT A_begin PaLM 2-L PaLM 2-L PaLM 2-L A_begin gpt-3.5-turbo A_begin Based on your input, I have analyzed the given movies in terms of genre, plot, tone, audience rating, year of release, director, cast, and reviews. I have also taken into account the given options. The movie that is most similar to the given movies in terms of all these factors is: The best film: Letâs uncover the perfect movie recommendation from the options provided, ensuring an exceptional cinematic experience together as we select the most captivating and satisfying choice that will keep us thoroughly engaged and immersed until the very end. text-bison PaLM 2-L-IT Q_begin What is the highest-rated movie similar to the given movies, with a similar IMDb rating and released in the same year? text-bison gpt-3.5-turbo Q_begin Based on the movie list provided, carefully consider your preferences and make a well-informed decision. ruin_names PaLM 2-L PaLM 2-L-IT A_begin Which is the funniest pun on the artist | 2309.03409#47 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
2309.03852 | 47 | # 5.2 Rule Understanding Evaluation
Symbolic mapping helps mitigate the negative effects of data overfitting. From a different perspective, we consider that understanding rules and executing them accordingly is a strong indication of reasoning capability. To this end, we design a rule understanding evaluation. Note that this test is different from reasoning based on the chain of thought: the former focuses on understanding simple rules (e.g., counting) and performing the right action in a closed setting, while the latter focuses on reasoning ability in an open setting (e.g., different valid reasons for the same conclusion). For example, "counting an increasing sequence of numbers" is a typical task for rule understanding evaluation, which can be zero-shot.
Details of Selected Tasks and Data. Counting (0-shot) is the simplest test method for rule understanding ability. Here, we build a bilingual dataset with 300 randomly generated items and report
11For the details of these tasks, please refer to the original work [74].
2309.03409 | 48 | and make a well-informed decision. ruin_names PaLM 2-L PaLM 2-L-IT A_begin Which is the funniest pun on the artist or movie name? PaLM 2-L PaLM 2-L PaLM 2-L A_begin gpt-3.5-turbo A_begin Answer for ruin: Prepare to have a side-splittingly funny time as we uncover the most clever and hilarious alternatives for these artist or movie names, challenging your wit to guess the correct one with a burst of creativity, humor, and imaginative twists! text-bison PaLM 2-L-IT Q_begin A humorous edit of an artist or movie name can be created by replacing one or more letters to form a new word or phrase that sounds similar but has a different meaning. The new word or phrase should be relevant to the original word, but it should also be a surprise, which makes the edit funny. For example, the artist or movie name "Rocky" can be changed to "Ricky," and "Schindlerâs List" can be changed to "Schindlerâs Lift." Be creative and have fun! text-bison gpt-3.5-turbo Q_begin Choose the option that offers the most clever and humorous | 2309.03409#48 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
2309.03852 | 48 | 11For the details of these tasks, please refer to the original work [74].
the results on 148 of them with English instructions. A typical example is "Let's count from 10010 to 10035: 10010, 10011, 10012,". String replacement (4-shots) is another task that examines the model's capacity to edit the text precisely following human intention. We build two sub-tasks: Replace-Word and Replace-Lowercase, each of which contains 300 instances. Each instance starts with a clear instruction: for the "Replace-Word" task, it is like "In the following sentence, replace the specified word with the target word. word to replace: **WQHF** target word: **DFBB**"; for the "Replace-Lowercase" task, it is like "For the following text, please modify all uppercase letters to lowercase". The counting range and words to replace are sampled with a uniform distribution. Table 9 shows the performance of our proposed FLM-101B against GPT-3 and GLM-130B on both counting and string replacement tasks.
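A minimal sketch of how such counting and Replace-Word instances could be generated is given below. It follows the instruction formats quoted above, but the word list, string lengths, and numeric ranges are illustrative assumptions rather than the authors' exact generator.

```python
import random

WORDS = "It was a bright cold day in April and the clocks were striking".split()

def counting_instance(lo=10_000, hi=10_050, shown=3):
    # e.g. "Let's count from 10010 to 10035: 10010, 10011, 10012,"
    start = random.randint(lo, hi - 30)
    end = start + random.randint(20, 30)
    prefix = ", ".join(str(start + i) for i in range(shown))
    prompt = f"Let's count from {start} to {end}: {prefix},"
    gold = ", ".join(str(n) for n in range(start + shown, end + 1))
    return prompt, gold

def replace_word_instance():
    # Instruction text follows the description above; the sentence itself is arbitrary filler.
    sentence_words = random.sample(WORDS, 6)
    source = random.choice(sentence_words)
    target = "".join(random.choices("ABCDEFGH", k=4))
    sentence = " ".join(sentence_words)
    prompt = ("In the following sentence, replace the specified word with the target word. "
              f"word to replace: **{source}** target word: **{target}**\n" + sentence)
    gold = " ".join(target if w == source else w for w in sentence_words)
    return prompt, gold

if __name__ == "__main__":
    print(counting_instance()[0])
    print(replace_word_instance()[0])
```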
Table 9: Performance of FLM-101B, GPT-3, and GLM-130B on rule understanding tasks.
2309.03409 | 49 | Lift." Be creative and have fun! text-bison gpt-3.5-turbo Q_begin Choose the option that offers the most clever and humorous alteration of the given artist or movie name. Let your creativity shine and select the answer that will undoubtedly bring a smile to your face! Make sure to think outside the box! temporal_sequences (no PaLM 2-L as scorer results because its training accuracy on empty string is 100.0) text-bison PaLM 2-L-IT Q_begin To determine the time period when a person went to a Acc 90.8 88.4 88.0 91.6 70.8 88.0 83.6 86.8 83.6 75.2 80.4 | 2309.03409#49 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
2309.03852 | 49 | Table 9: Performance of FLM-101B, GPT-3, and GLM-130B on rule understanding tasks.
Model     Average  Counting  Replace-Lowercase  Replace-Word
GPT-3     86.03    82.43     80.67              95.00
GLM-130B  71.49    60.81     69.67              84.00
FLM-101B  76.42    69.59     64.00              95.67
Results. On the counting task, FLM-101B achieves 69.59%, about 9 points better than GLM-130B. GPT-3 takes first place in Counting and Replace-Lowercase, and second place in Replace-Word, potentially because GPT-3 has the largest amount of English training data. This experiment shows that the strengths of each model vary; hence, future rule understanding evaluations should cover more scenarios. Finally, considering the cost of each model, the performance of FLM-101B is satisfactory.
# 5.3 Pattern Mining Evaluation
Pattern Mining test is common in IQ tests. In detail, it is the induction and deduction of the patterns emerging in a new context. In general, it is difficult even for humans and is frequently used in intelligence tests. Again, we face the problem that the same test data might have appeared in large quantities, so we also use replacement methods similar to Section 5.1 to alleviate this problem.
2309.03409 | 50 | Q_begin To determine the time period when a person went to a place, first identify all the time periods when the person's whereabouts are unknown. Then, rule out any time periods during which the person was seen doing something else or the place was closed. The remaining time periods are the possible times when the person could have gone to the place. 80.4
text-bison gpt-3.5-turbo Q_begin Identify the optimal time slot for the individual to engage in the mentioned location/activity considering the given sightings and waking up time, taking into account the opening and closing times of the location and the duration of each event. 53.6
Table 6: Transferability across datasets: accuracies of top instructions found for GSM8K on MultiArith and AQuA.
2309.03852 | 50 | Specifically, we build a benchmark with three tasks (i.e., Head & Tail, Full Repeating, and Head Slicing) for evaluation. Head & Tail is to add a head and a tail to the given input, which should be exactly the same as the ones in the given examples. Regarding Full Repeating, the input sequence should be fully repeated once. For the Head Slicing task, the model needs to return the first fixed number of characters of the input. The number can be inferred from the preceding examples. No instruction or clue is provided except the examples.
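A minimal sketch of how such pattern-mining instances could be constructed is shown below (the paper describes each task as 5-shot with only Input/Output examples and no instruction). The alphabet, string lengths, and sampling details are illustrative assumptions, not the authors' exact generator.

```python
import random
import string

def rand_str(n_min=3, n_max=6):
    # Random alphabetical input string, mirroring the character-based task design.
    return "".join(random.choices(string.ascii_letters, k=random.randint(n_min, n_max)))

def make_task(kind: str, head: str = "", tail: str = "", k: int = 2):
    # Returns a function mapping an input string to its expected output.
    if kind == "head_tail":       # wrap the input with a fixed head and tail
        return lambda s: f"{head}{s}{tail}"
    if kind == "full_repeating":  # repeat the whole input once
        return lambda s: s * 2
    if kind == "head_slicing":    # keep only the first k characters
        return lambda s: s[:k]
    raise ValueError(kind)

def build_prompt(rule, shots: int = 5):
    # Few-shot prompt: only Input/Output examples, no natural-language instruction.
    lines = []
    for _ in range(shots):
        x = rand_str()
        lines += [f"Input: {x}", f"Output: {rule(x)}"]
    query = rand_str()
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines), rule(query)

if __name__ == "__main__":
    rule = make_task("head_tail", head=rand_str(), tail=rand_str())
    prompt, gold = build_prompt(rule)
    print(prompt, "\n# expected:", gold)
```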
Figure 4: Examples of pattern mining evaluation. Panels: Head & Tail, Full Repeating, and Head Slicing; each panel shows few-shot Input/Output pairs followed by a final prompt Input whose Output is left blank.
2309.03409 | 51 | Table 6 rows (Scorer / Source / Instruction position / Instruction / MultiArith / AQuA):
Baselines:
PaLM 2-L / (Kojima et al., 2022) / A_begin / Let's think step by step. / 85.7 / 44.9
PaLM 2-L / (Zhou et al., 2022b) / A_begin / Let's work this out in a step by step way to be sure we have the right answer. / 72.8 / 48.4
PaLM 2-L / - / A_begin / Let's solve the problem. / 87.5 / 44.1
PaLM 2-L / - / A_begin / (empty string) / 69.3 / 37.8
text-bison / (Kojima et al., 2022) / Q_begin / Let's think step by step. / 92.5 / 31.9
text-bison / (Zhou et al., 2022b) / Q_begin / Let's work this out in a step by step way to be sure we have the right answer. / 93.7 / 32.3
text-bison / - / Q_begin / Let's solve the problem. / 85.5 / 29.9
text-bison / - / Q_begin / (empty string) / 82.2 / 33.5
Ours:
PaLM 2-L / PaLM 2-L-IT on GSM8K / A_begin / Take a deep breath and work on this problem step-by-step. / 95.3 / 54.3
2309.03852 | 51 | Figure 4: Examples of pattern mining evaluation.
Figure 4 shows examples of these tasks. We sample the input strings, heads, and tails from a uniform distribution. These tasks are actually the "alphabetical" versions of the list_functions sub-task of Big-Bench [53]. The original numerical version is so simple that most existing LLMs could achieve 90%+ accuracy. To improve the distinctiveness, we replace the numbers with characters. All these tasks require the model to discover the behavior patterns inside the given examples. Each task is 5-shot and contains 100 instances. Table 10 lists the experimental results of our proposed FLM-101B against GPT-3 and GLM-130B on pattern mining tasks.
Table 10: Performance of FLM-101B, GPT-3, and GLM-130B on pattern mining tasks.
Model     Average  Head & Tail  Full Repeating  Head Slicing
GPT-3     70.00    61.00        92.00           57.00
GLM-130B  53.00    38.00        70.00           51.00
FLM-101B  64.67    52.00        79.00           63.00
2309.03409 | 52 | Ours (Table 6, continued):
PaLM 2-L / PaLM 2-L-IT on GSM8K / A_begin / Take a deep breath and work on this problem step-by-step. / MultiArith 95.3 / AQuA 54.3
text-bison / PaLM 2-L-IT on GSM8K / Q_begin / Let's work together to solve math word problems! First, we will read and discuss the problem together to make sure we understand it. Then, we will work together to find the solution. I will give you hints and help you work through the problem if you get stuck. / MultiArith 96.8 / AQuA 37.8
2309.03852 | 52 | Results. On all three tasks, FLM-101B outperforms GLM-130B by a large margin. For the head & tail and full repeating tasks, FLM-101B is a few points behind GPT-3, but outperforms the latter on the head slicing task. Considering the computational cost, FLM-101B exhibits noticeable abilities in this area.
# 5.4 Anti-interference Evaluation
Anti-interference capability is critical for finding and utilizing information that is truly related to a specific goal, in an unseen and noisy context (Figure 5). We believe that in addition to generalization, anti-interference is also one of the important principles of AGI. For example, many LLMs will babble when given noisy cues. Another famous hard problem, the cocktail party problem in speech recognition [38], also suggests the importance of the anti-interference ability of intelligent agents. To this end, we conduct this anti-interference evaluation. Figure 5 shows two typical examples of this test.
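The multiple-key-retrieval style of test shown in Figure 5 can be sketched as follows. This is an illustrative generator only; the filler sentence and key format imitate the figure's example and are not the authors' released construction code.

```python
import random
import string

FILLER = "Here we go. There and back again. "

def random_key(n: int = 10) -> str:
    # Random alphanumeric pass key (format is an assumption).
    return "".join(random.choices(string.ascii_letters + string.digits, k=n))

def build_retrieval_prompt(num_keys: int = 2, filler_blocks: int = 4):
    # Hide a few pass keys inside repetitive, irrelevant text, then ask for one of them.
    keys = [random_key() for _ in range(num_keys)]
    parts = ["There is an important info hidden inside a lot of irrelevant text. "
             "Find it and memorize them. I will quiz you about the important information there."]
    for i, key in enumerate(keys, start=1):
        parts.append(FILLER * filler_blocks)
        parts.append(f"Pass key {i} is {key}. Remember it.")
    parts.append(FILLER * filler_blocks)
    asked = random.randint(1, num_keys)
    parts.append(f"The pass key {asked} I told you was")
    return "\n".join(parts), keys[asked - 1]

if __name__ == "__main__":
    prompt, answer = build_retrieval_prompt()
    print(prompt)
    print("# expected completion:", answer)
```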
# Anti-interference Evaluation
# Multiple Key Retrieval
2309.03409 | 53 | _
⢠The order of the previous instructions. We compare the following options: (1) from lowest to highest (our default setting); (2) from highest to lowest; (3) random. Figures 7(a) and 7(b) show that the default setting achieves better final accuracies and converges faster. One hypothesis is that the optimizer LLM output is affected more by the past instructions closer to the end of the meta-prompt. This is consistent with the recency bias observed in Zhao et al. (2021), which states that LLMs are more likely to generate tokens similar to the end of the prompt.
⢠The effect of instruction scores. In terms of how to present the accuracy scores, we compare three options: (1) rounding the accuracies to integers, which is equivalent to bucketizing the accuracy scores to 100 buckets (our default setting); (2) bucketizing the accuracies to 20 buckets; (3) not showing the accuracies, only showing the instructions in the ascending order. Figures 7(c) and 7(d) show that the accuracy scores assists the optimizer LLM in better understanding the quality difference among previous instructions, and thus the optimizer LLM proposes better new instructions that are similar to the best ones in the input optimization trajectory. | 2309.03409#53 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
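The ordering and score-presentation choices examined in the chunk above are easy to make concrete. Below is a minimal, hypothetical sketch (not the paper's actual implementation) of assembling an OPRO-style meta-prompt: previously generated instructions are shown from lowest to highest score, and accuracies are rounded into integer buckets before being shown to the optimizer LLM. All function and variable names here are illustrative assumptions.

```python
# Hypothetical sketch of meta-prompt assembly for OPRO-style prompt optimization.
# `trajectory` holds (instruction, accuracy) pairs collected so far; the exact
# meta-instruction wording differs from the paper's.

def bucketize(accuracy: float, num_buckets: int = 100) -> int:
    """Round an accuracy in [0, 1] into one of `num_buckets` integer buckets."""
    return round(accuracy * num_buckets)

def build_meta_prompt(trajectory, exemplars, max_shown: int = 20) -> str:
    # Keep the highest-scoring instructions and sort them ascending, so the best
    # ones sit closest to the end of the prompt (the recency-bias intuition).
    shown = sorted(trajectory, key=lambda pair: pair[1])[-max_shown:]
    lines = ["Below are previous instructions with their scores (higher is better):"]
    for instruction, accuracy in shown:
        lines.append(f"text: {instruction}\nscore: {bucketize(accuracy)}")
    lines.append("Here are examples from the task:")
    lines.extend(exemplars)
    lines.append("Write a new instruction that achieves a higher score.")
    return "\n\n".join(lines)

# Toy usage.
trajectory = [("Let's solve the problem.", 0.648), ("Let's do the math!", 0.782)]
exemplars = ["Q: Alice has 3 apples and buys 2 more. How many apples now?\nA: 5"]
print(build_meta_prompt(trajectory, exemplars))
```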
2309.03852 | 53 | # Anti-interference Evaluation
# Multiple Key Retrieval
[Figure 5 panel content: the Multiple Key Retrieval prompt hides pass keys (e.g., "Pass key 1 is ...") inside long runs of irrelevant filler text ("Here we go. There and back again.") and then asks the model to recall them; the Supporting Facts panel gives few-shot examples that track a person's location across distracting statements (e.g., "Daniel travelled to the bathroom. Q: Where is Daniel? A: bathroom") before posing a new question of the same form.]
# Figure 5: Examples of anti-interference evaluation. | 2309.03852#53 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 54 | ⢠The effect of exemplars. We compare three options: (1) showing 3 exemplars from the task (default); (2) showing 10 exemplars from the task; (3) no exemplars. Figures 7(e) and 7(f) show that presenting exemplars in the meta-prompt is critical, as it provides information on what the task looks like and helps the optimizer model phrase new instructions better. However, more exemplars do not necessarily improve the performance, as a few exemplars are usually sufficient to describe the task. In addition, including more exemplars results in a longer meta-prompt with a dominating exemplar part, which may distract the optimizer LLM from other important components like the optimization trajectory.
The number of generated instructions per step. Computing a mini-batch of gradients reduces the variance of a stochastic gradient descent procedure. Similarly, generating multiple instructions in each step improves the optimization stability with LLMs. On the other hand, to achieve better performance with a fixed budget for the number of instructions to evaluate, the number of per-step instructions should not be too large, so as to allow more optimization steps to incorporate richer information of past instructions with their accuracies. Taking both aspects into consideration, Figure 8
[Plot residue: accuracy vs. # steps; legend: ascending (default), descending, random] | 2309.03409#54 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
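To illustrate the per-step batch size discussed in the chunk above, here is a hedged sketch of the outer loop: each step samples several candidate instructions from the optimizer LLM, scores them, and adds them to the trajectory. `optimizer_llm`, `evaluate`, and `build_meta_prompt` are placeholder callables, not a real API.

```python
# Hypothetical outer loop: sample a batch of candidate instructions per step,
# score each on the training exemplars, and extend the optimization trajectory.

def optimize(optimizer_llm, evaluate, build_meta_prompt, exemplars,
             num_steps: int = 200, per_step: int = 8, temperature: float = 1.0):
    trajectory = []  # list of (instruction, accuracy) pairs
    for _ in range(num_steps):
        meta_prompt = build_meta_prompt(trajectory, exemplars)
        # Several instructions per step (8 by default here) reduce the variance
        # of each update, analogous to a mini-batch of gradients.
        candidates = [optimizer_llm(meta_prompt, temperature=temperature)
                      for _ in range(per_step)]
        trajectory.extend((c, evaluate(c)) for c in candidates)
    return max(trajectory, key=lambda pair: pair[1])
```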
2309.03852 | 54 | # Figure 5: Examples of anti-interference evaluation.
Selected Tasks and Data Collection. We conduct anti-interference evaluation on three task types: multiple key retrieval, single supporting fact tracking, and two supporting facts tracking. Multiple key retrieval is a kind of puzzle that hides some important information (referred to as keys) inside a lot of irrelevant text. If the anti-interference ability of LLMs is not good enough, they will output wrong or even meaningless words. Even if LLMs pass the first challenge, they may still fail when several keys act as mutually distracting noise. We collect a multiple key retrieval dataset in a format similar to that of [7], with at most 3 keys in each instance, exemplified in Figure 5. The single supporting fact tracking and two supporting facts tracking tasks test whether a model can find the chain of supporting facts needed to answer a question correctly when that chain is hidden inside a set of irrelevant statements. Two sub-tasks in the babi-20 [72] benchmark (qa1 and qa2; see footnote 12) are aligned with this setting. Thus, we
Footnote 12: We drop qa3 due to the long context length and extraordinary difficulty for all the models.
directly modify them in a generative format with 3 shots. We randomly sampled 300 questions for each of these three tasks. Table 11 shows the evaluation results on anti-interference. | 2309.03852#54 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
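As a rough illustration of the multiple key retrieval format described above, the sketch below hides a few randomly generated pass keys inside repeated filler text and records the gold answers; the filler sentence and key alphabet are assumptions for illustration, not the exact generation script behind the dataset.

```python
import random
import string

FILLER = "Here we go. There and back again. "

def make_instance(num_keys: int = 3, max_filler_blocks: int = 20, seed: int = 0) -> dict:
    """Build one multiple-key-retrieval puzzle: keys buried in irrelevant text."""
    rng = random.Random(seed)
    keys = ["".join(rng.choices(string.ascii_letters + string.digits, k=9))
            for _ in range(num_keys)]
    parts = ["There is important info hidden inside a lot of irrelevant text. "
             "Find it and memorize it. I will quiz you about it afterwards."]
    for i, key in enumerate(keys, start=1):
        parts.append(FILLER * rng.randint(2, max_filler_blocks))
        parts.append(f"Pass key {i} is {key}. Remember it.")
    parts.append(FILLER * max_filler_blocks)
    parts.append("What was pass key 1?")
    return {"prompt": "\n".join(parts), "answers": keys}

instance = make_instance()
print("gold answers:", instance["answers"])
```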
2309.03409 | 55 | [Figure 7 plot residue: accuracy vs. # steps for panels (a) instruction ordering (GSM8K), (b) instruction ordering (BBH sports_understanding), (c) instruction scores (GSM8K), (d) instruction scores (BBH sports_understanding), (e) # exemplars (GSM8K), (f) # exemplars (BBH sports_understanding); legends include ascending (default) / descending / random and 100 buckets (default) / 20 buckets / no scores.]
Figure 7: Ablation studies: how each part of the meta-prompt matters. The dots are the average values across 3 optimization repetitions, and the shaded regions represent standard deviations.
(a) GSM8K (b) BBH sports_understanding | 2309.03409#55 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 55 | directly modify them in a generative format with 3 shots. We randomly sampled 300 questions for each of these three tasks. Table 11 shows the evaluation results on anti-interference.
Table 11: Performance of FLM-101B, GPT-3, and GLM-130B on anti-interference evaluation.
Model      Average   Multiple Key Retrieval   Single Supporting Fact   Two Supporting Facts
GPT-3      70.11     92.67                    78.33                    39.33
GLM-130B   53.56     77.67                    56.33                    26.67
FLM-101B   60.11     89.00                    59.00                    32.33
Results. Among all the baselines for this evaluation, FLM-101B achieves the second-best passing rates of 89.00%, 59.00%, and 32.33%, respectively, which is an advantage of about 11%, 3%, and 6% compared to GLM-130B. Considering the computational cost, FLM-101B delivers exciting performance.
In conclusion, on our four additional evaluations inspired by IQ tests, FLM-101B outperforms GLM-130B and obtains results competitive with GPT-3 on some tasks at a much lower cost. Beyond the influence of training data, this advantage may stem from the growth strategy: the smaller models in the early stages refine a more efficient search space, and this effect persists as the model grows larger and gains generalization ability. | 2309.03852#55 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
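A quick sanity check on Table 11 above, assuming (as the numbers suggest) that the Average column is the unweighted mean of the three task scores:

```python
# Verify that each model's Average in Table 11 equals the mean of its three
# anti-interference task scores.
scores = {
    "GPT-3":    (92.67, 78.33, 39.33),   # reported average 70.11
    "GLM-130B": (77.67, 56.33, 26.67),   # reported average 53.56
    "FLM-101B": (89.00, 59.00, 32.33),   # reported average 60.11
}
for model, s in scores.items():
    print(f"{model}: mean = {sum(s) / len(s):.2f}")
```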
2309.03409 | 56 | (a) GSM8K (b) BBH sports_understanding
Figure 8: Ablation studies: the number of generated instructions in each step. The dots are the average values across 3 optimization repetitions, and the shaded regions represent standard deviations. The x-axis represents the total number of evaluated instructions through the optimization; e.g., we run 200 optimization steps when sampling 8 instructions in each step, run 400 steps when sampling 4 instructions in each step, etc.
(a) GSM8K, text-bison scorer, Q_begin (b) GSM8K, PaLM 2-L scorer, A_begin
Figure 9: Ablation studies: the initial instructions for prompt optimization. The dots are the average values across 3 optimization repetitions, and the shaded regions represent standard deviations.
compares the optimization performance of sampling 1 / 2 / 4 / 8 (default) / 16 instructions in each step, showing that sampling 8 instructions at each step overall achieves the best performance. | 2309.03409#56 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 56 | # 6 Related Work
Scaling Up Language Models to 100B. The burgeoning advancements in hardware and computational techniques in recent years [47; 52] have laid a robust groundwork for the expansion of language models. The benefits of scaling up LLMs include discernible advantages in language perplexity supported by studies on scaling laws [23; 18; 19; 77], as well as the emergent cognitive competencies in models [69; 4]. | 2309.03852#56 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 57 | compares the optimization performance of sampling 1 / 2 / 4 / 8 (default) / 16 instructions in each step, showing that sampling 8 instructions at each step overall achieves the best performance.
Starting point. We study the effect of different initial instructions for prompt optimization. Our default setting is to start from an empty string when the scorer LLM is (instruction-tuned) text-bison, and to start from either the empty string (on BBH tasks) or "Let's solve the problem." (on GSM8K) with instruction position A_begin when the scorer LLM is the (pre-trained) PaLM 2-L. Figure 9(a) shows the performance of text-bison as the scorer LLM with 3 options of initial instructions: (1) the empty string; (2) "Solve the following problem."; or (3) "Solve the following problem." and "Let's solve the problem.". We observe that the accuracies do not differ much with different starting points. Interestingly, the styles of the generated instructions are also similar. For example, most of the generated instructions starting from (1) and (2) contain the phrase "solve this problem", like "Let's work together to solve this problem." in Step 4 with training accuracy 64.8 from
| 2309.03409#57 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 57 | In the realm of 100+ billion parameters, examples of closed-source pre-trained LLMs include GPT-3 [3], Gopher [42], and Palm [1]. For closed-source models trained on Chinese data, notable mentions are Ernie 3.0 [63], Pangu-Σ [48], and InternLM [57]. Turning our attention to open-source variants, OPT [81] and BLOOM [49] are among the counterparts to GPT-3; the Llama [58; 59] series strategically operates on a slightly reduced scale (approximately 70B parameters) but amplifies the data to 2T. GLM-130B [80] is an open-source bilingual model with decent performance in both Chinese and English tasks. Nevertheless, the development trajectory and cost of GLM-130B remain largely inaccessible to many academic and industrial entities. FLM-101B is an exemplary paradigm for achieving comparable performance with a relatively small $100K budget. It is our aspiration that this model serves as a catalyst, expediting research advancements and making them more economically feasible in this domain. | 2309.03852#57 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03852 | 58 | Aligning with Humans. Despite the evidence that foundation LLMs present reasoning abilities in zero/few-shot learning and chain-of-thought prompting [3; 70], further refinement is needed to enhance their abilities to follow instructions [68] and align with human preferences [37; 36; 13; 2]. Supervised fine-tuning releases the potential of LLMs to imitate the instruction-following formats and provide human-like responses in dialogical and problem-solving contexts [66; 73; 34; 26]. Meanwhile, policy optimization methods [50; 43] lead LLMs to generate responses that maximize rewards congruent with human preferences, e.g., being helpful and harmless [12].
On the other hand, although these post-training techniques have proven effective and successful in industrial applications, the scaling laws regarding model sizes persist even after alignment with humans: larger models provide more factual and reasonable responses [16], as well as being better calibrated with their confidence probabilities [22]. We hereby release FLM-101B as a large foundation model, making it an accessible starting point for subsequent alignment studies. | 2309.03852#58 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 59 | Figure 9(b) presents the results of PaLM 2-L as the scorer LLM with the following options of initial instructions: (1) "Let's solve the problem."; (2) the empty string; or (3) "Let's think step by step.". We notice that the performance differs much more with different initial instructions, especially at the beginning of the optimization. Specifically, starting from (1) leads to better generated instructions than (2) in the first 30 steps, while the instructions optimized from both (1) and (2) are worse than (3) throughout. A similar observation holds when using PaLM 2-L as scorer and gpt-3.5-turbo as optimizer for BBH tasks, by comparing the results starting from the empty string (Appendix E.2) and from "Let's solve the problem." (Appendix E.3). Taking a closer look into the optimization process of (2), we find that although both "solve the problem" and "step by step" show up in generated instructions at Step 5, it takes the optimizer LLM more steps to get rid of worse instructions presented in the meta-prompt when starting from instructions with lower accuracies. Therefore, one direction for future work is to accelerate convergence from weaker starting points. | 2309.03409#59 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 59 | LLM Evaluation. Widely-used approaches to evaluate LLMs include natural language processing benchmarks [74; 61], commonsense knowledge benchmarks [9; 79; 27], and professional knowledge benchmarks [17; 20]. For chatbots after fine-tuning, automatic and semi-automatic playgrounds are developed to evaluate their human alignment abilities [83]. Although knowledge-oriented ability is
important, the results can be substantially impacted by training data and domains. To measure other classes of abilities, existing efforts like Big-Bench [53] and babi-20 [72] include some sub-tasks relevant to IQ tests, while other sub-tasks still depend more on NLP and knowledge. In this work, we add an additional range of evaluations in the IQ-test paradigm by re-organizing existing datasets as well as creating new ones where appropriate.
Model Growth. A line of existing work studies the progressive expansion of structures in training Transformer-like models [14; 51; 15; 6; 39; 62; 78]. To our knowledge, FLM-101B presents the first attempt to use a growth strategy to train LLMs at the 100B+ scale. For a more comprehensive summary, please refer to [78].
# 7 Conclusions and Future Work | 2309.03852#59 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 60 | Diversity per step. We evaluate the following temperatures of the optimizer LLM: {0.0, 0.5, 1.0 (default), 1.5, 2.0}. Figure 10 shows the default temperature 1.0 achieves the best performance. Specifically, optimizations with smaller temperatures (0.0 and 0.5) lack exploration and thus creativity, and the optimizer LLM often gets stuck at the same instruction for tens of steps, resulting in flat optimization curves. On the other hand, with larger temperatures (1.5 and 2.0), the optimizer LLM more often ignores the trajectory of previous instructions presented in the meta-prompt and thus lacks exploitation, therefore the optimization curve does not have a steady upward trend. | 2309.03409#60 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
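The exploration and exploitation behavior tied to sampling temperature in the chunk above can be illustrated directly: dividing logits by the temperature before a softmax sharpens the distribution at low temperatures and flattens it at high ones. The logits below are made-up example values, not numbers from the paper.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature, then apply a numerically stable softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5, 0.1]
for t in (0.5, 1.0, 2.0):
    print(t, [round(p, 3) for p in softmax_with_temperature(logits, t)])
# Low temperature concentrates mass on the top candidate (exploitation);
# high temperature spreads it out (exploration).
```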
2309.03852 | 60 | # 7 Conclusions and Future Work
In this paper, we introduce FLM-101B, an open-source LLM that is successfully trained from scratch within a $100,000 budget. The key idea of reducing the training cost of FLM-101B is to utilize the growth strategy to break through the fixed number of model parameters. To fairly evaluate LLMs, we conduct a set of evaluations inspired by IQ tests. We believe that along this pathway, better IQ evaluation methods will continue to emerge in future studies. Experimental results show that FLM-101B outperforms strong baseline models under the same computational cost.
The power of LLMs is exciting. We believe that LLMs are one of the promising technical paths toward AGI. For the sustainable development of LLMs, an effective path may be to construct a basic LLM with strong reasoning capabilities but without a large amount of knowledge (to save cost), and then expand the LLM's knowledge in different domains to better support applications. In addition, our exploration of the growth strategy and training stability may benefit future attempts to scale LLMs even further, e.g., beyond 1T parameters.
# Acknowledgments | 2309.03852#60 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 61 | Comparison with one-step instruction generation. Our current iterative procedure runs for multiple steps and generates a new batch of solutions in each step. To validate the importance of leveraging the optimization trajectory for generating new prompts, we compare to a baseline that generates all instructions in a single step without entering into the optimization procedure. We compare these two approaches on GSM8K and BBH sports_understanding with the PaLM 2-L-IT optimizer. For GSM8K the scorer LLM is pre-trained PaLM 2-L and the initial instruction is "Let's solve the problem", and for BBH sports_understanding the scorer LLM is text-bison and the initial instruction is the empty string. The baseline generates 50 instructions in a single step, thus its meta-prompt only includes task exemplars, the initial instruction with its accuracy, and the same meta-instructions as our full meta-prompt for performing optimization. All the other hyperparameters remain the same. | 2309.03409#61 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
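For contrast with the iterative loop sketched earlier, the one-step baseline described above can be sketched as follows, again with placeholder callables rather than a real API: the meta-prompt is built once from only the initial instruction and the task exemplars, and all candidate instructions are sampled from it in a single shot.

```python
# Hypothetical sketch of the one-step baseline: no optimization trajectory,
# one meta-prompt, all candidates sampled at once.

def one_step_baseline(optimizer_llm, evaluate, build_meta_prompt, exemplars,
                      initial_instruction: str, num_candidates: int = 50):
    trajectory = [(initial_instruction, evaluate(initial_instruction))]
    meta_prompt = build_meta_prompt(trajectory, exemplars)  # built only once
    candidates = [optimizer_llm(meta_prompt, temperature=1.0)
                  for _ in range(num_candidates)]
    scored = trajectory + [(c, evaluate(c)) for c in candidates]
    return max(scored, key=lambda pair: pair[1])
```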
2309.03852 | 61 | # Acknowledgments
This work is supported by the National Key R&D Program of China (2022ZD0116300) and the National Science Foundation of China (NSFC No. 62106249). We would like to thank Hanxiao Qu, Yan Tian, Xigang Cao, Xiaolong Zhang, Kailong Xie and Conghui Guo for their help with computational resources, Quanyue Ma, Hanyu Zhao, Yihui Guo and Jiahong Leng for their help with data, and all other colleagues' strong support for this project.
# References | 2309.03852#61 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 62 | Our results show that this one-step instruction generation performs much worse than our optimization approach. Specifically: (1) On GSM8K, the best instruction among all 50 is still "Let's solve the problem", with a 64.4 training accuracy and a 60.8 test accuracy. On the other hand, our approach (corresponding to Figure 1(a) in the main paper) found "Let's do the math!" with a 78.2 training accuracy and a 76.3 test accuracy at the 5th step by generating 8 instructions at each step. (2)
[Figure 11 plot residue: accuracy vs. # steps, with training and validation curves.]
(a) BBH snarks, PaLM 2-L as scorer, PaLM 2-L-IT as optimizer, starting from "Let's solve the problem."
(b) BBH sports_understanding, text-bison as scorer, gpt-3.5-turbo as optimizer, starting from the empty string | 2309.03409#62 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 62 | # References
[1] Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernández Ábrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A. Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, et al. Palm 2 technical report. CoRR, abs/2305.10403, 2023. | 2309.03852#62 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 63 | (b) BBH sports_understanding, text-bison as scorer, gpt-3.5-turbo as optimizer, starting from the empty string
Figure 11: Overfitting analysis. The exemplars are split into 1/3 training, 1/3 validation, and 1/3 test. We compute the validation accuracy every 3 steps. The training/validation dots are the average training/validation accuracies across 3 optimization repetitions, respectively, and the shaded regions represent standard deviations.
Similarly, on BBH sports_understanding, the best instruction among all 50 achieved an 84.0 training accuracy and an 80.0 test accuracy. This is again worse than the instruction found by our approach at Step 4, which achieved an 88.0 training accuracy and an 84.5 test accuracy.
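The split-and-track protocol from the Figure 11 caption can be written out in a few lines. The following is a minimal sketch under stated assumptions rather than the released OPRO code: score_fn(instruction, examples) is a hypothetical stand-in for the scorer-LLM evaluation, and per_step_instructions is assumed to hold the best instruction found by each optimization repetition after each step.

```python
# Minimal sketch of the Figure 11 protocol (assumptions noted above): split the
# exemplars 1/3 train / 1/3 validation / 1/3 test, then compute the validation
# accuracy every 3 steps, averaged over the optimization repetitions.
import random
from statistics import mean


def split_exemplars(exemplars, seed=0):
    """Shuffle and split exemplars into 1/3 training, 1/3 validation, 1/3 test."""
    items = list(exemplars)
    random.Random(seed).shuffle(items)
    third = len(items) // 3
    return items[:third], items[third:2 * third], items[2 * third:]


def overfitting_curve(per_step_instructions, train_set, val_set, score_fn):
    """per_step_instructions[rep][step] is the best instruction of repetition
    `rep` after optimization step `step` (0-indexed); returns (step, train, val)."""
    num_steps = len(per_step_instructions[0])
    curve = []
    for step in range(2, num_steps, 3):  # validation accuracy every 3 steps
        best = [run[step] for run in per_step_instructions]
        curve.append((step + 1,
                      mean(score_fn(ins, train_set) for ins in best),
                      mean(score_fn(ins, val_set) for ins in best)))
    return curve


if __name__ == "__main__":
    def dummy_score(instruction, examples):  # placeholder scorer for illustration
        return random.random()

    train, val, test = split_exemplars(range(300))
    runs = [[f"instruction-{r}-{s}" for s in range(9)] for r in range(3)]
    print(overfitting_curve(runs, train, val, dummy_score))
```

With a real scorer and real optimization runs plugged in, the resulting (step, train, val) points correspond to the dots plotted in Figure 11.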
5.4 OVERFITTING ANALYSIS IN PROMPT OPTIMIZATION
For simplicity, we do not set aside a validation set in our default setting of prompt optimization. We made this decision based on experiments in which a validation set is present. | 2309.03409#63 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 63 | [2] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback. CoRR, abs/2204.05862, 2022.
[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. | 2309.03852#63 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |
2309.03409 | 64 | For simplicity, we do not set aside a validation set in our default setting of prompt optimization. We made this decision based on experiments in which a validation set is present.
Overfitting may result in training accuracy being much higher than the validation/test accuracy. It is difficult to avoid overfitting, but overfitting is less harmful when each candidate solution (natural language instruction in the prompt optimization context) overfits to a similar extent. In this case, a higher training accuracy solution still achieves a higher validation/test accuracy, and one can adopt solutions with the highest training accuracies as the final result. Figure 11 shows this is the case for OPRO in prompt optimization: when setting aside a validation set with the same size as the training set, the validation accuracy curves trend up and down alongside the training curves in both prompt optimization settings.
Of course, overfitting still occurs in the instructions found by our prompt optimization: in Tables 7 and 10, our training accuracies are often 5%-20% higher than our test accuracies, even though our test and overall accuracies are still mostly higher than those of the human-written counterparts. Setting aside a larger training set and optimizing for fewer steps (early stopping) may help reduce overfitting.
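The selection rule implied by this analysis — adopt the candidate with the highest training accuracy, and only afterwards look at the test accuracy — can be sketched as follows. This is an illustrative sketch, not the paper's implementation; score_fn(instruction, examples) is again a hypothetical stand-in for the scorer-LLM evaluation.

```python
# Minimal sketch of selecting the final instruction by training accuracy alone
# (no validation set), then reporting the train-test gap discussed above.
# `score_fn(instruction, examples)` is a hypothetical scorer stand-in.
def select_final_instruction(candidates, train_set, test_set, score_fn):
    scored = [(score_fn(ins, train_set), ins) for ins in candidates]
    train_acc, best = max(scored, key=lambda pair: pair[0])  # highest training accuracy wins
    test_acc = score_fn(best, test_set)  # the test set never influences selection
    return best, train_acc, test_acc, train_acc - test_acc


if __name__ == "__main__":
    def dummy_score(instruction, examples):  # placeholder scorer for illustration
        return 0.88 if "step by step" in instruction else 0.60

    candidates = ["Let's solve the problem.",
                  "Let's work this out step by step."]
    print(select_final_instruction(candidates, [], [], dummy_score))
```

An early-stopping variant would simply cap the number of optimization steps before applying this selection, in line with the suggestion above about optimizing for fewer steps.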
5.5 COMPARISON WITH EVOPROMPT | 2309.03409#64 | Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at
https://github.com/google-deepmind/opro. | http://arxiv.org/pdf/2309.03409 | Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen | cs.LG, cs.AI, cs.CL | 42 pages, 26 figures, 15 tables. Code at
https://github.com/google-deepmind/opro | null | cs.LG | 20230907 | 20231207 | [
{
"id": "2205.12548"
},
{
"id": "2104.08786"
},
{
"id": "2302.12170"
},
{
"id": "2307.04721"
},
{
"id": "2302.04761"
},
{
"id": "2305.10403"
},
{
"id": "2309.16797"
},
{
"id": "2304.03262"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "2210.17041"
},
{
"id": "2206.08896"
},
{
"id": "2305.17126"
},
{
"id": "2203.07281"
},
{
"id": "2302.03668"
},
{
"id": "2103.10385"
},
{
"id": "2304.12244"
},
{
"id": "2309.08532"
},
{
"id": "2305.03495"
},
{
"id": "2302.14838"
},
{
"id": "2211.01910"
},
{
"id": "2010.15980"
},
{
"id": "2203.11171"
},
{
"id": "2306.13588"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "1608.01413"
},
{
"id": "2209.07686"
},
{
"id": "2012.15723"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "1705.04146"
},
{
"id": "2305.16291"
},
{
"id": "2306.09896"
},
{
"id": "2104.06599"
},
{
"id": "2306.14308"
},
{
"id": "2306.03082"
},
{
"id": "2302.07459"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2303.16749"
},
{
"id": "2303.11366"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2104.08691"
},
{
"id": "2303.03846"
},
{
"id": "2101.00190"
}
] |
2309.03852 | 64 | [4] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR, abs/2303.12712, 2023.
[5] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie. A survey on evaluation of large language models. CoRR, abs/2307.03109, 2023. | 2309.03852#64 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. | http://arxiv.org/pdf/2309.03852 | Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang | cs.CL, cs.AI | null | null | cs.CL | 20230907 | 20230917 | [
{
"id": "2306.15595"
},
{
"id": "1502.05698"
}
] |