Dataset fields:
- id: string (12 to 15 characters)
- title: string (8 to 162 characters)
- content: string (1 to 17.6k characters)
- prechunk_id: string (0 to 15 characters)
- postchunk_id: string (0 to 15 characters)
- arxiv_id: string (10 characters)
- references: sequence (length 1)
2309.12284#20
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Different from forward reasoning, which generates explicit intermediate steps towards the final answer, backward reasoning starts with the answer and generates multiple reasoning steps to predict the masked number. Representative backward reasoning methods include Self-Verification [68] and FOBAR [28]. In Self-Verification (SV) [68], the question with the answer is first rewritten into a declarative statement, e.g., "How much did he pay?" (with the answer 110) is rewritten into "He paid $110". Then, a question asking for the value of x is appended, e.g., "What is the value of unknown variable x?"
2309.12284#19
2309.12284#21
2309.12284
[ "2302.13971" ]
2309.12284#21
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Example 3.3 gives an augmented example. We collect the new questions and their generated reasoning paths with correct answers as the augmented data: $\mathcal{D}_{\mathrm{SV}} = \{(\hat{q}_i^{(j)}, \hat{r}_i^{(j)}, \hat{a}_i^{(j)}) : \hat{a}_i^{(j)} = a_i^\star;\ i = 1, \dots, N_q;\ j = 1, \dots, K_{\mathrm{SV}}\}$. (3)

# Example 3.3: Self-Verification [68] Question

Question:
2309.12284#20
2309.12284#22
2309.12284
[ "2302.13971" ]
2309.12284#22
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. He paid 110. What is the value of unknown variable x? Answer: To solve this problem, we need to determine the value of x, which represents the number of packs of beef that James bought. Each pack of beef weighs 4 pounds and costs $5.50 per pound. The total amount James paid is $110. We can set up the equation as follows: Number of packs of beef * Weight per pack * Price per pound = Total amount paid; x * 4 * $5.50 = $110; ... The value of x is 5.
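To make the Self-Verification construction above concrete, the following is a minimal Python sketch (not the authors' code) of how an SV-style backward question could be assembled; `rewrite_to_statement` is a hypothetical stand-in for the LLM rewriting step (the corresponding prompt is given in Appendix A.2).

```python
def build_sv_question(masked_question_body: str,
                      final_question: str,
                      answer: str,
                      rewrite_to_statement) -> str:
    """Assemble a Self-Verification style backward question.

    masked_question_body: the original conditions with one number replaced by 'x'
    final_question:       the original interrogative sentence, e.g. "How much did he pay?"
    answer:               the original gold answer, e.g. "110"
    rewrite_to_statement: callable turning (final_question, answer) into a declarative
                          sentence; in the paper this step is done with GPT-3.5-Turbo.
    """
    statement = rewrite_to_statement(final_question, answer)  # e.g. "He paid 110."
    return f"{masked_question_body} {statement} What is the value of unknown variable x?"

# Illustrative usage with a simple stub in place of the LLM call:
example = build_sv_question(
    masked_question_body=("James buys x packs of beef that are 4 pounds each. "
                          "The price of beef is $5.50 per pound."),
    final_question="How much did he pay?",
    answer="110",
    rewrite_to_statement=lambda q, a: f"He paid {a}.",
)
print(example)
```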
2309.12284#21
2309.12284#23
2309.12284
[ "2302.13971" ]
2309.12284#23
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Self-Verification needs to rewrite the question with answer into a declarative statement, which is challenging for complex questions. To address this issue, FOBAR [28] proposes to directly append the answer to the question, i.e., "If we know the answer to the above question is {$a_i^\star$}, what is the value of unknown variable x?" Example 3.4 shows an example. We collect the new questions along with their correct answers as our augmented data: $\mathcal{D}_{\mathrm{FOBAR}} = \{(\bar{q}_i^{(j)}, \bar{r}_i^{(j)}, \bar{a}_i^{(j)}) : \bar{a}_i^{(j)} = a_i^\star;\ i = 1, \dots, N_q;\ j = 1, \dots, K_{\mathrm{FOBAR}}\}$. (4)

# Example 3.4: FOBAR [28] Question

Question: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?
2309.12284#22
2309.12284#24
2309.12284
[ "2302.13971" ]
2309.12284#24
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
If we know the answer to the above question is 110, what is the value of unknown variable x? Answer: James buys x packs of beef that are 4 pounds each, so he buys a total of 4x pounds of beef. The price of beef is $5.50 per pound, so the total cost of the beef is 5.50 * 4x = 22x. We are given that the total cost is $110, so we can write: 22x = 110. Dividing both sides by 22, we get: x = 5. The value of x is 5.
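Since FOBAR only appends a fixed template to the number-masked question, the construction is a pure string operation; a minimal sketch using the example above (not the authors' code):

```python
FOBAR_SUFFIX = ("If we know the answer to the above question is {answer}, "
                "what is the value of unknown variable x?")

def build_fobar_question(masked_question: str, answer: str) -> str:
    """Append the FOBAR template to a question whose target number is masked as x."""
    return f"{masked_question} {FOBAR_SUFFIX.format(answer=answer)}"

print(build_fobar_question(
    "James buys x packs of beef that are 4 pounds each. "
    "The price of beef is $5.50 per pound. How much did he pay?",
    "110",
))
```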
2309.12284#23
2309.12284#25
2309.12284
[ "2302.13971" ]
2309.12284#25
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
3.4 FINETUNING OBJECTIVE FUNCTIONS

We merge all the augmented data, including answer-augmented data and bootstrapped questions (Rephrasing, Self-Verification, FOBAR) as: $\mathcal{D}_{\mathrm{MetaMathQA}} = \mathcal{D}_{\mathrm{AnsAug}} \cup \mathcal{D}_{\mathrm{rephrase}} \cup \mathcal{D}_{\mathrm{SV}} \cup \mathcal{D}_{\mathrm{FOBAR}}$. (5) We finetune an LLM (parameterized by $\theta$) on $\mathcal{D}_{\mathrm{MetaMathQA}}$ to obtain the MetaMath model by maximizing the log-likelihood of the reasoning path conditioned on the question, i.e., $\mathcal{L}(\theta) = \sum_{(q, r, a) \in \mathcal{D}_{\mathrm{MetaMathQA}}} \log P(r \mid q; \theta)$. (6) Although we only consider LLaMA-2 here, MetaMathQA can also be used to finetune other LLMs.
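As an illustration of the objective in Eq. (6), the sketch below (assuming a Hugging Face style causal LM; this is not the authors' training code) masks the question tokens so the cross-entropy loss is computed only on the reasoning-path tokens, i.e., it maximizes log P(r | q; θ).

```python
import torch

def response_only_labels(input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    """input_ids: (seq_len,) token ids of [question ; reasoning path].
    Question tokens are set to -100 so the loss ignores them."""
    labels = input_ids.clone()
    labels[:prompt_len] = -100
    return labels

# Usage with a causal LM and tokenizer (names are placeholders):
# enc = tokenizer(question + reasoning_path, return_tensors="pt")
# prompt_len = len(tokenizer(question)["input_ids"])
# labels = response_only_labels(enc["input_ids"][0], prompt_len).unsqueeze(0)
# loss = model(input_ids=enc["input_ids"], labels=labels).loss  # mean -log P(r | q) per token
```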
2309.12284#24
2309.12284#26
2309.12284
[ "2302.13971" ]
2309.12284#26
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
| Training data | Method | GSM8K | MATH |
| --- | --- | --- | --- |
| MetaMathQA-GSM8K | SFT [62] | 41.6 | 3.0 |
| MetaMathQA-GSM8K | MetaMath (AnsAug) | 59.6 | 4.4 |
| MetaMathQA-GSM8K | MetaMath (Rephrase) | 59.7 | 4.4 |
| MetaMathQA-GSM8K | MetaMath (AnsAug + Rephrase) | 60.6 | 4.4 |
| MetaMathQA-GSM8K | MetaMath (AnsAug + Rephrase + SV + FOBAR) | 64.4 | 5.7 |
| MetaMathQA-MATH | SFT [62] | 13.8 | 4.7 |
| MetaMathQA-MATH | MetaMath (AnsAug) | 28.4 | 12.9 |
| MetaMathQA-MATH | MetaMath (Rephrase) | 30.4 | 12.4 |
| MetaMathQA-MATH | MetaMath (AnsAug + Rephrase) | 29.1 | 15.3 |
| MetaMathQA-MATH | MetaMath (AnsAug + Rephrase + SV + FOBAR) | 34.6 | 17.7 |

Table 1: Effect of different question augmentations with LLaMA-2-7B finetuned on GSM8K or MATH.

4 EXPERIMENTS AND RESULTS

4.1 EXPERIMENTAL SETUP

| Dataset | AnsAug | Rephrasing | SV | FOBAR | Total |
| --- | --- | --- | --- | --- | --- |
| MetaMathQA-GSM8K | 80K | 80K | 40K | 40K | 240K |
| MetaMathQA-MATH | 75K | 50K | 15K | 15K | 155K |
| MetaMathQA | 155K | 130K | 55K | 55K | 395K |

Table 2: Number of samples in the proposed MetaMathQA.

Datasets. We use two popular mathematical reasoning benchmarks: (i) GSM8K [12] is a dataset consisting of high-quality grade school math problems, containing 7,473 training samples and 1,319 testing samples; and (ii) the MATH [21] dataset consists of high school math competition problems that span seven subjects including Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus. It contains 7,500 and 5,000 samples for training and testing, respectively. Questions in GSM8K [12] take between 2 and 8 steps to reach the answer, while MATH is much more challenging.

Models. We use the current state-of-the-art open-source model LLaMA-2 [62], including three different parameter sizes: 7B, 13B, and 70B, as the base model for fine-tuning. GPT-3.5-Turbo is used for rephrasing questions as well as generating answers in all four augmentations, where the temperature is set to 0.7 as in [66].
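For reference, a single rephrasing (or answer-generation) request to GPT-3.5-Turbo at temperature 0.7 could be issued as sketched below; this assumes the openai Python client (v1-style) and is not the authors' exact pipeline.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rephrase(question: str, few_shot_prompt: str) -> str:
    """few_shot_prompt is the rephrasing prompt shown in Appendix A.1,
    with the placeholder {Q} to be replaced by the question to rephrase."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": few_shot_prompt.replace("{Q}", question)}],
        temperature=0.7,
    )
    return resp.choices[0].message.content
```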
2309.12284#25
2309.12284#27
2309.12284
[ "2302.13971" ]
2309.12284#27
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
The LLaMA-2-7B and LLaMA-2-13B are trained by fully fine-tuning. LLaMA-2-70B is finetuned by QLoRA [14] for computational efficiency. More experimental details can be seen in Appendix A.2.

Baselines. The proposed methods are compared with (i) closed-source models such as GPT-3.5-Turbo [47] and PaLM [11]; (ii) open-source models such as LLaMA-1 [61] and LLaMA-2 [62]; (iii) Supervised Fine-Tuning (SFT), which uses the training set of the original GSM8K or MATH datasets; (iv) Rejection sampling Fine-Tuning (RFT) [69], which generates and collects correct reasoning paths as augmented data for fine-tuning; and (v) WizardMath [38], which generates samples and trains two reward models using ChatGPT¹ to select samples for fine-tuning.

Diversity Gain. We use the diversity gain [5] to measure to what extent a new dataset added to a basic dataset can improve the overall data diversity. For a base dataset $\mathcal{D}_{\mathrm{base}} = \{x_i = (q_i, r_i, a_i)\}_{i=1}^{N}$ with $N$ samples and a new dataset $\mathcal{D}_{\mathrm{new}} = \{x_i = (q_i, r_i, a_i)\}_{i=1}^{M}$ with $M$ samples, the diversity gain of $\mathcal{D}_{\mathrm{new}}$ relative to $\mathcal{D}_{\mathrm{base}}$ is defined as $d_{\mathrm{gain}} = \frac{1}{M} \sum_{x_i \in \mathcal{D}_{\mathrm{new}}} \min_{x_j \in \mathcal{D}_{\mathrm{base}}} \|f(x_i) - f(x_j)\|_2^2$, where $f$ is the feature extractor and we use the OpenAI Embedding API text-embedding-ada-002 for feature extraction (a short computation sketch is given below). For Figure 2, we change the data size of the base data and select a fixed set of 20K new data points that the model has not encountered to form $\mathcal{D}_{\mathrm{new}}$.

4.2 RESULTS ON GSM8K AND MATH

Table 2 illustrates the detailed description of our MetaMathQA collection and Table 3 shows the testing accuracy on GSM8K and MATH. As can be seen, for open-source models with 1-10B parameters, MetaMath achieves the state-of-the-art performance.
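A minimal sketch of the diversity-gain computation defined in Section 4.1 above, assuming the samples have already been embedded with the feature extractor f (the paper uses text-embedding-ada-002); this is not the authors' implementation.

```python
import numpy as np

def diversity_gain(base_emb: np.ndarray, new_emb: np.ndarray) -> float:
    """base_emb: (N, d) embeddings of D_base; new_emb: (M, d) embeddings of D_new.
    Returns the mean, over new samples, of the squared L2 distance to the nearest base sample."""
    gains = []
    for x in new_emb:
        d2 = np.sum((base_emb - x) ** 2, axis=1)  # squared distances to every base sample
        gains.append(d2.min())
    return float(np.mean(gains))
```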
2309.12284#26
2309.12284#28
2309.12284
[ "2302.13971" ]
2309.12284#28
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Compared to the previous best LLM, MetaMath achieves a large improvement of 11.6% on GSM8K and 9.1% on MATH in testing accuracy, showing that finetuning on our MetaMathQA data is effective.

¹ https://openai.com/

As for LLMs with 11-50B parameters, the proposed MetaMath performs the best. Particularly, on both GSM8K and MATH, MetaMath achieves higher accuracy than SFT, RFT, and WizardMath by a large margin (+7%), demonstrating the effectiveness of the MetaMath data in improving mathematical reasoning ability. Furthermore, for LLMs with 51-70B parameters, again, MetaMath achieves the highest testing accuracy. Particularly, MetaMath is better than GPT-3.5-Turbo on GSM8K, which is used for generating augmented data for finetuning.

4.3 EFFECT OF AUGMENTATIONS

In this section, we conduct experiments to study the effect of augmentations in MetaMath. We first finetune the LLaMA-2-7B model on augmented GSM8K (MetaMathQA-GSM8K) data, and test the finetuned model on GSM8K and MATH. Table 1 shows the testing accuracy of different combinations of augmentations. As can be seen, on GSM8K, the models trained on answer augmentation (AnsAug) or rephrasing augmentation achieve much higher accuracy than SFT, which is only trained on the training set. Combining answer augmentation and rephrasing augmentation data for fine-tuning leads to a slightly higher accuracy, which is further improved by about 4% through merging the FOBAR and SV augmentation data. As for MATH, MetaMath trained only on MetaMathQA-GSM8K data performs better than SFT, suggesting its effectiveness in generalizing to unseen mathematical tasks. We also conduct an experiment by fine-tuning LLaMA-2-7B on the MetaMathQA-MATH data and then evaluate the model on GSM8K and MATH. Table 1 shows the testing accuracy. Again, MetaMath trained on AnsAug or rephrasing augmentation data performs much better than SFT.
2309.12284#27
2309.12284#29
2309.12284
[ "2302.13971" ]
2309.12284#29
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Furthermore, merging all augmented data together for fine-tuning is better than merging AnsAug and rephrasing augmentation data, demonstrating the effectiveness of SV and FOBAR augmentation data in improving mathematical reasoning ability. Moreover, for the unseen GSM8K task, MetaMath trained on MetaMathQA-MATH data is significantly better than SFT (+20%).

| Model | #params | GSM8K | MATH |
| --- | --- | --- | --- |
| closed-source models | | | |
| GPT-4 [48] | - | 92.0 | 42.5 |
| GPT-3.5-Turbo [47] | - | 80.8 | 34.1 |
| PaLM [11] | 8B | 4.1 | 1.5 |
| PaLM [11] | 62B | 33.0 | 4.4 |
| PaLM [11] | 540B | 56.5 | 8.8 |
| PaLM-2 [2] | 540B | 80.7 | 34.3 |
| Flan-PaLM 2 [2] | 540B | 84.7 | 33.2 |
| Minerva [31] | 8B | 16.2 | 14.1 |
| Minerva [31] | 62B | 52.4 | 27.6 |
| Minerva [31] | 540B | 58.8 | 33.6 |
| open-source models (1-10B) | | | |
| LLaMA-1 [61] | 7B | 11.0 | 2.9 |
| LLaMA-2 [62] | 7B | 14.6 | 2.5 |
| MPT [44] | 7B | 6.8 | 3.0 |
| Falcon [51] | 7B | 6.8 | 2.3 |
| InternLM [27] | 7B | 31.2 | - |
| GPT-J [63] | 6B | 34.9 | - |
| ChatGLM 2 [71] | 6B | 32.4 | - |
| Qwen [1] | 7B | 51.6 | - |
| Baichuan-2 [3] | 7B | 24.5 | 5.6 |
| SFT [62] | 7B | 41.6 | - |
| RFT [69] | 7B | 50.3 | - |
| WizardMath [38] | 7B | 54.9 | 10.7 |
| MetaMath | 7B | 66.5 | 19.8 |
| open-source models (11-50B) | | | |
| LLaMA-1 [61] | 13B | 17.8 | 3.9 |
| LLaMA-1 [61] | 33B | 35.6 | 7.1 |
| LLaMA-2 [62] | 13B | 28.7 | 3.9 |
| LLaMA-2 [62] | 34B | 42.2 | 6.2 |
| MPT [44] | 30B | 15.2 | 3.1 |
| Falcon [51] | 40B | 19.6 | 2.5 |
| GAL [60] | 30B | - | 12.7 |
| Vicuna [10] | 13B | 27.6 | - |
| Baichuan-2 [3] | 13B | 52.8 | 10.1 |
| SFT [62] | 13B | 50.0 | - |
| RFT [69] | 13B | 54.8 | - |
| WizardMath [38] | 13B | 63.9 | 14.0 |
| MetaMath | 13B | 72.3 | 22.4 |
| open-source models (51-70B) | | | |
| LLaMA-1 [61] | 65B | 50.9 | 10.6 |
| LLaMA-2 [62] | 70B | 56.8 | 13.5 |
| RFT [69] | 70B | 64.8 | - |
| WizardMath [38] | 70B | 81.6 | 22.7 |
| MetaMath‡ | 70B | 82.3 | 26.6 |
2309.12284#28
2309.12284#30
2309.12284
[ "2302.13971" ]
2309.12284#30
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Table 3: Comparison of testing accuracy to existing LLMs on GSM8K and MATH. ‡ Due to the computing resource limitation, we finetune MetaMath-70B using QLoRA [14].

4.4 DISCUSSION FROM A PERPLEXITY PERSPECTIVE

According to the Superficial Alignment Hypothesis proposed by Zhou et al. [73], the capability of a model is rooted in pretraining, and data from downstream tasks acts to activate the inherent
2309.12284#29
2309.12284#31
2309.12284
[ "2302.13971" ]
2309.12284#31
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
ability of LLMs that has been learned during pretraining.

Figure 3: Lower perplexity of MetaMathQA. Figure 4: Accuracy correlates positively with diversity.

There are two important questions that arise from such a hypothesis: (i) what kind of data is most effective at activating possible latent knowledge, and (ii) why is one dataset better than another at such activation? Our empirical results suggest that, in the mathematical tasks we consider, our MetaMathQA dataset may serve as a superior activator of mathematical knowledge. Yet, why MetaMath yields superior performance compared to training on correct answer-only data or GSM8K CoT is unclear. We speculate that perhaps it is the simplicity of the data that matters. As shown in Figure 3, we compute the perplexity [41, 64] of the under-finetuned LLaMA-2-7B model on answer-only data, GSM8K CoT, and the subsections of MetaMathQA data. The perplexity of MetaMathQA is significantly lower than that of the other two datasets. This highlights its inherently easy-to-learn nature, which may be more conducive to eliciting bolstered problem-solving abilities from an LLM. This is also aligned with the findings of TinyStories [16], where short and easy story data can help LLMs generate content fluently.

4.5 DISCUSSION FROM A DIVERSITY PERSPECTIVE

As shown in Figure 2, naively prompting GPT-3.5-Turbo for answer augmentation leads to a clear accuracy saturation. After accuracy saturation, increasing the AnsAug data only yields a limited performance gain. For instance, using 80K answer augmentation data to train a LLaMA-2-7B model leads to a 59.6% accuracy, and adding 20K new AnsAug data yields only a 0.1% performance gain. This is due to the homogeneity of the additional samples, contributing to a diversity gain of only 0.05 (shown in Figure 4). In comparison, adding the same amount of data generated by question bootstrapping leads to a significant performance boost, which is due to the noticeable diversity gain brought by question bootstrapping.
2309.12284#30
2309.12284#32
2309.12284
[ "2302.13971" ]
2309.12284#32
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
As shown in Figure 4, adding 20K data points from Rephrasing, FOBAR, or SV brings an increasing diversity gain, causing accuracy gains of 0.4%, 2.3%, and 2.6%, respectively. This experiment demonstrates a positive correlation (the Pearson coefficient is 0.972) between the diversity brought by the bootstrapping methods and accuracy. This is also aligned with the success of MetaMath, which is trained with the diverse MetaMathQA dataset, including four kinds of data reflecting both the forward and backward reasoning paths.

4.6 EVALUATING THE REVERSAL MATHEMATICAL CAPABILITY

The Reversal Curse [4], where LLMs trained on a sentence "A is B" are not able to generalize to answer "B is A", also aligns with the observation in this paper that LLMs lack backward mathematical reasoning ability. To evaluate the backward mathematical capability, we propose a GSM8K-Backward test set, including 1270 backward questions constructed by using SV and FOBAR to augment the original GSM8K test set (as shown in Example 3.3 and Example 3.4). Figure 6 shows the accuracy comparison of different 7B mathematical LLMs between the GSM8K and GSM8K-Backward datasets.
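As an illustration only (not the authors' construction script), a backward test question in the spirit of GSM8K-Backward can be formed by masking the final number of a GSM8K question with x and appending the FOBAR template from Example 3.4; the number-masking heuristic below is an assumption made for the sketch.

```python
import re

FOBAR_SUFFIX = ("If we know the answer to the above question is {answer}, "
                "what is the value of unknown variable x?")

def to_backward(question: str, answer: str) -> str:
    """Mask the last number in the question with 'x' and append the FOBAR suffix."""
    matches = list(re.finditer(r"\d+(?:\.\d+)?", question))
    if not matches:
        return question  # nothing to mask
    last = matches[-1]
    masked = question[:last.start()] + "x" + question[last.end():]
    return f"{masked} {FOBAR_SUFFIX.format(answer=answer)}"
```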
2309.12284#31
2309.12284#33
2309.12284
[ "2302.13971" ]
2309.12284#33
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Figure 5: Combining the RFT [69] dataset with our MetaMathQA leads to a performance drop. Figure 6: The accuracy gap between GSM8K and GSM8K-Backward. Figure 7: Testing accuracy on questions with short length, medium length and long length.

As can
2309.12284#32
2309.12284#34
2309.12284
[ "2302.13971" ]
2309.12284#34
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
be seen, existing LLMs struggle to solve mathematical problems posed in the backward direction, while our MetaMath achieves a significant improvement on both datasets. Specifically, the ways in which different LLMs solve backward mathematical problems are illustrated through examples in Appendix A.3.

4.7 REASONING PATHS WITH INCORRECT ANSWER CAN ALSO BE USEFUL

We conduct experiments on GSM8K using LLaMA-2-7B to study whether answer augmentation samples with incorrect answers are helpful for finetuning the LLM. We randomly choose 7,473 reasoning paths with incorrect answers from the generated answers, ensuring that the size is the same as that of the original training set. From Table 4, we observe that the model finetuned on the augmented data with incorrect answers is actually better than SFT, which is counter-intuitive. We hypothesize that although the final answer is incorrect, some intermediate reasoning steps are correct (see Example 4.1). These reasoning steps can still be useful supervision signals. Our results are also aligned with [34], where they discover the importance of intermediate process supervision for reasoning.

# Example 4.1: A Reasoning Path with Incorrect Answer

Question:
2309.12284#33
2309.12284#35
2309.12284
[ "2302.13971" ]
2309.12284#35
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Tonya is in a hamburger eating contest. Each hamburger is 4 ounces. Last year the winner ate 84 ounces. How many hamburgers does she have to eat to beat last year's winner? (Ground-truth answer is 22) Generated Answer: If each hamburger is 4 ounces and last year's winner ate 84 ounces, then Tonya needs to eat 84/4 = 21 hamburgers to beat last year's winner. The answer is: 21
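Selecting which generated paths count as correct (for the augmented data in Section 3) or incorrect (for the experiment above) amounts to comparing the extracted final answer with the ground truth; a minimal sketch, assuming numeric answers and the marker-based extraction rule described in Appendix A.2 (not the authors' exact matching logic):

```python
def split_by_correctness(samples, extract_final_answer):
    """samples: iterable of (question, generated_path, gold_answer) triples.
    extract_final_answer: callable returning the text after 'The answer is:'."""
    correct, incorrect = [], []
    for question, path, gold in samples:
        pred = extract_final_answer(path)
        try:
            ok = abs(float(pred) - float(gold)) < 1e-6
        except ValueError:
            ok = pred == gold  # non-numeric answers: fall back to string equality
        (correct if ok else incorrect).append((question, path, gold))
    return correct, incorrect
```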
2309.12284#34
2309.12284#36
2309.12284
[ "2302.13971" ]
2309.12284#36
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
4.8 MORE DATA IS NOT ALWAYS BETTER

There are also previous works that augment mathematical reasoning data for fine-tuning [38, 69]. An interesting question is whether combining existing augmented datasets with our MetaMathQA can improve the overall mathematical problem-solving performance. We select the RFT [69] dataset as the external dataset. Figure 5 shows that merging the RFT data into MetaMathQA actually hurts the performance, indicating that the RFT data may not be beneficial to MetaMath. Such a phenomenon is consistently observed for MetaMathQA subsets of different sizes (from 20K to 100K), while the added RFT dataset is about 47K. The performance drop implies that more augmented data does not always help the generalization.

# Example 4.2: Case Study in Test Set of GSM8K
2309.12284#35
2309.12284#37
2309.12284
[ "2302.13971" ]
2309.12284#37
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Question: Darrell and Allen's ages are in the ratio of 7:11. If their total age now is 162, calculate Allen's age 10 years from now. (Ground-truth answer is 109) SFT Answer: The total ratio representing their ages is 7+11=<<7+11=18>>18. The fraction representing Darrell's age is 7/18, and since their total age now is 162, Darrell's age is 7/18*162 = <<7/18*162=60>>60. The fraction representing Allen's age is 11/18, and since their total age now is 162, Allen's age is 11/18*162 = <<11/18*162=84>>84. Ten years from now, Allen's age will be 84 + 10 = <<84+10=94>>94.
2309.12284#36
2309.12284#38
2309.12284
[ "2302.13971" ]
2309.12284#38
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
RFT Answer: The total ratio representing Darrell and Allen's ages is 7+11 = <<7+11=18>>18. The fraction representing Darrell's age from the total ratio is 7/18, and since their total age is 162, Darrell is 7/18*162 = 100 years old. Allen's age, from the total ratio, is 11/18, and since Darrell is 100, Allen is 11/18*109 = <<11/18*100=66>>66 years old. Ten years from now, Allen will be 66 + 10 = 76 years old. WizardMath Answer: We know Darrell and Allen's ages are in the ratio of 7:11, which means the total ratio representing their ages is 7+11 = 18. If their total age now is 162, we can calculate Darrell's age by using the ratio. Specifically, Darrell's age is 7/18 of 162, which is 7/18*162 = 49 years old. ... The answer is: 76. Our Answer: The ratio of Darrell's age to Allen's age is 7:11. Let's assume Darrell's age is 7x and Allen's age is 11x. The total age of Darrell and Allen is 7x + 11x = 18x. We are given that the total age is 162, so 18x = 162. Dividing both sides by 18, we get x = 9.
2309.12284#37
2309.12284#39
2309.12284
[ "2302.13971" ]
2309.12284#39
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Therefore, Allen's age is 11x = 11 * 9 = 99. 10 years from now, Allen's age will be 99 + 10 = 109.

4.9 ERROR ANALYSIS

We have demonstrated that, across multiple scales, our MetaMath models can achieve stellar problem-solving performance. Yet, it is important to consider the characteristics of problems that induce errors in MetaMath and existing open-source mathematical models. In particular, we consider the relationship between question length and model performance. To investigate, we divide the
2309.12284#38
2309.12284#40
2309.12284
[ "2302.13971" ]
2309.12284#40
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
GSM8K test set into three equally-sized subsets based on the lengths of the questions and calculate the accuracy of the models over each subset. We find in Figure 7 that MetaMath and related methods struggle with longer questions. However, excitingly, MetaMath always obtains superior performance. We see the study of improving model performance on longer questions, for instance by further augmenting the MetaMathQA dataset, as ripe grounds for future work.

# 5 CONCLUDING REMARKS

In this paper, we focus on improving the mathematical problem-solving abilities of open-source LLMs. By bootstrapping mathematical questions on GSM8K and MATH, we present MetaMathQA, a high-quality and diverse dataset involving both forward and backward reasoning samples. Our family of LLMs finetuned on MetaMathQA, called MetaMath, has achieved state-of-the-art performance on mathematical benchmarks among all open-source LLMs. Remarkably, MetaMath-7B reaches 66.5% on GSM8K and 19.8% on MATH, surpassing previous open-source LLMs by a significant margin. Our work further emphasizes the importance of the characteristics of the training data in boosting LLM problem-solving capabilities.
2309.12284#39
2309.12284#41
2309.12284
[ "2302.13971" ]
2309.12284#41
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
# ACKNOWLEDGEMENT

The authors would like to sincerely thank Katherine M. Collins from the University of Cambridge for her valuable insights and suggestions.

# REFERENCES

[1] Alibaba. Qwen-7b. Technical Report, 2023. [2] R. Anil, A. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, E. Chu, J. Clark, L. Shafey, Y. Huang, K. Meier-Hellstern, G. Mishra, E. Moreira, M. Omernick, K. Robinson, S. Ruder, Y. Tay, K. Xiao, Y. Xu, Y. Zhang, G. Abrego, J. Ahn, J. Austin, P. Barham, J. Botha, J. Bradbury, S. Brahma, K. Brooks, M. Catasta, Y. Cheng, C. Cherry, C. Choquette-Choo, A. Chowdhery, C. Crepy, S. Dave, M. Dehghani, S. Dev, J. Devlin, M. Díaz, N. Du, E. Dyer, V. Feinberg, F. Feng, V. Fienber, M. Freitag, X. Garcia, S. Gehrmann, L. Gonzalez, G. Gur-Ari, S. Hand, H. Hashemi, L. Hou, J. Howland, A. Hu, J. Hui, J. Hurwitz, M. Isard, A. Ittycheriah, M. Jagielski, W. Jia, K. Kenealy, M. Krikun, S. Kudugunta, C. Lan, K. Lee, B. Lee, E. Li, M. Li, W. Li, Y. Li, J. Li, H. Lim, H. Lin, Z. Liu, F. Liu, M. Maggioni, A. Mahendru, J. Maynez, V. Misra, M. Moussalem, Z.
2309.12284#40
2309.12284#42
2309.12284
[ "2302.13971" ]
2309.12284#42
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Nado, J. Nham, E. Ni, A. Nystrom, A. Parrish, M. Pellat, M. Polacek, A. Polozov, R. Pope, S. Qiao, E. Reif, B. Richter, P. Riley, A. Ros, A. Roy, B. Saeta, R. Samuel, R. Shelby, A. Slone, D. Smilkov, D. So, D. Sohn, S. Tokumine, D. Valter, V. Vasudevan, K. Vodrahalli, X. Wang, P. Wang, Z. Wang, T. Wang, J. Wieting, Y. Wu, K. Xu, Y. Xu, L. Xue, P. Yin, J. Yu, Q. Zhang, S. Zheng, C. Zheng, W. Zhou, D. Zhou, S. Petrov, and Y. Wu.
2309.12284#41
2309.12284#43
2309.12284
[ "2302.13971" ]
2309.12284#43
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
PaLM 2: Technical Report. Preprint arXiv:2305.10403, 2023. [3] BaichuanInc. Baichuan 2. Technical Report, 2023. [4] L. Berglund, M. Tong, M. Kaufmann, M. Balesni, A. Stickland, T. Korbak, and O. Evans. The Reversal Curse: LLMs Trained on "A is B" Fail to Learn "B is A". Preprint arXiv:2309.12288, 2023. [5] J.
2309.12284#42
2309.12284#44
2309.12284
[ "2302.13971" ]
2309.12284#44
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Bilmes. Submodularity In Machine Learning and Artificial Intelligence. Preprint arXiv:2202.00132, 2022. [6] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D.
2309.12284#43
2309.12284#45
2309.12284
[ "2302.13971" ]
2309.12284#45
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Amodei. Language Models are Few-Shot Learners. In Neural Information Processing Systems, 2020. [7] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. Such,
2309.12284#44
2309.12284#46
2309.12284
[ "2302.13971" ]
2309.12284#46
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W.
2309.12284#45
2309.12284#47
2309.12284
[ "2302.13971" ]
2309.12284#47
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Zaremba. Evaluating Large Language Models Trained on Code. Preprint arXiv:2107.03374, 2021. [8] W. Chen, X. Ma, X. Wang, and W. Cohen. Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. Preprint arXiv:2211.12588, 2022. [9] Y. Chen, R. Zhong, S. Zha, G. Karypis, and H. He.
2309.12284#46
2309.12284#48
2309.12284
[ "2302.13971" ]
2309.12284#48
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Meta-learning via Language Model In-context Tuning. In Annual Meeting of the Association for Computational Linguistics, 2022. [10] W. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. Gonzalez, I. Stoica, and E. Xing. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality.
2309.12284#47
2309.12284#49
2309.12284
[ "2302.13971" ]
2309.12284#49
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Technical Report, 2023. [11] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. Dai, T. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel. PaLM: Scaling Language Modeling with Pathways. Preprint arXiv:2204.02311, 2022. [12] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, C. Hesse, and J. Schulman.
2309.12284#48
2309.12284#50
2309.12284
[ "2302.13971" ]
2309.12284#50
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Training Verifiers to Solve Math Word Problems. Preprint arXiv:2110.14168, 2021. [13] K. Collins, A. Jiang, S. Frieder, L. Wong, M. Zilka, U. Bhatt, T. Lukasiewicz, Y. Wu, J. Tenenbaum, W. Hart, T. Gowers, W. Li, A. Weller, and M. Jamnik.
2309.12284#49
2309.12284#51
2309.12284
[ "2302.13971" ]
2309.12284#51
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Evaluating Language Models for Mathematics through Interactions. Preprint arXiv:2306.01694, 2023. [14] T. Dettmers, A. Pagnoni, A. Holtzman, and L. Zettlemoyer. Qlora: Efficient finetuning of quantized llms. Preprint arXiv:2305.14314, 2023. [15] J. Devlin, M. Chang, K. Lee, and K.
2309.12284#50
2309.12284#52
2309.12284
[ "2302.13971" ]
2309.12284#52
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In North American Chapter of the Association for Computational Linguistics, 2019. [16] R. Eldan and Y. Li. TinyStories: How Small Can Language Models Be and Still Speak Coherent English? Preprint arXiv:2305.07759, 2023. [17] Y. Fu, H. Peng, L. Ou, A. Sabharwal, and T. Khot.
2309.12284#51
2309.12284#53
2309.12284
[ "2302.13971" ]
2309.12284#53
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Specializing Smaller Language Models towards Multi-Step Reasoning. In International Conference on Machine Learning, 2023. [18] Y. Fu, H. Peng, A. Sabharwal, P. Clark, and T. Khot. Complexity-Based Prompting for Multi- step Reasoning. In International Conference on Learning Representations, 2023. [19] J. Gou, B. Yu, S. Maybank, and D. Tao. Knowledge Distillation: A Survey.
2309.12284#52
2309.12284#54
2309.12284
[ "2302.13971" ]
2309.12284#54
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
International Journal of Computer Vision, 2021. [20] T. He, C. Shen, Z. Tian, D. Gong, C. Sun, and Y. Yan. Knowledge Adaptation for Efficient Semantic Segmentation. In Computer Vision and Pattern Recognition, 2019. [21] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt.
2309.12284#53
2309.12284#55
2309.12284
[ "2302.13971" ]
2309.12284#55
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Measuring Mathematical Problem Solving With the MATH Dataset. In Neural Information Processing Systems: Datasets and Benchmarks, 2021. [22] G. Hinton, O. Vinyals, and J. Dean. Distilling the Knowledge in a Neural Network. Preprint arXiv:1503.02531, 2015. [23] N. Ho, L. Schmid, and S. Yun. Large Language Models Are Reasoning Teachers. In Annual Meeting of the Association for Computational Linguistics, 2023.
2309.12284#54
2309.12284#56
2309.12284
[ "2302.13971" ]
2309.12284#56
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
[24] C. Hsieh, C. Li, C. Yeh, H. Nakhost, Y. Fujii, A. Ratner, R. Krishna, C. Lee, and T. Pfister. Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes. In Annual Meeting of the Association for Computational Linguistics, 2023. [25] J. Huang, S. Gu, L. Hou, Y. Wu, X. Wang, H. Yu, and J. Han.
2309.12284#55
2309.12284#57
2309.12284
[ "2302.13971" ]
2309.12284#57
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large Language Models Can Self-Improve. Preprint arXiv:2210.11610, 2022. [26] S. Imani, L. Du, and H. Shrivastava. MathPrompter: Mathematical Reasoning using Large Language Models. Preprint arXiv:2303.05398, 2023. [27] InternLM. InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities. Technical Report, 2023.
2309.12284#56
2309.12284#58
2309.12284
[ "2302.13971" ]
2309.12284#58
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
[28] W. Jiang, H. Shi, L. Yu, Z. Liu, Y. Zhang, Z. Li, and J. Kwok. Forward-Backward Reasoning in Large Language Models for Mathematical Verification. Preprint arXiv:2308.07758, 2023. [29] W. Jiang, Y. Zhang, and J. Kwok. Effective Structured-Prompting by Meta-Learning and Representative Verbalizer. In International Conference on Machine Learning, 2023.
2309.12284#57
2309.12284#59
2309.12284
[ "2302.13971" ]
2309.12284#59
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
[30] N. Kilbertus, G. Parascandolo, and B. Schölkopf. Generalization in anti-causal learning. Preprint arXiv:1812.00524, 2018. [31] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, Y. Wu, B. Neyshabur, G. Gur-Ari, and V. Misra.
2309.12284#58
2309.12284#60
2309.12284
[ "2302.13971" ]
2309.12284#60
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Solving Quantitative Reasoning Problems with Language Models. In Neural Information Processing Systems, 2022. [32] R. Li, L. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim, Q. Liu, E. Zheltonozhskii, T. Zhuo, T. Wang, O. Dehaene, M. Davaadorj, J. Lamy-Poirier, J. Monteiro, O. Shliazhko, N. Gontier, N. Meade, A. Zebaze, M. Yee, L. Umapathi, J. Zhu, B. Lipkin, M. Oblokulov, Z. Wang, R. Murthy, J. Stillerman, S. Patel, D. Abulkhanov, M. Zocca, M. Dey, Z. Zhang, N. Fahmy, U. Bhattacharyya, W. Yu, S. Singh, S. Luccioni, P. Villegas, M. Kunakov, F. Zhdanov, M. Romero, T. Lee, N. Timor, J. Ding, C. Schlesinger, H. Schoelkopf, J. Ebert, T. Dao, M. Mishra, A. Gu, J. Robinson, C. Anderson, B. Dolan-Gavitt, D. Contractor, S. Reddy, D. Fried, D. Bahdanau, Y. Jernite, C. Ferrandis, S. Hughes, T. Wolf, A. Guha, L. Werra, and H. Vries.
2309.12284#59
2309.12284#61
2309.12284
[ "2302.13971" ]
2309.12284#61
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
StarCoder: May the Source Be with You! Preprint arXiv:2305.06161, 2023. [33] S. Li, J. Chen, Y. Shen, Z. Chen, X. Zhang, Z. Li, H. Wang, J. Qian, B. Peng, Y. Mao, W. Chen, and X. Yan. Explanations from Large Language Models Make Small Reasoners Better. Preprint arXiv:2210.06726, 2022.
2309.12284#60
2309.12284#62
2309.12284
[ "2302.13971" ]
2309.12284#62
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
[34] H. Lightman, V. Kosaraju, Y. Burda, H. Edwards, B. Baker, T. Lee, J. Leike, J. Schulman, I. Sutskever, and K. Cobbe. Let's Verify Step by Step. Preprint arXiv:2305.20050, 2023. [35] W. Liu, B. Dai, A. Humayun, C. Tay, C. Yu, L. Smith, J. Rehg, and L.
2309.12284#61
2309.12284#63
2309.12284
[ "2302.13971" ]
2309.12284#63
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Song. Iterative Machine Teaching. In International Conference on Machine Learning, 2017. [36] W. Liu, Z. Liu, H. Wang, L. Paull, B. Schölkopf, and A. Weller. Iterative Teaching by Label Synthesis. In Neural Information Processing Systems, 2021. [37] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V.
2309.12284#62
2309.12284#64
2309.12284
[ "2302.13971" ]
2309.12284#64
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Stoyanov. RoBERTa: A Robustly Optimized BERT Pretraining Approach. Preprint arXiv:1907.11692, 2019. [38] H. Luo, Q. Sun, C. Xu, P. Zhao, J. Lou, C. Tao, X. Geng, Q. Lin, S. Chen, and D. Zhang. WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct. Preprint arXiv:2308.09583, 2023.
2309.12284#63
2309.12284#65
2309.12284
[ "2302.13971" ]
2309.12284#65
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
[39] Z. Luo, C. Xu, P. Zhao, Q. Sun, X. Geng, W. Hu, C. Tao, J. Ma, Q. Lin, and D. Jiang. WizardCoder: Empowering Code Large Language Models with Evol-Instruct. Preprint arXiv:2306.08568, 2023. [40] L. Magister, J. Mallinson, J. Adamek, E. Malmi, and A.
2309.12284#64
2309.12284#66
2309.12284
[ "2302.13971" ]
2309.12284#66
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Severyn. Teaching Small Language Models to Reason. In Annual Meeting of the Association for Computational Linguistics, 2023. [41] M. Marion, A. Üstün, L. Pozzobon, A. Wang, M. Fadaee, and S. Hooker. When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale. Preprint arXiv:2309.04564, 2023.
2309.12284#65
2309.12284#67
2309.12284
[ "2302.13971" ]
2309.12284#67
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
[42] S. Min, M. Lewis, L. Zettlemoyer, and H. Hajishirzi. MetaICL: Learning to Learn In Context. In North American Chapter of the Association for Computational Linguistics, 2022. [43] S. Mirzadeh, M. Farajtabar, A. Li, N. Levine, A. Matsukawa, and H. Ghasemzadeh. Improved Knowledge Distillation via Teacher Assistant. In AAAI Conference on Artificial Intelligence, 2020. [44] MosaicML.
2309.12284#66
2309.12284#68
2309.12284
[ "2302.13971" ]
2309.12284#68
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs. Technical Report, 2023. [45] E. Nijkamp, B. Pang, H. Hayashi, L. Tu, H. Wang, Y. Zhou, S. Savarese, and C. Xiong. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. Preprint arXiv:2203.13474, 2022. [46] OpenAI. GPT-3.5. Technical Report, 2022. [47] OpenAI. GPT-3.5-Turbo. Technical Report, 2022. [48] OpenAI. GPT-4. Technical Report, 2023. [49] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training Language Models to Follow Instructions with Human Feedback. In Neural Information Processing Systems, 2022.
2309.12284#67
2309.12284#69
2309.12284
[ "2302.13971" ]
2309.12284#69
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
[50] W. Park, D. Kim, Y. Lu, and M. Cho. Relational Knowledge Distillation. In Computer Vision and Pattern Recognition, 2019. [51] G. Penedo, Q. Malartic, D. Hesslow, R. Cojocaru, A. Cappelli, H. Alobeidli, B. Pannier, E. Almazrouei, and J. Launay. The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only. Preprint arXiv:2306.01116, 2023.
2309.12284#68
2309.12284#70
2309.12284
[ "2302.13971" ]
2309.12284#70
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
[52] Z. Qiu, W. Liu, T. Xiao, Z. Liu, U. Bhatt, Y. Luo, A. Weller, and B. Schölkopf. Iterative Teaching by Data Hallucination. In Artificial Intelligence and Statistics, 2023. [53] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever.
2309.12284#69
2309.12284#71
2309.12284
[ "2302.13971" ]
2309.12284#71
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Language Models are Unsupervised Multitask Learners. Technical Report, 2019. [54] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. Liu. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 2020. [55] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O.
2309.12284#70
2309.12284#72
2309.12284
[ "2302.13971" ]
2309.12284#72
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Klimov. Proximal Policy Optimization Algorithms. Preprint arXiv:1707.06347, 2017. [56] P. Shen, X. Lu, S. Li, and H. Kawai. Feature Representation of Short Utterances Based on Knowledge Distillation for Spoken Language Identification. In International Speech Communication Association, 2018. [57] K. Shridhar, A. Stolfo, and M. Sachan.
2309.12284#71
2309.12284#73
2309.12284
[ "2302.13971" ]
2309.12284#73
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Distilling Reasoning Capabilities into Smaller Language Models. In Findings of the Association for Computational Linguistics, 2023. [58] A. Talmor, J. Herzig, N. Lourie, and J. Berant. CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge. In North American Chapter of the Association for Computational Linguistics, 2019. [59] R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. Hashimoto.
2309.12284#72
2309.12284#74
2309.12284
[ "2302.13971" ]
2309.12284#74
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Stanford Alpaca: An Instruction-following LLaMA Model. Technical report, 2023. [60] R. Taylor, M. Kardas, G. Cucurull, T. Scialom, A. Hartshorn, E. Saravia, A. Poulton, V. Kerkez, and R. Stojnic. Galactica: A Large Language Model for Science. Preprint arXiv:2211.09085, 2022. [61] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. LLaMA: Open and Efficient Foundation Language Models. Preprint arXiv:2302.13971, 2023. [62] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. Ferrer, M. Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. Koura, M. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A.
2309.12284#73
2309.12284#75
2309.12284
[ "2302.13971" ]
2309.12284#75
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Schelten, R. Silva, E. Smith, R. Subramanian, X. Tan, B. Tang, R. Taylor, A. Williams, J. Kuan, P. Xu, Z. Yan, I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, and T.
2309.12284#74
2309.12284#76
2309.12284
[ "2302.13971" ]
2309.12284#76
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Scialom. LLaMA 2: Open Foundation and Fine-Tuned Chat Models. Preprint arXiv:2307.09288, 2023. [63] B. Wang and A. Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. Technical Report, 2021. [64] P. Wang, L. Li, L. Chen, F. Song, B. Lin, Y. Cao, T. Liu, and Z. Sui.
2309.12284#75
2309.12284#77
2309.12284
[ "2302.13971" ]
2309.12284#77
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Making Large Language Models Better Reasoners with Alignment. Preprint arXiv:2309.02144, 2023. [65] T. Wang, J. Zhu, A. Torralba, and A. Efros. Dataset Distillation. Preprint arXiv:1811.10959, 2018. [66] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou.
2309.12284#76
2309.12284#78
2309.12284
[ "2302.13971" ]
2309.12284#78
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Self-Consistency Improves Chain of Thought Reasoning in Language Models. In International Conference on Learning Representations, 2023. [67] J. Wei, X. Wang, D. Schuurmans, Maarten Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain of Thought Prompting Elicits Reasoning in Large Language Models. In Neural Information Processing Systems, 2022. [68] Y. Weng, M. Zhu, F. Xia, B. Li, S. He, K. Liu, and J. Zhao.
2309.12284#77
2309.12284#79
2309.12284
[ "2302.13971" ]
2309.12284#79
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large Language Models are Better Reasoners with Self-Verification. Preprint arXiv:2212.09561, 2023. [69] Z. Yuan, H. Yuan, C. Li, G. Dong, C. Tan, and C. Zhou. Scaling Relationship on Learning Mathematical Reasoning with Large Language Models. Preprint arXiv:2308.01825, 2023. [70] X. Yue, X. Qu, G. Zhang, Y. Fu, W. Huang, H. Sun, Y. Su, and W. Chen.
2309.12284#78
2309.12284#80
2309.12284
[ "2302.13971" ]
2309.12284#80
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning. Preprint arXiv:2309.05653, 2023. [71] A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, W. Zheng, X. Xia, W. Tam, Z. Ma, Y. Xue, J. Zhai, W. Chen, P. Zhang, Y. Dong, and J. Tang.
2309.12284#79
2309.12284#81
2309.12284
[ "2302.13971" ]
2309.12284#81
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
GLM-130B: An Open Bilingual Pre-trained Model. Preprint arXiv:2210.02414, 2022. [72] B. Zhao, K. Mopuri, and H. Bilen. Dataset Condensation with Gradient Matching. In Interna- tional Conference on Learning Representations, 2021. [73] C. Zhou, P. Liu, P. Xu, S. Iyer, J. Sun, Y. Mao, X. Ma, A. Efrat, P. Yu, L. Yu, S. Zhang, G. Ghosh, M. Lewis, L. Zettlemoyer, and O. Levy. LIMA: Less Is More for Alignment.
2309.12284#80
2309.12284#82
2309.12284
[ "2302.13971" ]
2309.12284#82
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Preprint arXiv:2305.11206, 2023. [74] D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet, Q. Le, and E. Chi. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. In International Conference on Learning Representations, 2023.
2309.12284#81
2309.12284#83
2309.12284
[ "2302.13971" ]
2309.12284#83
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
[75] X. Zhu. Machine Teaching: An Inverse Problem to Machine Learning and an Approach Toward Optimal Education. In AAAI Conference on Artificial Intelligence, 2015.

A PROMPTS

A.1 REPHRASING PROMPTS

# Example A.1: Prompt for Rephrasing GSM8K Questions

You are an AI assistant to help me rephrase questions. Follow the given examples. Question: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
2309.12284#82
2309.12284#84
2309.12284
[ "2302.13971" ]
2309.12284#84
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Rephrase the above question: What is the amount of money that Olivia has left after purchasing five bagels for $3 each, if she initially had $23? Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? Rephrase the above question: After losing 23 golf balls on Tuesday and an additional 2 on Wednesday, how many golf balls does Michael have left if he initially had 58 golf balls? Question: Angelo and Melanie want to plan how many hours over the next week they should study together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and 1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days should they plan to study total over the next week if they take a 10-minute break every hour, include 3 10-minute snack breaks each day, and 30 minutes for lunch each day?
2309.12284#83
2309.12284#85
2309.12284
[ "2302.13971" ]
2309.12284#85
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Rephrase the above question: Angelo and Melanie need to study 2 chapters in their textbook and 4 worksheets for their upcoming test. They have planned to dedicate 3 hours for each chapter and 1.5 hours for each worksheet. They can study for a maximum of 4 hours each day, taking into account 10-minute breaks every hour, 3 10-minute snack breaks per day, and 30 minutes for lunch. How many days do they need to study in total over the next week to complete their study plan?
2309.12284#84
2309.12284#86
2309.12284
[ "2302.13971" ]
2309.12284#86
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Question: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? Rephrase the above question: If Leah had 32 chocolates and her sister had 42, and they both consumed 35 chocolates, what is the total number of chocolates that they have left? Question: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?
2309.12284#85
2309.12284#87
2309.12284
[ "2302.13971" ]
2309.12284#87
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Rephrase the above question: If there were initially nine computers in the server room and five more computers were added each day from Monday to Thursday, what is the current total number of computers in the server room? Question: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? Rephrase the above question: If Jason initially had 20 lollipops and now has 12 after giving some to Denny, how many lollipops did he give to Denny? Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He rearranged five of these boxes into packages of six highlighters each and sold them for $3 per package. He sold the rest of the highlighters separately at the rate of three pens for $2. How much profit did he make in total, in dollars?
2309.12284#86
2309.12284#88
2309.12284
[ "2302.13971" ]
2309.12284#88
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Rephrase the above question: Sam purchased 12 boxes, each containing 30 highlighter pens, at $10 per box. He repackaged five of these boxes into sets of six highlighters and sold them for $3 per set. He sold the remaining highlighters individually at a rate of three pens for $2. What is the total profit he made in dollars? Question: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?
2309.12284#87
2309.12284#89
2309.12284
[ "2302.13971" ]
2309.12284#89
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Rephrase the above question: If there were initially 15 trees in the grove and the grove workers are planning to plant more trees today, resulting in a total of 21 trees, how many trees did the workers plant today? Question: {Q} Rephrase the above question:

# Example A.2: Prompts for Rewriting Question with Answer into a Declarative Statement

You are an AI assistant to help me rewrite question into a declarative statement when its answer is provided. Follow the given examples and rewrite the question. Question:
2309.12284#88
2309.12284#90
2309.12284
[ "2302.13971" ]
2309.12284#90
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
How many cars are in the parking lot? The answer is: 5. Result: There are 5 cars in the parking lot. ... Question: {Q} The answer is: {A}. Result:

A.2 EXPERIMENTAL DETAILS

Training Details. For the fully fine-tuning setting, we use the AdamW optimizer to train the model for 3 epochs with a batch size of 128. We use 8 NVIDIA A100 GPUs to train the 7B and 13B models; the learning rate is set to 2e-5 with a 3% learning rate warmup. For the 70B model QLoRA fine-tuning, the LoRA rank and alpha are 96 and 16, with a 0.05 dropout between the two matrices. The LoRA matrices are appended in both the attention layer and the MLP layer. We use the same AdamW optimizer but with a 1e-4 learning rate and without a learning rate warmup. Training Prompt 1 is basically from Alpaca [59], where the instruction is replaced by the MetaMathQA question (an illustrative configuration sketch is given after Prompt 2 below).

# Prompt 1: Training Prompt

Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response:

# Prompt 2: Evaluation Prompt

Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response: Let's think step by step.
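For illustration, the hyperparameters above can be expressed as Hugging Face TrainingArguments and a peft LoraConfig; the paper does not state that these libraries were used, so the sketch below is only an assumed translation of the reported settings.

```python
from transformers import TrainingArguments
from peft import LoraConfig

# Full fine-tuning (7B / 13B): AdamW, 3 epochs, global batch size 128, lr 2e-5, 3% warmup.
full_ft_args = TrainingArguments(
    output_dir="metamath-7b",          # placeholder output path
    num_train_epochs=3,
    per_device_train_batch_size=16,    # 8 GPUs x 16 = 128 global batch (illustrative split)
    learning_rate=2e-5,
    warmup_ratio=0.03,
    bf16=True,
)

# QLoRA fine-tuning (70B): rank 96, alpha 16, dropout 0.05, LoRA on attention and MLP layers,
# lr 1e-4, no warmup.
qlora_config = LoraConfig(
    r=96,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",   # attention (LLaMA module names)
                    "gate_proj", "up_proj", "down_proj"],      # MLP
    task_type="CAUSAL_LM",
)
```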
2309.12284#89
2309.12284#91
2309.12284
[ "2302.13971" ]
2309.12284#91
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Evaluation Prompting. Different from the few-shot prompting evaluation for closed-source models, we find that zero-shot prompting is better for finetuned LLMs, which also saves more inference costs. Hence, MetaMath uses the zero-shot Evaluation Prompt 2 for GSM8K and MATH, where the instruction is replaced by the testing question. We set the temperature to 0 for the fine-tuned LLaMA model. Answer Extraction. Different from Wei et al. [67], who use complex string rules to extract the final answer, and in line with WizardMath [38], MetaMath only extracts the string behind "The answer is:" as the final answer. To teach the model this extraction method, we append "The answer is: {gold answer}" to the end of answers in the MetaMathQA dataset, where the gold answer is replaced by the respective question's answer.
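A minimal sketch of the answer-extraction rule described above (take whatever follows the final "The answer is:" marker in the generated completion); this is an illustrative implementation, not the authors' script.

```python
def extract_answer(completion: str, marker: str = "The answer is:") -> str:
    """Return the text after the last occurrence of the marker, or '' if absent."""
    if marker not in completion:
        return ""  # no marker produced; treat as unanswered
    return completion.rsplit(marker, 1)[-1].strip().rstrip(".")

print(extract_answer("... Dividing both sides by 22, we get: x = 5. The answer is: 5"))  # -> "5"
```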
2309.12284#90
2309.12284#92
2309.12284
[ "2302.13971" ]
2309.12284#92
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
A.3 HOW DO DIFFERENT LLMS SOLVE REVERSAL MATHEMATICAL PROBLEMS?

# Example A.3: Case study about Reverse Mathematical problem

Question: A robe takes 2 bolts of blue fiber and x% that much white fiber. It takes a total of 3 bolts. What is the value of unknown variable x? (Ground-truth answer is 50) SFT Answer: We know that the robe takes 2 * x% = 3 bolts. So 2 * x% = 3. So x% = 3/2 = 60%. So x = 60. #### 60. The answer is: 60 RFT Answer: We know that the robe takes 2 bolts of blue fiber and x% that much white fiber, so it takes 2 * 0.01 = 0.02 bolts of white fiber. So in total, it takes 2 + 0.02 = <<2+0.02=2.02>>2.02 bolts of blue and white fiber. Since we know that it takes 3 bolts in total, we can write the equation 2.02 = 3. We can solve for x by multiplying both sides by 3, getting 6.04 = 3 * 0.02. We can then write the equation 6.04 = 3, and solve for x by dividing both sides by 6.04, getting x = 0.02. #### 2 The answer is: 2 WizardMath Answer:
2309.12284#91
2309.12284#93
2309.12284
[ "2302.13971" ]
2309.12284#93
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Step 1: Define the variables. Let x% of the blue fiber be represented by the variable y. Then, the robe takes 2y bolts of blue fiber. Step 2: Set up the equation. Since the robe takes a total of 3 bolts, we can write the equation as follows: 2y + xy = 3 Step 3: Simplify the equation. We can simplify the equation by combining the terms with y: 2y + xy = 3 3y + xy = 3 Step 4: Solve for x. To solve for x, we need to isolate xy on one side of the equation. We can do this by subtracting 3y from both sides: 3y + xy - 3y = 3 - 3y xy = 3 - 3y Step 5: Find the value of x. Now, we need to find the value of x. We know that xy = 3 - 3y, and we want to express y in terms of x. To do this, we can divide both sides of the equation by x: (3 - 3y) / x = y Step 6: Solve for y. Now, we can solve for y by multiplying both sides of the equation by x: 3 - 3y = xy 3 - 3y = x(3 - 3y) Step 7: Simplify the equation. We can simplify the equation by expanding and combining like terms: 3 - 3y = 3 - 3y Step 8: Find the value of x. Now, we can see that y = 0, which means that x% of the blue fiber is 0%. Since we are dealing with a percentage, the only possible value for x is 0. Therefore, the value of unknown variable x is 0. Final Answer: x = 0. The answer is: x Our Answer:
2309.12284#92
2309.12284#94
2309.12284
[ "2302.13971" ]
2309.12284#94
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
The robe takes 2 bolts of blue fiber. It takes x% of that much white fiber, which means it takes (x/100) * 2 bolts of white fiber. In total, it takes 3 bolts. So, we can write the equation as: 2 + (x/100) * 2 = 3 Let's simplify and solve for x: 2 + (2x/100) = 3 To isolate x, we subtract 2 from both sides of the equation: 2 + (2x/100) - 2 = 3 - 2 (2x/100) = 1 To solve for x, we multiply both sides of the equation by 100/2: (2x/100) * (100/2) = 1 * (100/2) x = 50 The value of x is 50. #### 50 The answer is: 50
arXiv:2309.10818v2 [cs.CL] 9 Oct 2023

# SlimPajama-DC: Understanding Data Combinations for LLM Training

Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Joel Hestness, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Natalia Vassilieva, Daria Soboleva, Eric Xing

MBZUAI, UIUC, Stanford University, CMU, Cerebras Systems

# Abstract

This paper aims to understand the impacts of various data combinations (e.g., web text, wikipedia, github, books) on the training of large language models using SlimPajama. SlimPajama [33] is a rigorously deduplicated, multi-source dataset, which has been refined and further deduplicated to 627B tokens from the extensive 1.2T tokens RedPajama dataset [7] contributed by Together.
We've termed our research as SlimPajama-DC, an empirical analysis designed to uncover fundamental characteristics and best practices associated with employing SlimPajama in the training of large language models. During our research with SlimPajama, two pivotal observations emerged: (1) Global deduplication vs. local deduplication. We analyze and discuss how global (across different sources of datasets) and local (within the single source of dataset) deduplications affect the performance of trained models. (2) Proportions of high-quality/highly-deduplicated multi-source datasets in the combination. To study this, we construct six configurations of the SlimPajama dataset and train individual ones using a 1.3B Cerebras-GPT [11] model with Alibi [28] and SwiGLU [32]. Our best configuration outperforms the 1.3B model trained on RedPajama using the same number of training tokens by a significant margin. All our 1.3B models are trained on a Cerebras 16× CS-2 cluster with a total of 80 PFLOP/s in bf16 mixed precision. We further extend our discoveries (such as that increasing data diversity is crucial after global deduplication) on a 7B model with large batch-size training. Our models and the separate SlimPajama-DC datasets are available at: link1 and the original SlimPajama is at: link2.
# Contents

1 Introduction
2 Dataset Overview
  2.1 Number of Tokens
  2.2 Dataset Token Frequency Statistics
  2.3 Dataset Processing Procedure
    2.3.1 Low-length Document Filtering
    2.3.2 Global Deduplication
3 Dataset Combination Configurations
  3.1 SlimPajama
  3.2 RefinedWeb
4 Network Architecture and Training Details
  4.1 Network Architecture
  4.2 Training Details
5 Results and Analysis
  5.1 Huggingface Leaderboard Evaluation with Harness
  5.2 More Evaluations
  5.3 Training Loss
6 Large Batch-size (LBS) Training on a 7B Model
  6.1 7B Training Data Combination
  6.2 7B Model Training Configurations
  6.3 Fast Training with Large Batch-size
  6.4 Progressive Training on Weight Decay
  6.5 Results of Pre-training and Instruction Tuning
7 Related Work
  7.1 RedPajama, SlimPajama and Others
  7.2 Data Processing and Optimization Approaches
  7.3 Data Combination for Training Large Language Models
  7.4 Large Batch Training for Large Language Models
8 Conclusion
A Data Proportion Details
B MMLU
# 1 Introduction

The success of modern large-scale models is deeply rooted in their training data. For large language models, the emphasis is not merely on generic text but on "diverse text". To guarantee the model's linguistic expertise and its comprehensive understanding of the world, this text must span a broad spectrum of domains, genres, languages, and more. Consequently, the composition
of the pretraining data domains, such as Github, Wikipedia, books, and web text like CommonCrawl, plays a critical role in the performance of large language models. In our research, we delve into the domain/source weightings of training data. Leveraging SlimPajama-DC, we investigate two primary areas: (1) global-level and local-level deduplication, and (2) the efficacy of various combinations of thoroughly deduplicated datasets. The first essentially ensures the model is trained on all sources, since no cross-domain overlaps remain; the second helps us understand how to manage the integration and proportions of diverse domains, especially as datasets for LLM training continue to expand in variety.

Generic Deduplication. Multi-source datasets often combine data from various origins, each with its unique distribution of information. When training large language models, handling data redundancy is critical to ensure that the model generalizes well and does not exhibit undue biases, and it makes training faster and more efficient. Highly deduplicated datasets ensure that the model isn't repeatedly exposed to the same or very similar data points, making the training more efficient. Redundant data can slow down convergence and might make the model overfit to frequently seen patterns. Deduplication helps in efficient utilization of the model's capacity.
In general, deduplication is the process of removing duplicate data to address this redundancy.

Global Deduplication vs. Local Deduplication. The global deduplication process removes duplicates from the entire combined dataset. When we use data from multiple sources, there might be overlaps across sources; global deduplication identifies and removes these overlapping instances irrespective of their source. In local deduplication, duplicates are removed within each individual source dataset before merging them. However, if two source datasets have overlapping data, those duplicates will still be present in the final combined dataset, since deduplication was only done locally within each dataset. In most current open-source LLM training data [7, 36, 38], only local deduplication is performed within each data source, which neglects the redundancy across the different sources. Given these effects, the global deduplication performed in SlimPajama is generally preferable for training large language models, especially when using multi-source datasets: it ensures a balanced representation of information and prevents the pitfalls associated with data redundancy. However, this strategy naturally requires more hardware memory.

Different Combinations of Highly-deduplicated Datasets. A model trained on diverse data is more likely to generalize well across various tasks.
It is exposed to a wider range of vocabulary, syntax, and semantics, enabling it to handle a broad scope of queries. If diverse sources are chosen such that they represent different cultures, beliefs, and demographics, the model might be more balanced and less prone to biases. However, if many sources share common biases, the final dataset might amplify them. Different sources can provide both breadth and depth of knowledge on various topics. Combining a technical dataset with a general news dataset, for example, would allow the model to understand both in-depth technical details and broad general knowledge. It is crucial to note that data quality often outweighs quantity. In this
Itâ s crucial to note that data quality often outweighs the quantity. In this 3 work, we aim to shed light on this fascinating perspective of comprehensive data combination on SlimPajama. Specialization vs. Generalization Trade-off. In general, combining many spe- cialized datasets can lead to a jack-of-all-trades model, which might not be as adept at specific tasks as a model trained on a specialized dataset. While the model can tackle a wide range of tasks, it might not have the depth of un- derstanding that a specialized model might have for a particular domain. In this study, we also explore specialization and generalization ability using both individual and combined data sources.
The remainder of this paper is organized as follows. In Section 2, we elaborate the details of dataset statistics, token distributions, and the data processing procedure. Section 3 describes the dataset combination configurations for this SlimPajama-DC study. Our model architecture and training details are provided in Section 4, followed by results and analysis in Section 5 across a range of tasks in zero- and few-shot settings. Section 6 presents an application of efficient Large Batch-size (LBS) training on a 7B model. Section 7 reviews related work and Section 8 concludes this study.

# 2 Dataset Overview

# 2.1 Number of Tokens

SlimPajama has a total of 627B tokens across different domains, as shown in Table 1. It includes validation and test sets with 500M tokens each, and these have been cleaned to ensure no overlap with the training data. For the SlimPajama-DC study, our entire training dataset for each configuration contains 330B tokens after tokenization, carefully selected from the original SlimPajama dataset. We tested different sampling strategies for different domains of our training data: (1) each token is seen only once during training, as for Commoncrawl, and (2) we perform more than one epoch of training on particular sources, such as the Wikipedia and Github domains. The detailed domain source proportions of various combinations are shown in Table 3.
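To illustrate how a fixed 330B-token budget translates into per-source epochs under a chosen mixture, here is a small bookkeeping sketch. The source sizes and proportions below are placeholder values for illustration only, not the actual SlimPajama-DC numbers.

```python
def epochs_per_source(source_tokens, mixture_weights, total_budget):
    """For each source, compute how many tokens it contributes and how many
    passes (epochs) over that source the budget implies.

    source_tokens: available tokens per source (after deduplication)
    mixture_weights: target sampling proportion per source (sums to 1.0)
    total_budget: total training tokens
    """
    plan = {}
    for name, weight in mixture_weights.items():
        budgeted = weight * total_budget
        plan[name] = {
            "tokens_used": budgeted,
            "epochs": budgeted / source_tokens[name],
        }
    return plan


# Hypothetical example: a CommonCrawl-heavy mixture with upsampled Wikipedia/Github.
sizes = {"commoncrawl": 330e9, "github": 25e9, "wikipedia": 24e9}
weights = {"commoncrawl": 0.758, "github": 0.121, "wikipedia": 0.121}
for name, info in epochs_per_source(sizes, weights, 330e9).items():
    print(f"{name}: {info['tokens_used'] / 1e9:.0f}B tokens, {info['epochs']:.2f} epochs")
```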
[Table 1: Data source proportions for various datasets, comparing SlimPajama, RedPajama, LLaMA-1, RefinedWeb, GPT3, and MassiveText. SlimPajama proportions: Commoncrawl 52.2%, C4 26.7%, GitHub 5.2%, Books 4.2%, ArXiv 4.6%, Wikipedia 3.8%, StackExchange 3.3%.]

# 2.2 Dataset Token Frequency Statistics

To examine the similarity between various datasets in SlimPajama, we calculate the KL divergence between the token-count distributions of pairs of datasets, as shown in Fig. 1a. Given that distinct datasets may emphasize dissimilar token types, we subsequently delve into the differences in the distribution of these datasets across token subsets exhibiting distinct characteristics: (1) tokens exclusively comprising letters (Fig. 1b); (2) the union set of tokens with the top 1000 frequencies on each dataset (Fig. 1c); (3) numbers and commonly used operators, like "30", "+" and "=" (Fig. 1d); (4) whitespace tokens (Fig. 1e); (5) non-alphanumeric tokens, like "#" and "====" (Fig. 1f). There exists a degree of similarity in the distribution of different token subsets among RefinedWeb, Book, C4, and CommonCrawl, as well as between Github and StackExchange. Notably, on the distribution of non-alphanumeric tokens, Arxiv differs significantly from most datasets, while on the distribution of whitespace tokens, RefinedWeb shows notable distinctions in comparison to Github and StackExchange. Among numbers and commonly used operators, the distribution of all datasets is relatively consistent.
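As an illustration of this comparison, a minimal sketch of the pairwise KL computation over token-count histograms is given below. The token counts are toy values and the smoothing constant is an assumption, not a detail taken from the paper.

```python
import math


def kl_divergence(counts_p, counts_q, eps=1e-9):
    """KL(P || Q) between two token-count histograms over a shared vocabulary."""
    vocab = set(counts_p) | set(counts_q)
    total_p = sum(counts_p.values())
    total_q = sum(counts_q.values())
    kl = 0.0
    for tok in vocab:
        p = counts_p.get(tok, 0) / total_p + eps  # smooth to avoid log(0)
        q = counts_q.get(tok, 0) / total_q + eps
        kl += p * math.log(p / q)
    return kl


# Toy example with two tiny "domains".
commoncrawl = {"the": 120, "30": 4, "#": 1}
github = {"the": 40, "30": 2, "#": 55}
print(f"KL(CommonCrawl || Github) = {kl_divergence(commoncrawl, github):.3f}")
```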
# 2.3 Dataset Processing Procedure

SlimPajama was created by filtering low-length documents and applying MinHashLSH deduplication to the 1.2T-token RedPajama dataset to reduce it to 627B tokens. RefinedWeb [27] shows that training on deduplicated data improves training compute efficiency and decreases the chance of LLMs generating memorized text from the dataset. By removing duplicate and low-length examples, it ultimately improves the training compute efficiency and model performance. An overview of the SlimPajama preprocessing pipeline is shown in Fig. 2, and the preprocessing code is available under https://github.com/Cerebras/modelzoo.

| Data source | Document filter rate | Byte duplication rate |
|---|---|---|
| Commoncrawl | 0.02% | 63.76% |
| C4 | 4.7% | 6.85% |
| GitHub | 0.0% | 46.16% |
| Books | 0.0% | 2.01% |
| ArXiv | 0.62% | 0.06% |
| Wikipedia | 0.0% | 2.24% |
| StackExchange | 0.32% | 0.20% |
| Total | 1.86% | 49.60% |

Table 2: Document low-length filter rates and data source byte duplication rates.
[Figure 1: Confusion matrix using KL divergence between the distributions of token statistics for different datasets. Panels: (a) All Tokens; (b) Tokens Composed of Letters; (c) Top 1000 Tokens; (d) Numbers and Commonly Used Operators; (e) Whitespace Tokens; (f) Non-Alphanumeric Tokens.]
[Figure 2: SlimPajama preprocessing pipeline.]

# 2.3.1 Low-length Document Filtering

Additional global filtering is performed to remove short, low-quality documents. After removing punctuation, consecutive spaces, newlines, tabs, and leading or trailing escape characters, documents with fewer than 200 characters were filtered out. These documents typically contain only metadata and no useful information. The low-length filter was applied to every corpus other than Books and GitHub, where short documents were found to be useful. The percentage of documents filtered out from each corpus within the SlimPajama dataset is detailed in Table 2. In total, this additional step removed 1.86% of the documents.
# 2.3.2 Global Deduplication

When building SlimPajama, it was observed that every corpus included in it contained duplicates, with the most significant duplication found in CommonCrawl and GitHub. RefinedWeb [27] also found similar rates of duplication in the CommonCrawl data. It is most common to perform deduplication within each dataset source separately [36, 7, 42, 13] to reduce implementation complexity and meet resource constraints. This local deduplication approach cannot remove overlap between data sources, which can be significant for web-scraped data. Instead, global deduplication removes duplication both within and between data sources. Following [4, 27, 1, 31], global-level deduplication is performed using the MinHashLSH algorithm. To facilitate global deduplication efforts and reproducibility for other researchers, a tool designed for scalable performance is offered under the above link.

Specifically, global MinHashLSH deduplication is performed using a Jaccard similarity threshold of 0.8, document signatures constructed with preprocessed lowercase 13-grams, and a schema following [22]. To unify the representation of the same content, punctuation, consecutive spaces, newlines, tabs, and leading or trailing escape characters are removed. The level of deduplication performed per data source is presented in Table 2. The initial implementation of MinHashLSH did not scale to trillion-token datasets like RedPajama without running out of memory. This is overcome by optimizing memory usage and parallelization to perform deduplication on 64 CPU cores with 1.4TB peak memory usage, which can be easily decreased by creating multiple MinHashLSH objects to query.
# 3 Dataset Combination Configurations

# 3.1 SlimPajama

Combination Strategies. As shown in Table 3, the adjusted domain weights establish a new training distribution. Using this distribution, we adopt a standard training approach to learn a consistent model architecture. This architecture remains unchanged across the various domain weights and is trained on data from the different combination distributions. Across the different setups, we keep the total number of training tokens the same. Our examination of domain weights in large language model training focuses on three main areas: 1) incrementally increasing the diversity of source combinations, as seen in configurations 1, 2, and 3; 2) with consistent data sources, exploring varying domain proportions, as presented in configurations 2, 4, and 5; 3) assessing the significance of individual domain sources for the final model's performance. Note that given the minimal impact of ArXiv and StackExchange, we have opted to omit them from the ablations in configuration 3 to conserve training resources and keep relatively sufficient training tokens for CommonCrawl. The detailed configurations are as follows (also summarized programmatically below):

• Configuration-1: 330B CommonCrawl
• Configuration-2: 300B CommonCrawl + 30B Github
• Configuration-3: 250B CommonCrawl + 30B Github + 26B Books + 24B Wikipedia
• Configuration-4: 250B CommonCrawl + 80B Github (adjusted sampling proportion)
• Configuration-5: 250B CommonCrawl + 80B Wikipedia (adjusted sampling proportion)
• Configuration-6: 330B RefinedWeb CommonCrawl

# 3.2 RefinedWeb

RefinedWeb [27] is a massive English web dataset constructed using rigorous filtering and extensive deduplication of CommonCrawl. We use it as the comparison for our SlimPajama-DC CommonCrawl-only training.
| Sub dataset | DC-1 | DC-2 | DC-3 | DC-4 | DC-5 | DC-6 |
|---|---|---|---|---|---|---|
| SlimPajama Commoncrawl | 100.0% | 90.9% | 75.8% | 75.8% | 75.8% | 0.0% |
| SlimPajama C4 | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| SlimPajama GitHub | 0.0% | 9.1% | 9.1% | 24.2% | 0.0% | 0.0% |
| SlimPajama Books | 0.0% | 0.0% | 7.9% | 0.0% | 0.0% | 0.0% |
| SlimPajama ArXiv | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| SlimPajama Wikipedia | 0.0% | 0.0% | 7.3% | 0.0% | 24.2% | 0.0% |
| SlimPajama StackExchange | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| RefinedWeb Commoncrawl | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 100.0% |
| Total (Tokens) | 330B | 330B | 330B | 330B | 330B | 330B |

Table 3: Six configurations of sub-dataset combinations in SlimPajama.

# 4 Network Architecture and Training Details

# 4.1 Network Architecture

Cerebras-GPT Architecture [11]. The Cerebras-GPT architecture shares similarities with those built on GPT-3 [4], particularly in the use of an autoregressive transformer decoder. However, a key difference lies in the attention mechanism employed: while GPT-3 utilizes a mix of dense and sparse-banded attention, Cerebras-GPT consistently uses dense attention across all decoder blocks. In terms of model dimensions, we either adhere to an aspect ratio of approximately 80 (d_model/n_layers) or maintain dimensions that are congruent with GPT-3 models. Additionally, all of our models are trained to handle a maximum sequence length of 2,048 tokens. The detailed architecture is shown in Table 4.
Alibi [28]. ALiBi introduces a more streamlined and efficient positional approach called Attention with Linear Biases. Rather than adding positional embeddings to word embeddings, ALiBi applies a bias to query-key attention scores, penalizing them based on their distance.

SwiGLU [32]. SwiGLU is an activation function which is a variant of GLU [9]. The formulation is as follows:

SwiGLU(x, W, V, b, c, β) = Swish_β(xW + b) ⊗ (xV + c)    (1)

where x is a vector of the hidden representation at a particular position in the sequence, and W, V, b, c are the weight matrices and bias vectors, respectively.

| Model | n_params | n_layers | d_model | n_heads | d_heads | batch size | learning rate |
|---|---|---|---|---|---|---|---|
| GPT-3 XL | 1.3B | 24 | 2,048 | 24 | 128 | 1M | 2.0×10⁻⁴ |
| Our DC | 1.3B | 24 | 2,048 | 24 | 128 | 2M | 1.2×10⁻² |
| GPT-3 | 6.7B | 32 | 4,096 | 32 | 128 | 2M | 1.2×10⁻⁴ |
| LLaMA | 6.7B | 32 | 4,096 | 32 | 128 | 4M | 3.0×10⁻⁴ |
| Our LBS | 6.7B | 32 | 4,096 | 32 | 128 | 14.3M | 1.8×10⁻⁴ |

Table 4: Detailed model sizes, architectures, and optimization hyper-parameters. Our LBS model details are presented in Sec. 6.
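A compact PyTorch sketch of the SwiGLU feed-forward block in Eq. (1) is given below, with β = 1 (i.e. torch's SiLU) and the bias terms omitted; the output projection and module name are illustrative conventions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwiGLU(nn.Module):
    """SwiGLU(x) = SiLU(x W) * (x V), the gated unit from Eq. (1) with beta = 1."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w = nn.Linear(d_model, d_ff, bias=False)    # corresponds to W (bias b omitted)
        self.v = nn.Linear(d_model, d_ff, bias=False)    # corresponds to V (bias c omitted)
        self.out = nn.Linear(d_ff, d_model, bias=False)  # projection back to the model width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(F.silu(self.w(x)) * self.v(x))


# Example with the 1.3B hidden sizes quoted in Section 4.2 (d_model 2,048, filter size 5,461).
block = SwiGLU(d_model=2048, d_ff=5461)
y = block(torch.randn(2, 16, 2048))  # (batch, sequence, d_model)
print(y.shape)                       # torch.Size([2, 16, 2048])
```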
# 4.2 Training Details

Tokenizer. We use an adapted GPT-NeoX [2] BPE-based tokenizer, similar to that used in GPT-2, for all of our experiments; it has a vocabulary size of 50,277. Our entire training dataset for each configuration contains 330B tokens after tokenization, and each model takes about 2.5 days on a Cerebras 16× CS-2S cluster.

Optimizer. We employ the AdamW optimizer [26] to train our models, adopting these specific hyper-parameters: β1 = 0.9, β2 = 0.95, and eps = 1.0e-08. Our chosen learning rate follows a linear scheduler, culminating in a final learning rate that is 10% of its peak value. Additionally, we apply a weight decay of 0.1, limit the gradient with a clip value of 1.0, and use a 150-step warmup.

Other Hyperparameters. In our model, the filter size is 5,461, the hidden size is 2,048, and the attention dropout rate is 0. SwiGLU is used as the nonlinearity and ALiBi is used for position embedding. Mixed precision and bfloat16 are employed during model training. More hyperparameters are shown in Table 4.

# 5 Results and Analysis

This section presents the analytical experiments and results on different combinations of SlimPajama. We first discuss the results following the Huggingface Leaderboard Evaluation. Then, we demonstrate the importance of global deduplication and a diverse range of data sources in enhancing LLM performance by conducting additional comprehensive evaluations across various topics. Finally, we visualize the training loss curves of different data domain combinations and provide insights on how they connect to the models' performance.

# 5.1 Huggingface Leaderboard Evaluation with Harness

Following the Huggingface Leaderboard Evaluation [12], we also assess our models on four key benchmarks using the Eleuther AI Language Model Evaluation Harness [14]. This unified framework facilitates the evaluation of generative language models across a broad scope of tasks. Specifically, our tests comprise:
1) AI2 Reasoning Challenge (25-shot) [6]: a series of grade-school level science questions.
2) HellaSwag (10-shot) [41]: a benchmark for commonsense inference. While straightforward for humans, with an average accuracy of 95%, it poses challenges for state-of-the-art models.
3) MMLU (5-shot) [16]: designed to assess a text model's multitask proficiency, this test spans 57 diverse tasks, including elementary mathematics, US history, computer science, and law, among others.
4) TruthfulQA (0-shot) [23]: evaluates a model's inclination to echo inaccurate information frequently encountered online. However, it's pertinent to
note that within the Harness, TruthfulQA is essentially a 6-shot task, as it consistently commences with six examples even when the number of few-shot examples is set to zero.

As shown in Table 5, with the exception of DC-5, our average results are all better than RedPajama-1.3B, which is also trained on 330B tokens. Among our combinations, DC-1 (which relies solely on SlimPajama Commoncrawl) achieves the highest scores for ARC and MMLU among all tested configurations, yet its performance on TruthfulQA ranks at the bottom. On the other hand, DC-3 obtains the top average accuracy across all SlimPajama data combinations, while DC-6 stands out with the best results on HellaSwag and superior average performance across the board. A potential strategy to harness the strengths of each configuration might involve a sequential training process on DC-1, DC-3, and DC-6.

Furthermore, SlimPajama is built using global deduplication across all sources. This suggests that merging all domains typically yields better results than selective combinations, given the absence of overlaps among the different domain datasets. This also highlights the importance of global deduplication and a diverse range of data sources in enhancing overall LLM performance.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|---|---|---|---|---|---|
| Cerebras-GPT-1.3B [11] | 33.5 | 26.3 | 38.5 | 26.6 | 42.7 |
| GPT-neo-1.3B [3] | 36.0 | 31.2 | 48.5 | 24.8 | 39.6 |
| RedPajama-1.3B [7] | 38.0 | 37.2 | 55.8 | 24.9 | 34.3 |
| DC-1-1.3B | 38.5 | 36.3 | 56.0 | 27.0 | 34.8 |
| DC-2-1.3B | 38.4 | 33.9 | 55.5 | 25.7 | 38.6 |
| DC-3-1.3B | 38.6 | 34.7 | 56.0 | 25.6 | 38.0 |
| DC-4-1.3B | 38.5 | 35.2 | 54.7 | 25.7 | 38.3 |
| DC-5-1.3B | 37.6 | 33.4 | 53.3 | 26.0 | 37.6 |
| DC-6-1.3B | 41.0 | 35.1 | 64.7 | 26.2 | 37.9 |

Table 5: Results of six dataset combination configurations following the Huggingface Leaderboard Evaluation [12] with Harness [14].

# 5.2 More Evaluations

As shown in Table 6, we present additional evaluations across various domains to investigate the fine-grained capabilities offered by different data combinations. Except for DC-6 (the model trained on RefinedWeb data), incorporating more sources, as in DC-3, typically leads to improved average performance. Upon analysis, we find that specific mixtures excel on particular evaluation benchmarks. For example, DC-1 obtains the highest accuracy on arc challenge and race, DC-3 outperforms the others on wsc273, swag, and pawsx, and DC-5 emerges as the top performer on the xstory cloze evaluation. Moreover, all of our configurations are superior in average performance to GPT-neo-1.3B [3] and RedPajama-1.3B [7].