Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
Definition 2.1 (Skill). A skill s is a unit of behavior with associated data X_s ⊆ X such that if f is trained on a dataset D_s ⊆ X_s, then f has an improved metric L on samples belonging to X_s \ D_s on average. This definition of a skill is flexible: it simply means that given a training dataset associated with the skill, a model f has an improved metric (e.g., decreasing validation loss) when evaluated on validation data associated with this skill. Under this definition, a skill could be a granular task, such as Spanish question generation for a subset of Wikipedia articles, or can be defined over a data source, such as next-token prediction of legal data from tax court rulings. However, our next definition, the ordered skill set, has a more specific construction and provides a framework for how models learn across dependent skills.
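Definition 2.1 suggests a direct empirical check for a candidate skill. The sketch below is our own illustration, not the paper's protocol: `train` and `evaluate` are hypothetical callables standing in for whatever training loop and metric L one uses.

```python
from typing import Callable, Sequence

def satisfies_skill_definition(
    base_model: object,
    train: Callable[[object, Sequence], object],    # (model, D_s) -> model trained on D_s
    evaluate: Callable[[object, Sequence], float],  # (model, samples) -> avg. metric L (e.g., val loss)
    D_s: Sequence,      # training samples associated with the candidate skill
    holdout: Sequence,  # X_s \ D_s, held-out samples for the same skill
) -> bool:
    """Check Definition 2.1: training on D_s should improve L on X_s \\ D_s on average."""
    loss_before = evaluate(base_model, holdout)
    loss_after = evaluate(train(base_model, D_s), holdout)
    return loss_after < loss_before
```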
Definition 2.2 (Ordered skill set, skills graph). An ordered skill set for f is a collection of skills S = {s_1, ..., s_k} over which there is a directed skills graph G = (S, E) on the skill set that is neither complete nor empty, where (s_i, s_j) ∈ E if the amount of data needed to learn s_j when uniformly sampling from D_{s_i} ∪ D_{s_j} is no more than the amount of data needed when sampling only from D_{s_j}. We equate learning a skill s_j to f attaining a certain value of L or lower on average over X_{s_j} \ D_{s_j}. This definition isolates complete and empty graphs as extrema that do not capture meaningful sets of skills. We discuss the three types of skill graphs (complete, empty, intermediate) and their implications for data selection. In particular, we discuss how several initial attempts at defining skills over datasets via semantic groupings resulted in the extrema cases (see Appendix C.2 for full results):

• The complete graph demonstrates that all skills influence each other. A random partition is an example of a skill set that yields a complete graph. This graph suggests that the best approach for learning any skill or set of skills is random sampling on the dataset. This is not a setting where we can gain much with skill-based sampling. For example, using instruction types as skills on the Alpaca dataset results in a nearly complete estimated skills graph (97.4% dense), and we find that stratified sampling on these skills only improves validation loss per skill by 0.007 points over random sampling on average (Figure 2, left), suggesting that utilizing skills does not improve model performance in this case.
Figure 3: On the LEGO synthetic, 3-digit addition, and Natural Instructions, we identify examples of ordered skill sets in which training on a mixture of skills helps learn an individual skill faster than just training on that skill itself, given a fixed training budget.
• The empty graph demonstrates that each skill is independent. This can occur if skills are too granular; for instance, learning Spanish math problems is unlikely to help with English poem generation. This graph suggests that the best approach for learning an individual skill is to train on the skill itself. We see that empty graphs exist in real data; in Figure 2 (center), using data sources as skills on the Pile of Law [21] results in a nearly empty skills graph (3.9% dense).
• Graphs that are neither empty nor complete suggest a nontrivial order in how skills influence each other. This is the setting in which we expect that identifying skills and exploiting their ordering will help the most. In Figure 2 (right), we use task categories, which capture broader reasoning patterns, as skills on Natural Instructions and find that the estimated graph has intermediate density (42.7% dense). We show concrete examples of how skills can be learned more efficiently on Natural Instructions in Section 2.2.

While these intuitive groupings result in ordered skill sets on some datasets (e.g., task categories on NI), this is not always the case (e.g., instruction types on Alpaca and sources on Pile of Law). Even though these groupings capture some notion of diversity in the dataset, our findings suggest that not just any semantic grouping induces an ordered skill set. We now empirically demonstrate that our definition of ordered skill sets aligns with how models learn and can be exploited for more data-efficient training.

# 2.2 Examples of skills and ordered skill sets

We provide examples of ordered skill sets on the LEGO synthetic dataset, an addition synthetic dataset, and subsets of the Natural Instructions dataset. On these datasets, we find that certain skills are better learned when trained along with their prerequisite skills rather than in isolation.

LEGO skills The LEGO synthetic, first introduced in [72], evaluates a model's ability to follow a chain of reasoning. In this synthetic, the letters of the alphabet, A, are variables, each with a binary label in {0, 1}. An individual sample consists of k clauses for some fixed k across the dataset, each of the form a = g x where a, x ∈ A and g is either a negation ("not") or an assertion ("val"); that is, we assign a to the value of x, or we assign a to the opposite label. At the end of the sentence, we prompt the model for the value of one of these variables. Two samples x ∈ X are given below for k = 5:

Input: b = not y, r = val 1, m = val b, q = val m, y = not r. Output: b = 1.
Input: c = val x, p = val f, x = val k, f = not c, k = val 0. Output: k = 0.

These samples each correspond to a chain of reasoning; for instance, the first sample has the chain r, y, b, m, q, where knowing q's label requires the most reasoning steps. We define the i-th skill s_i as the model's ability to know the i-th variable of the chain. From the examples above, the first sample belongs to X_{s_3} and the second sample belongs to X_{s_1}.
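To make the construction concrete, here is a minimal generator for LEGO-style samples in the spirit of the description above. It is an illustrative sketch of our own: the exact clause distribution and formatting in [72] may differ (e.g., we always anchor the chain root with a "val" constant).

```python
import random
import string

def lego_sample(k: int = 5, seed: int | None = None) -> tuple[str, str, int]:
    """Return (input string, answer, skill index i of the queried chain position)."""
    rng = random.Random(seed)
    names = rng.sample(string.ascii_lowercase, k)
    root_bit = rng.randint(0, 1)
    labels = {names[0]: root_bit}
    clauses = [f"{names[0]} = val {root_bit}"]        # chain root is assigned a constant bit
    for prev, cur in zip(names, names[1:]):
        op = rng.choice(["val", "not"])               # assert or negate the previous variable
        labels[cur] = labels[prev] if op == "val" else 1 - labels[prev]
        clauses.append(f"{cur} = {op} {prev}")
    rng.shuffle(clauses)                              # clause order is scrambled in the prompt
    depth = rng.randrange(k)                          # queried chain position -> skill s_{depth+1}
    query = names[depth]
    return "Input: " + ", ".join(clauses) + ".", f"{query} = {labels[query]}", depth + 1
```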
To demonstrate the existence of ordered skill sets, we continually pre-train the 125M parameter GPT-Neo model [5, 13] over various mixtures of LEGO skills with k = 5. In Figure 3 (left), we find that in 35.9% fewer training steps, training on a balanced mixture of X_{s_1}, X_{s_2}, and X_{s_3} reaches the same validation loss of 0.01 as training solely on X_{s_3}. This suggests that s_1 and s_2 helped unlock performance on s_3 and that there exist edges from s_1 or s_2 to s_3 in the skills graph. Additional observations are available in Appendix D.1, where we examine other edges as well as more complex reasoning chains; the full skills graph corresponding to the ordered skill set for LEGO with k = 5 is in Figure 10.

Addition skills We consider a variant of a synthetic 5-digit addition dataset analyzed in [44]. We show the existence of ordered skill sets for a simplified 3-digit addition dataset where we treat each digit prediction as a skill; the outputs, in this case, are the integers {0, 1, ..., 9}. Examples are of the following form:

Input: A = 1 0 6 + 0 7 1 , A 0 = ? Output: 7
Input: A = 6 0 6 + 8 7 9 , A 2 = ? Output: 4

where "A 0" refers to the ones digit of the output (s_1) and "A 2" refers to the hundreds digit (s_3).
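A sketch of the corresponding sample generator, checked against the two examples above (our own illustration; the formatting in [44] and in the paper's appendix may differ):

```python
import random

def addition_sample(n_digits: int = 3, seed: int | None = None) -> tuple[str, str]:
    """Return (prompt, answer) querying one output digit; digit position i is skill s_{i+1}."""
    rng = random.Random(seed)
    a = rng.randrange(10 ** n_digits)
    b = rng.randrange(10 ** n_digits)
    pos = rng.randrange(n_digits)              # 0 = ones digit (s_1), 2 = hundreds digit (s_3)

    def spaced(x: int) -> str:                 # zero-pad and space out digits: 71 -> "0 7 1"
        return " ".join(str(x).zfill(n_digits))

    prompt = f"Input: A = {spaced(a)} + {spaced(b)} , A {pos} = ?"
    answer = str((a + b) // 10 ** pos % 10)    # extract the queried digit of the sum
    return prompt, answer
```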
In Figure 3 (center), we find that in 32% fewer training steps, training on a balanced mixture of X_{s_1} and X_{s_2} reaches the same validation loss of 0.01 as training solely on X_{s_1}. That is, the ones-digit addition skill can be improved by simultaneously learning the tens-digit addition skill, even though the former should not require information from the latter; this is in line with observations from prior work that models do not always learn ones-digit addition first [44]. The full skills graph corresponding to the ordered skill set over 3-digit addition is in Figure 11.

Natural Instructions (NI) skills We show that ordered skill sets exist in NI [63] when we treat task categories as skills.
• In Figure 3 (top right), we show that ordered skill sets exist over crosslingual task categories. Training on Spanish question generation (QG) along with equal parts of English QG, Spanish question answering (QA), and English QA results in 4.1% lower validation loss than training only on Spanish QG. Remarkably, the former only uses 25% of the latter's Spanish QG data. This suggests that there are edges from Spanish QA, English QA, and English QG to Spanish QG.
• In Figure 3 (bottom right), we see that training on the task category Text Matching along with Stance Detection helps decrease the loss on Stance Detection by 11%. This suggests that these categories, which both involve understanding the relationship between two input texts, share an edge.

The full skills graphs corresponding to the ordered skill sets over these task categories are in Figure 13. While equating task categories to skills may be noisy, these examples suggest that real data contains signal that ordered skill sets can improve data efficiency.

# 2.3 Skill recovery

A final component of characterizing skills is unsupervised recovery of ordered skill sets. We consider embedding-based clustering approaches and a loss-based clustering approach for recovering LEGO skills. When clustering data using various trained and pre-trained embeddings, we find that they were unable to achieve above 39% accuracy on LEGO. Instead, we find that taking 10 random training runs and clustering data points by their loss per timestep per run recovers the skills with 61% accuracy (Table 3). The intuition behind this method is that the validation losses on points from the same skill have similar trajectories as models learn. We discuss this approach further in Appendix D.2.
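The loss-trajectory clustering can be sketched as follows, assuming per-sample losses have already been logged across runs. The featurization and clustering choices here (flattening plus per-sample normalization and k-means) are our assumptions; the paper's exact procedure is in Appendix D.2.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_by_loss_trajectory(loss_curves: np.ndarray, n_skills: int, seed: int = 0) -> np.ndarray:
    """Assign a cluster id per sample from loss trajectories of shape (n_samples, n_runs, n_steps)."""
    n_samples = loss_curves.shape[0]
    feats = loss_curves.reshape(n_samples, -1)   # concatenate every run's per-step losses
    mu = feats.mean(1, keepdims=True)
    sd = feats.std(1, keepdims=True)
    feats = (feats - mu) / (sd + 1e-8)           # compare trajectory shape, not absolute scale
    return KMeans(n_clusters=n_skills, n_init=10, random_state=seed).fit_predict(feats)
```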
# 3 Skills-based data selection

Now that we have established the existence of ordered skill sets, we discuss how to use them for data selection. We state the data selection problem for learning across skills in Section 3.1. We discuss how to learn the skills graph that will be exploited in our data selection methods in Section 3.2. We then introduce two sampling methods that utilize the graph, a simple skill-stratified sampling method and the online sampling method SKILL-IT, in Section 3.3.

# 3.1 Problem statement

We are given an ordered training skill set S_train = {s_train,1, ..., s_train,k} on the training data, each skill with an associated support set X_{s_train,1}, ..., X_{s_train,k}, and an ordered evaluation skill set S_eval = {s_eval,1, ..., s_eval,m} of m evaluation skills on a separate evaluation dataset. We aim to select n samples from S_train via a mixture of training skills, p ∈ Δ^{k−1}, to achieve three goals depending on how S_eval is constructed:

• Continual pre-training: when S_eval = S_train, our goal is to select a mixture of training skills to learn all of them.
• Fine-tuning: when S_eval ⊂ S_train, our goal is to select a mixture of training skills to learn an individual target skill or a subset of these skills.
• Out-of-domain: when S_eval ∩ S_train = ∅, our goal is to select a mixture of training skills to learn a disjoint set of evaluation skills we cannot train on. This can arise when we have a separate downstream validation dataset or when the skills identified in the training dataset are noisy.

Furthermore, we have a skills graph G = (S_train ∪ S_eval, E), where E ⊆ S_train × S_eval and A ∈ R^{k×m} is a weighted adjacency submatrix in which A_ij describes the strength of the edge from s_train,i to s_eval,j. In Table 1, we summarize how the three different settings are constructed and how A varies across them. Next, we discuss how A can be estimated from the data.

# 3.2 Skills graph learning

The skills graph is important for determining how to sample from the ordered skill set for training efficiently.
We present two approaches for learning the skills graph: brute-force and linear approximation. Algorithms are provided in Appendix B.2. By Definition 2.2, the brute-force way of identifying edges involves fixing an overall training budget of H steps and 1) training and evaluating the model on each s_i, and 2) training the model on each pair (s_i, s_j) and evaluating on s_i and s_j. If the loss on s_j when trained on both s_i and s_j is lower, there exists an edge from s_i to s_j. This approach has runtime O(Hk^2), which is feasible for small k. When k is large, we can approximate this approach in linear time by training on each s_i for h < H steps and setting A_ij > 0 if the loss on s_j decreases over the h steps, giving a runtime of O(hk). This linear approach is necessary in the out-of-domain setting, where S_eval and S_train are disjoint, since we do not train on data associated with S_eval. In addition, both graph learning approaches can be performed on a smaller model, and the learned graph can be used for data selection when training a larger model (Appendix D.4).
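A sketch of the linear-time variant is below. The function name, the callables, and the choice of loss decrease as the edge weight are our assumptions; the paper's exact procedures are Algorithms 2 and 3 in Appendix B.2.

```python
import numpy as np
from typing import Callable, Sequence

def learn_graph_linear(
    train_sets: Sequence,                                   # one dataset per training skill (k of them)
    init_model: object,
    train_steps: Callable[[object, object, int], object],   # (model, data, steps) -> trained model
    eval_losses: Callable[[object], np.ndarray],            # model -> length-m vector of eval losses
    h: int,
) -> np.ndarray:
    """Estimate A in O(hk): train on each skill alone and record which eval losses drop."""
    before = eval_losses(init_model)
    A = np.zeros((len(train_sets), len(before)))
    for i, data in enumerate(train_sets):
        model = train_steps(init_model, data, h)            # fresh h-step run from the same init
        drop = before - eval_losses(model)
        A[i] = np.clip(drop, 0.0, None)                     # A_ij > 0 iff loss on skill j decreased
    return A
```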
Table 1: Summary of the three settings: continual pre-training, fine-tuning, and out-of-domain. These settings are determined by how S_eval is defined and result in different skills graphs used for our sampling methods.

| Setting | S_eval | Skills graph |
| --- | --- | --- |
| Continual pre-training | S_eval = S_train | A ∈ R^{k×k}, edges among all S_train |
| Fine-tuning | S_eval ⊂ S_train | A ∈ R^{k×m}, edges from all training skills to the target skill subset |
| Out-of-domain | S_eval ∩ S_train = ∅ | A ∈ R^{k×m}, edges from all training skills to the separate evaluation skill set |

Algorithm 1: SKILL-IT Online Data Selection Algorithm
1: Input: Ordered training skill set S_train, ordered evaluation skill set S_eval; learning rate η, T rounds, n samples, H training steps per run for graph learning, model f_1, window parameter w.
2: A ← LEARNGRAPH(S_train, S_eval, H, f_1) (Alg. 2, 3).
3: Initialize p^i_1 ∝ exp(η Σ_{j=1}^{m} A_ij) for all i ∈ [k], the softmax over A.
4: for t = 1, ..., T − 1 do
5:   Observe losses L_eval,j(f_t) for all s_eval,j ∈ S_eval.
6:   Train model f_t with n/T samples from mixture p_t over S_train. Update model f_{t+1} = Φ(f_t, p_t).
7:   Set p^i_{t+1} ∝ exp(η Σ_{τ=max(1, t−w+1)}^{t} Σ_{j=1}^{m} A_ij L_eval,j(f_τ)) for all i ∈ [k].
8: end for

# 3.3 Skills graph-aware sampling

We present two approaches for sampling over the mixture of training skills according to the skills graph: skill-stratified sampling, which samples uniformly over relevant training skills according to A, and SKILL-IT, an online generalization that incorporates knowledge of how skills are being learned throughout training.
# 3.3.1 Skill-stratified sampling

A straightforward sampling approach is to discard training skills that do not benefit the evaluation skills and sample uniformly over the set of relevant training skills, which we call skill-stratified sampling. For continual pre-training, the relevant skills are the entire training skill set: for each s_train,i ∈ S_train, Pr(s_train,i) = 1/k. This enables each skill to have sufficient training data. For fine-tuning, the relevant skills are the target skills and their prerequisite skills, which can be identified via the positive entries of the columns of A corresponding to the target skills, i.e., S_prereq = {s_train,i : ∃ s_eval,j s.t. A_ij > 0}. We then set Pr(s) = 1/|S_prereq ∪ S_eval| for s ∈ S_prereq ∪ S_eval. For the out-of-domain setting, skill-stratified sampling is over the set of prerequisite skills: for each s ∈ S_prereq, we set Pr(s) = 1/|S_prereq|.
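As a sketch, the stratified weights for the three settings can be computed as below; the index handling is our own simplification of the definitions above (for fine-tuning, pass the training-skill indices of the targets via target_rows so they are unioned with the prerequisites).

```python
import numpy as np

def stratified_weights(A: np.ndarray, target_cols: list[int] | None = None,
                       target_rows: list[int] | None = None) -> np.ndarray:
    """Uniform mixture over relevant training skills, given the k x m adjacency A.

    target_cols: evaluation skills of interest (None = continual pre-training);
    target_rows: indices of the target skills inside S_train (fine-tuning only).
    """
    k = A.shape[0]
    if target_cols is None:
        return np.full(k, 1.0 / k)                  # continual pre-training: Pr(s_i) = 1/k
    relevant = (A[:, target_cols] > 0).any(axis=1)  # prerequisite skills S_prereq
    if target_rows is not None:
        relevant[target_rows] = True                # S_prereq union S_eval
    return relevant / relevant.sum()                # uniform over the relevant set
```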
# 3.3.2 SKILL-IT online data selection algorithm

Despite accounting for prerequisite skills, one shortcoming of skill-stratified sampling is that even if a skill has already attained sufficiently low validation loss early in training, we continue to allocate the same weight to that skill throughout training. We therefore formulate our data selection problem as an online learning problem and propose SKILL-IT, which prioritizes both prerequisite skills and skills that are not yet learned.

We are given a budget of T rounds and n total samples to train on. At round t, we select a mixture p_t ∈ Δ^{k−1} from the k-dimensional unit simplex, and for each training skill s_train,i ∈ S_train we sample from X_{s_train,i} with proportion p^i_t, for a total of n/T samples per round. Let f_t be the model at the start of round t. We can define f_t recursively as a function of the previous round's model f_{t−1} and mixture p_{t−1} via a dynamics function Φ : F × Δ^{k−1} → F; that is, f_t = Φ(f_{t−1}, p_{t−1}).
Let L_eval,j(f_t) be the validation loss of f_t on s_eval,j. Our goal is to select p_1, ..., p_T to minimize the loss per evaluation skill at the end of training:

    minimize_{p_1,...,p_T} (1/m) Σ_{j=1}^{m} L_eval,j(f_T).    (1)

This optimization problem is challenging to solve without additional assumptions. In order to make the problem tractable, we impose an explicit dynamics rule for each evaluation skill's loss L_eval,j in terms of the current loss and data mixture. Assuming for simplicity that S_eval ⊆ S_train, a simple rule would be L_eval,j(f_t) = L_eval,j(Φ(f_{t−1}, p_{t−1})) := L_eval,j(f_{t−1})(1 − α p^j_{t−1}) for α ∈ [0, 1]. That is, we expect that allocating more data to skill j should result in the validation loss on skill j decreasing. However, such an expression assumes that only training on the jth skill will help learn the jth skill. Instead, Section 2.2 suggests that there are other skills that may help with the jth skill. We propose the following dynamics:

    L_eval,j(f_t) = L_eval,j(f_{t−1})(1 − A_{:,j}^T p_{t−1}),    (2)

where A_{:,j} is the column with the weights of all skills that influence s_eval,j, and we absorb the scalar α into A. The optimization problem in (1) can thus be simplified as follows:

    minimize_{p_1,...,p_T} (1/m) Σ_{j=1}^{m} L_eval,j(f_T)    (3)
    s.t. f_t = Φ(f_{t−1}, p_{t−1})  ∀ t = 1, ..., T,
         L_eval,j(f_t) = L_eval,j(f_{t−1})(1 − A_{:,j}^T p_{t−1})  ∀ j ∈ [m].

In Appendix B, we derive the following update rule via online mirror descent [45] for learning rate η > 0:

    p^i_{t+1} ∝ p^i_1 exp( η Σ_{τ=1}^{t} Σ_{j=1}^{m} A_ij L_eval,j(f_τ) ).    (4)

Since this summation over τ results in diminishing strength of updates, we change it to a moving window of size w. Our full method is in Algorithm 1. Intuitively, at each step we adjust the weight on skill i based on the losses of the skills that i influences, with the assumption that more training data helps decrease loss.
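As an illustrative sketch (our own, not the paper's code), the windowed version of the Eq. (4) update can be computed as follows; the η and w values are placeholder assumptions, and in Algorithm 1 this update is interleaved with training and observing losses:

```python
import numpy as np

def skill_it_update(A: np.ndarray, loss_history: list, eta: float = 0.2, w: int = 3) -> np.ndarray:
    """Compute the next mixture p_{t+1} from the k x m graph A and the per-round
    evaluation-loss vectors L_eval(f_tau) collected so far (Eq. 4 with a moving window)."""
    windowed = np.sum(loss_history[-w:], axis=0)  # sum losses over the last w rounds
    logits = eta * (A @ windowed)                 # skill i scored by losses of skills it influences
    logits -= logits.max()                        # shift for a numerically stable softmax
    p = np.exp(logits)
    return p / p.sum()
```

With a complete graph (A all ones), the logits are identical across skills and the mixture collapses to uniform, i.e., stratified sampling; with only self-edges (A diagonal), each skill's weight depends only on its own recent loss, matching the reductions discussed next.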
Note that when we use our algorithm with a complete graph or an empty graph, we recover the expected behavior discussed in Section 2.1: for the complete graph, our algorithm reduces to stratified sampling, and for a skill set with an empty graph, the update rule reduces to sampling proportionally to each skill's validation loss.

# 4 Experimental results

Given an ordered skill set, we aim to validate SKILL-IT's ability to select data for efficiently learning skills in the continual pre-training, fine-tuning, and out-of-domain settings. We provide full tables of results in Appendix D.3.1, and results where we learn the skills graph on the 125M parameter model and use it for the 1.3B parameter model in Appendix D.4. Skills graphs are in Appendix C.2, weight trajectories for SKILL-IT are in Appendix D.3.2, and ablations on the graph and online components of SKILL-IT are in Appendix D.5.

# 4.1 Continual pre-training

Setup We evaluate the ability of SKILL-IT to select data for efficiently learning over all skills. We measure average validation loss per skill after a fixed number of training steps. We construct the LEGO synthetic and the addition synthetic with k = 5 and k = 3, respectively, with an imbalanced dataset over the skills. On the Natural Instructions dataset, we use 23 of the task categories as skills.

Baselines We compare SKILL-IT against three baselines that do not account for skills: random sampling, curriculum learning, and anticurriculum learning. Random sampling is a standard procedure for selecting samples given no additional information. Curriculum learning [3] and anticurriculum learning [67] score the samples from easiest to hardest and vice versa, respectively, and sample over an expanding set of the lowest-scored samples at every epoch; we use the pre-trained model's loss to rank points. We also evaluate skill-stratified sampling, which uses knowledge of the skills but is not online, and include an additional skills curriculum baseline in Appendix D.3.1.
Figure 4: Performance of SKILL-IT on each skill in the continual pre-training setting (learning over all skills in the ordered training skill set) on the LEGO synthetic (left) and addition synthetic (right).

Analysis Our results are shown in Figure 4. Across our experiments we find that SKILL-IT outperforms baselines that do not use skills, as well as skill-stratified sampling. On the LEGO dataset, all three baselines that do not utilize a notion of skills exhibit plateauing loss on four of the skills. Both skill-stratified sampling and SKILL-IT are able to significantly reduce loss on all skills, but the former is slower. Halfway through training, SKILL-IT exhibits an accuracy improvement between 9.9 and 25.9 points over other approaches, reaching a final accuracy of 99.4 (Figure 19). SKILL-IT outperforms skill-stratified sampling by initially allocating more weight to prerequisite skills and eventually allocating more weight to skills that are learned more slowly (Figure 20). On the addition synthetic with k = 3, SKILL-IT converges to near-zero validation loss faster than the baselines on skills 1 and 2. While the random baseline may seem competitive at first glance, it fails to learn skill 1 (adding together the ones digits), which hurts its average loss per skill. On NI, the validation loss from SKILL-IT is 3.2% lower than from random sampling (Table 7). Our results suggest that exploiting the construction and ordering of skills is critical to learning skills quickly.

# 4.2 Fine-tuning
Figure 5: Performance of SKILL-IT in the fine-tuning setting (learning a target skill using the ordered training skill set) on LEGO, addition, and NI.

Setup We evaluate the ability of SKILL-IT to select data from an ordered training skill set for learning a target skill. Mirroring Figure 3, we evaluate on LEGO target skill 3 (third in the reasoning chain), on the addition synthetic's skill 1 (ones-place digit addition), and on NI's Spanish QG and Stance Detection.

Baselines We compare SKILL-IT against training on the target skill only and against skill-stratified sampling over prerequisite skills and the target skill. The skill-stratified sampling approach uses the ordered skill set to identify prerequisite skills, but does not exploit them dynamically.

Analysis Our results are shown in Figure 5. On LEGO, SKILL-IT reaches the same validation loss of 0.01 as training only on the target skill in 38.1% fewer steps. We observe a similar trend on addition, with SKILL-IT converging to a validation loss of 0.01 in 59% fewer steps than training only on the target skill. Finally, on NI, SKILL-IT improves validation loss on Spanish question generation by 5.3% and on Stance Detection by 13.6% over just training on the respective target skill. In this setting, a significant portion of the improvement over training only on the target skill comes from the identification of prerequisite skills through the learned graph in the skill-stratified sampling method. SKILL-IT is further able to improve performance with finer-grained dynamic weighting of the prerequisite skills.
# 4.3 Out-of-domain setting

Natural Instructions We evaluate the ability of SKILL-IT to select data from a set of training skills for learning a disjoint set of evaluation skills that we cannot train on. We use all 59 task categories in the NI train tasks split as the training skills and the 12 task categories in the test tasks split as our evaluation skills. We compare SKILL-IT against random and skill-stratified sampling, both of which do not exploit the relationships between training skills and evaluation skills. SKILL-IT achieves the lowest loss on 11 out of 12 task categories over random and skill-stratified sampling (Figure 6; tables in Appendix).

Figure 6: Performance of SKILL-IT in the out-of-domain setting for the NI test task split. SKILL-IT uses the graph between the train and evaluation skills to produce an online mixture on the training dataset.

RedPajama We use SKILL-IT to produce a data mixture on the RedPajama dataset. The training skills are the data sources comprising the dataset, and the evaluation skills are several tasks from the Language Model Evaluation Harness [14]. SKILL-IT with T = 1 (i.e., a static, graph-based mixture) yields the mixture in Figure 7 (right). We continually pre-train a 3B parameter model, originally trained on one trillion tokens, for three billion additional tokens using this mixture, and find that it outperforms uniform sampling over the data sources (Figure 7, left). In particular, SKILL-IT achieves higher accuracy with 1B additional tokens than uniform sampling with 3B additional tokens.

| RedPajama source | SKILL-IT mixture |
| --- | --- |
| ArXiv | 0.1370 |
| Books | 0.0437 |
| C4 | 0.4195 |
| CommonCrawl | 0.0732 |
| GitHub | 0.189 |
| StackExchange | 0.0892 |
| Wikipedia | 0.0484 |

Figure 7: Left: Accuracy on LM Evaluation Harness for continual pre-training of a 3B parameter model using SKILL-IT on the RedPajama dataset. We achieve higher accuracy at 1B additional tokens than uniform at 3B tokens. Right: SKILL-IT mixture over RedPajama sources.
# 5 Related work

Data selection for LMs There have been several studies of large-scale data selection for LMs.
Data deduplication [1, 22, 32], in which identical or nearly identical samples are removed, enables LMs to be trained on smaller, cleaned datasets and has been increasingly used as a pre-processing step for training data [4, 59, 71]. Other methods applied at scale ensure high data quality by explicitly filtering out samples or comparing the training dataset with a cleaned reference dataset [7, 31, 59]. Importance reweighting approaches have also been proposed for identifying training data from a large corpus that best approximates a smaller target distribution [69], and influence functions have been used to select a subset of training data to improve performance on downstream tasks [61].
These approaches can identify data pertaining to a particular target distribution or filter out low-quality data according to some heuristic, while our work aims to understand how the choice of data is related to the numerous skills that LMs learn. Recent development of LMs has shifted focus from emphasizing the scale of the model to prioritizing the training data utilized. For example, models like Alpaca [56], Vicuna [9], and Koala [15] are all based on the LLaMA model combined with instruction data generated by an existing LM.
PaLM 2's technical report states that the data mixture was a critical component of the final model [17], and MosaicML's recent MPT model was trained on a hand-engineered mixture of the RedPajama dataset [42]. However, these works lack a rigorous explanation of why their training datasets were constructed in this way. Finally, perhaps most related to our approach is the contemporary work DoReMi [68], which uses group distributionally robust optimization on a smaller LM to select data source mixtures for training a larger LM. Their approach focuses on selecting data at the data-source level to optimize worst-case performance across the training data sources, rather than at the more general skills level for a variety of target skill sets. Furthermore, we focus on understanding how skills are related to each other and induce some order in how LMs learn by explicitly modeling the skill graph structure, which we find to be important for data-efficient LM training (see ablations in Appendix D.5).

Data selection methods Many data selection methods have been proposed for supervised, task-specific settings. In this setting, the most typical objective is dataset condensation, which aims to identify a small subset of data that captures the larger dataset's properties with respect to the model. Some approaches include constructing coresets [30, 47]; identifying samples that the model forgets during training [58]; identifying samples with the largest gradients [46] or gradients that approximate the overall gradient [39]; clustering in embedding space and selecting points farthest from cluster centers [53]; and selecting samples with the highest uncertainty or entropy [33]. These approaches have also been shown to transfer from smaller models to larger models [10]. Unlike these methods, we study how to select data for learning one or many skills at the mixture level for LMs instead of at the instance level.

Another area of interest is data selection for domain adaptation and multitask learning. For domain adaptation, there is a wide range of methods that select data to best match the target distribution. For example, the Moore-Lewis method matches data based on the difference in cross-entropy using a model trained on the target versus a model trained on the source data [41]. Several other approaches suggest training a model to distinguish between source and target and selecting points with high uncertainty [50], or selecting points based on some divergence in an embedding space [51].
In comparison to these approaches, our work focuses on learning one or many skills and also finds that embedding-based heuristics do not fully identify skills.

Data attribution Another perspective on understanding training data is data attribution, which seeks to identify what data is responsible for particular model behaviors. Influence functions [28] and Shapley values [16] are two ways to quantify the role of individual samples. Datamodels [23] fit a model to predict behavior given a subset of training data, providing a framework for understanding individual samples as well as dataset counterfactuals. Simfluence [20] fits a Markov process to a set of training trajectories for finer-grained understanding of how data impacts training. We focus on understanding how groups of data associated with skills elicit broader model capabilities, and utilize this understanding to select data for more efficient training.

Curriculum learning Curriculum learning [3] proposes to show the model data in order from easy samples to hard ones. Various criteria have been used to determine hardness, and anticurriculum as well as various pacing functions and mixing rates have been explored [54]. Curriculum learning can also be performed at the group level [60]. More sophisticated approaches include parametrizing each sample with a dynamic importance [52] and accounting for irrelevant and noisy data [38]. Our approach similarly utilizes a curriculum, but it is defined over a skills graph and does not necessarily align with training on easiest to hardest skills.

How LMs learn Many different explanations for how LMs learn from data have been proposed. One hypothesis is that there exist discrete, universal building blocks of LM knowledge called quanta, and power-law scaling emerges from learning over a particular distribution of quanta in the right order [37]. Another is that chain-of-thought reasoning emerges due to local clusters of latent variables that influence each other, which can be validated by studying the LM's ability to do conditional inference given intermediate variables [48]. Others have provided theoretical analysis of how transformers learn topics by studying co-occurrences of words in the training data [34]. Empirically, how models learn is still a mystery; for instance, models trained on code are found to perform fairly well at commonsense reasoning [36]. Our work initiates a study of how LMs learn various skills and how to exploit this for better data selection.
Task selection In multitask auxiliary learning, the goal is to train a model to perform well on one or more target tasks by selecting the most beneficial source tasks to train on. One can use feature similarity to select tasks [29], but we find in our synthetics that feature similarity does not always recover skills. In Taskonomy [70], a hypergraph over a set of tasks is learned and used to select tasks. The methods used to develop the taxonomy can be applied to further expand our graph learning (e.g., studying transitive and higher-order properties). However, their focus is on task selection in computer vision rather than data selection for LMs to learn skills. Lastly, the contemporary work TaskWeb [24] builds a graph among 22 common NLP tasks in order to determine the best source tasks for a target task.
Their definition of an edge in the task graph is less strict than ours (their comparison is based on whether training on additional data from s_i helps with s_j, while we fix the overall amount of training data over both s_i and s_j). Overall, our approach is similar in its use of the skills graph, but we incorporate it into a dynamic sampling algorithm. Furthermore, we look more broadly at skills, rather than tasks, and characterize when we expect using the skills graph to improve model performance.

Education The notion of skill has been studied in education. Classical research on learning hierarchies [66] identifies sets of skills that make up subordinate capabilities for students. For instance, [12] identified that in order for students to solve linear equations, there are many prerequisite skills, ranging from the simplest, symbol recognition, to the most complex, the ability to add, subtract, multiply, and divide on both sides of an equation. More recently, decision-making over lesson sequences based on skills, e.g., what the student already knows versus what the lesson teaches, has become an area of interest in personalized learning [49].
# 6 Conclusion

Given a fixed budget of data, knowing what data to train on to induce various capabilities in an LM is challenging. As LMs continue to improve, it will become increasingly important to extract as much signal as possible from the data and to direct that signal towards acquiring a broad variety of capabilities. In this paper, we introduce a skills-based framework for understanding how LMs learn and for selecting training data. We hope our study invites others to build on such a notion of skill and further explore how to align skills with data.

# Acknowledgements

We thank Together Computer (https://together.xyz/) for providing portions of the compute used to train models in this paper. We thank Sabri Eyuboglu, Karan Goel, Arjun Desai, Neel Guha, Michael Zhang, Vishnu Sarrukai, Simran Arora, Ben Spector, Brandon Yang, Gautam Machiraju, and Sang Michael Xie for their helpful feedback and discussion. We gratefully acknowledge the support of NIH under No. U54EB020405 (Mobilize); NSF under Nos. CCF1763315 (Beyond Sparsity), CCF1563078 (Volume to Velocity), and 1937301 (RTML); US DEVCOM ARL under No. W911NF-21-2-0251 (Interactive Human-AI Teaming); ONR under No. N000141712266 (Unifying Weak Supervision); ONR N00014-20-1-2480 (Understanding and Applying Non-Euclidean Geometry in Machine Learning); N000142012275 (NEPTUNE); NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, Google Cloud, Salesforce, Total, the HAI-GCP Cloud Credits for Research program, the Stanford Data Science Initiative (SDSI), and members of the Stanford DAWN project: Facebook, Google, and VMWare. FS is supported by NSF CCF2106707 and the Wisconsin Alumni Research Foundation (WARF). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of NIH, ONR, or the U.S. Government.

# References

[1] Amro Abbas, Kushal Tirumala, Dániel Simig, Surya Ganguli, and Ari S. Morcos. SemDeDup: Data-efficient learning at web-scale through semantic deduplication, 2023.
[2] Yuntao Bai, Andy Jones, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback, 2022.
[3] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48, 2009.
[4] Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023.
[5] Sid Black, Gao Leo, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, March 2021. If you use this software, please cite it using these metadata.
[6] Rishi Bommasani, Percy Liang, et al. On the opportunities and risks of foundation models, 2021.
[7] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
[8] Mark Chen, Jerry Tworek, et al. Evaluating large language models trained on code, 2021.
[9] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023.
[10] Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, and Matei Zaharia. Selection via proxy: Efficient data selection for deep learning, 2019.
[11] Robert M. Gagne. The acquisition of knowledge. Psychological Review, 69(4):355, 1962.
[12] Robert M. Gagne and Noel E. Paradise. Abilities and learning sets in knowledge acquisition. Psychological Monographs: General and Applied, 75(14):1, 1961.
[13] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
[14] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021.
[15] Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. Koala: A dialogue model for academic research. Blog post, April 2023.
[16] Amirata Ghorbani and James Zou. Data Shapley: Equitable valuation of data for machine learning. In International Conference on Machine Learning, pages 2242–2251. PMLR, 2019.
[17] Google. PaLM 2 technical report. Technical report, 2023.
[18] Anupam Gupta. Advanced algorithms: Notes for CMU 15-850 (Fall 2020), 2020.
[19] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. Don't stop pretraining: Adapt language models to domains and tasks. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020.
[20] Kelvin Guu, Albert Webson, Ellie Pavlick, Lucas Dixon, Ian Tenney, and Tolga Bolukbasi. Simfluence: Modeling the influence of individual training examples by simulating training runs, 2023.
[21] Peter Henderson*, Mark S. Krass*, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, and Daniel E. Ho. Pile of Law: Learning responsible data filtering from the law and a 256GB open-source legal dataset, 2022.
[22] Danny Hernandez, Tom Brown, Tom Conerly, Nova DasSarma, Dawn Drain, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Tom Henighan, Tristan Hume, Scott Johnston, Ben Mann, Chris Olah, Catherine Olsson, Dario Amodei, Nicholas Joseph, Jared Kaplan, and Sam McCandlish. Scaling laws and interpretability of learning from repeated data, 2022.
[23] Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry. Datamodels: Predicting predictions from training data, 2022.
[24] Joongwon Kim, Akari Asai, Gabriel Ilharco, and Hannaneh Hajishirzi. TaskWeb: Selecting better source tasks for multi-task NLP, 2023.
[25] Hannah Rose Kirk, Yennie Jun, Filippo Volpin, Haider Iqbal, Elias Benussi, Frederic Dreyer, Aleksandar Shtedritski, and Yuki Asano. Bias out-of-the-box: An empirical analysis of intersectional occupational biases in popular generative language models. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 2611–2624. Curran Associates, Inc., 2021.
[26] Nikita Kitaev, Steven Cao, and Dan Klein. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499–3505, Florence, Italy, July 2019. Association for Computational Linguistics.
[27] Nikita Kitaev and Dan Klein. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia, July 2018. Association for Computational Linguistics.
[28] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions, 2017.
[29] Po-Nien Kung, Sheng-Siang Yin, Yi-Cheng Chen, Tse-Hsuan Yang, and Yun-Nung Chen. Efficient multi-task auxiliary learning: Selecting auxiliary data by feature similarity. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 416–428, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
[30] Michael Langberg and Leonard J. Schulman. Universal approximators for integrals, pages 598–607.
[31] Hugo Laurençon, Lucile Saulnier, et al. The BigScience ROOTS corpus: A 1.6TB composite multilingual dataset, 2023.
[32] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. Deduplicating training data makes language models better. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022.
[33] David D. Lewis. A sequential algorithm for training text classifiers: Corrigendum and additional data. In ACM SIGIR Forum, volume 29, pages 13–19. ACM New York, NY, USA, 1995.
[34] Yuchen Li, Yuanzhi Li, and Andrej Risteski. How do transformers learn topic structure: Towards a mechanistic understanding, 2023.
[35] Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. Towards understanding and mitigating social biases in language models. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 6565–6576. PMLR, 18–24 Jul 2021.
[36] Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. Language models of code are few-shot commonsense learners, 2022.
[37] Eric J. Michaud, Ziming Liu, Uzay Girit, and Max Tegmark. The quantization model of neural scaling, 2023.
[38] Sören Mindermann, Muhammed Razzak, Winnie Xu, Andreas Kirsch, Mrinank Sharma, Adrien Morisot, Aidan N. Gomez, Sebastian Farquhar, Jan Brauner, and Yarin Gal. Prioritized training on points that are learnable, worth learning, and not yet learned (workshop version), 2021.
[39] Baharan Mirzasoleiman, Jeff Bilmes, and Jure Leskovec. Coresets for data-efficient training of machine learning models, 2019.
[40] Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instructions. In ACL, 2022.
[41] Robert C. Moore and William Lewis. Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers, pages 220–224, Uppsala, Sweden, July 2010. Association for Computational Linguistics.
[42] MosaicML. Introducing MPT-7B: A new standard for open-source, commercially usable LLMs, 2023.
[43] Moin Nadeem, Anna Bethke, and Siva Reddy. StereoSet: Measuring stereotypical bias in pretrained language models. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021.
[44] Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. Progress measures for grokking via mechanistic interpretability. In The Eleventh International Conference on Learning Representations, 2023.
[45] Arkadij Semenovič Nemirovskij and David Borisovich Yudin. Problem complexity and method efficiency in optimization. 1983.
[46] Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training, 2021.
[47] Jeff M. Phillips. Coresets and sketches, 2016.
[48] Ben Prystawski and Noah D. Goodman. Why think step-by-step? Reasoning emerges from the locality of experience, 2023.
[49] Siddharth Reddy, Igor Labutov, and Thorsten Joachims. Latent skill embedding for personalized lesson sequence recommendation, 2016.
[50] Sebastian Ruder, Parsa Ghaffari, and John G. Breslin. Data selection strategies for multi-domain sentiment analysis, 2017.
[51] Sebastian Ruder and Barbara Plank. Learning to select data for transfer learning with Bayesian optimization. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017.
[52] Shreyas Saxena, Oncel Tuzel, and Dennis DeCoste. Data parameters: A new family of parameters for learning a differentiable curriculum. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[53] Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos. Beyond neural scaling laws: beating power law scaling via data pruning, 2022.
[54] Petru Soviany, Radu Tudor Ionescu, Paolo Rota, and Nicu Sebe. Curriculum learning: A survey. International Journal of Computer Vision, 130(6):1526–1565, Apr 2022.
[55] Claire Stevenson, Iris Smal, Matthijs Baas, Raoul Grasman, and Han van der Maas. Putting GPT-3's creativity to the (Alternative Uses) Test, 2022.
[56] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
[57] Together. RedPajama-Data: An open source recipe to reproduce LLaMA training dataset, 2023.
[58] Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J. Gordon. An empirical study of example forgetting during deep neural network learning, 2018.
[59] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and efficient foundation language models, 2023.
[60] Neeraj Varshney, Swaroop Mishra, and Chitta Baral. Let the model decide its curriculum for multitask learning, 2022.
[61] Xiao Wang, Weikang Zhou, Qi Zhang, Jie Zhou, Songyang Gao, Junzhe Wang, Menghan Zhang, Xiang Gao, Yunwen Chen, and Tao Gui. Farewell to aimless large-scale pretraining: Influential subset selection for language model, 2023.
[62] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning language model with self generated instructions, 2022.
[63] Yizhong Wang, Swaroop Mishra, et al. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks, 2022.
[64] Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners, 2021.
[65] Richard T. White. Research into learning hierarchies. Review of Educational Research, 43(3):361–375, 1973.
[66] Richard T. White and Robert M. Gagné. Past and future research on learning hierarchies. Educational Psychologist, 11(1):19–28, 1974.
[67] Xiaoxia Wu, Ethan Dyer, and Behnam Neyshabur. When do curricula work?, 2020.
[68] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, and Adams Wei Yu. DoReMi: Optimizing data mixtures speeds up language model pretraining, 2023.
[69] Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy Liang. Data selection for language models via importance resampling, 2023.
[70] Amir R. Zamir, Alexander Sax, William Shen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2018.
[71] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. OPT: Open pre-trained transformer language models, 2022.
[72] Yi Zhang, Arturs Backurs, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, and Tal Wagner. Unveiling Transformers with LEGO: A synthetic reasoning task, 2022.
# A Broader Impacts and Limitations

Broader Impacts As more LMs are developed, a key criterion for their adoption and utility is whether they exhibit a wide array of useful capabilities, such as generating harmless content, summarizing essays, and being conversational with the user. While improvements in other parts of the LM development pipeline, such as training and architecture, are important, many recent advances in building LMs with a wide array of useful capabilities have come from the data itself [9, 15, 17, 42, 56]. Our work is fundamental in investigating how LMs learn and how to select data to learn skills more
efficiently. However, we recognize that data selection methods can always be utilized to optimize for particular skills that may be considered malicious or that negatively target or exclude specific groups [2]. Furthermore, pre-trained LMs have been found to have various biases [6, 25, 35, 43].

Limitations The skills graph can either be provided (e.g., using a knowledge graph) or learned. Our work learns the skills graph using Algorithm 2 or Algorithm 3, which requires initial training runs on pairs of skills or on each skill, respectively.
This can be made more efficient by performing these training runs on a smaller model and for fewer steps, but the tradeoffs here have yet to be thoroughly investigated. SKILL-IT also assumes that the ordered skill set is provided; as discussed in Sections 2.1 and 2.3, it is challenging to recover ordered skill sets simply via metadata attributes or embedding clustering. Otherwise, the best way to sample over collections of skills that form a complete or empty graph is random or stratified sampling, with no ordering to exploit. Our loss-based clustering approach presented in Section 2.3 demonstrates that grouping by losses can provide an explanation for how skills are defined over data. An important direction for future work is to use such a clustering approach or other unsupervised algorithms in an end-to-end pipeline for skill discovery, skill graph learning, and data selection based on such skills.

# B Additional Algorithmic Details

# B.1 Derivation of SKILL-IT Update Rule

First, we provide the derivation of our update rule from online mirror descent using the proximal point view [18]. We restate our optimization problem from (3):

$$\min_{p_1, \ldots, p_T \in \Delta^{k-1}} \; \frac{1}{m} \sum_{j=1}^{m} L_{\mathrm{eval},j}(f_T) \quad (5)$$
$$\text{s.t.} \quad L_{\mathrm{eval},j}(f_t) = L_{\mathrm{eval},j}(f_{t-1})\,(1 - \alpha A_j^\top p_{t-1}) \quad \forall j \in [m],\; t = 1, \ldots, T$$
$$f_t = \Theta(f_{t-1}, p_{t-1}) \quad \forall t = 1, \ldots, T$$

Let $\bar{L}_t(p) = \frac{1}{m} \sum_{j=1}^{m} L_{\mathrm{eval},j}(f_{t+1}) = \frac{1}{m} \sum_{j=1}^{m} L_{\mathrm{eval},j}(\Theta(f_t, p))$; that is, $p$ is the mixture we must choose at time $t$, and $\bar{L}_t$ is the average loss per skill of the model after it is trained on $p$ at round $t$. A greedy approximation of (5) is

$$\min_{p \in \Delta^{k-1}} \bar{L}_t(p),$$

given the model and mixtures at previous rounds. A linear approximation of $\bar{L}_t(p)$ is

$$\bar{L}_t(p) \approx \bar{L}_t(p_{t-1}) + \langle \nabla \bar{L}_{t-1}(p_{t-1}),\, p - p_{t-1} \rangle \quad (6)$$

Then, the problem of minimizing $\bar{L}_t(p)$ can be approximated as
$$\mathop{\mathrm{argmin}}_{p \in \Delta^{k-1}} \; \langle \eta \nabla \bar{L}_{t-1}(p_{t-1}),\, p \rangle \quad (7)$$

after we drop terms from (6) that do not depend on $p$. Note that $\eta$ is a constant and does not impact the solution. The optimal solution to this problem is selecting the $p$ that has the most weight on the slice with the largest gradient (e.g., a follow-the-leader sort of algorithm). To improve stability and prevent overfitting, we introduce regularization via a Bregman divergence $D_h(p \,\|\, p_{t-1}) = h(p) - h(p_{t-1}) - \langle \nabla h(p_{t-1}),\, p - p_{t-1} \rangle$. After dropping terms that do not contain $p$, our problem is now

$$\mathop{\mathrm{argmin}}_{p \in \Delta^{k-1}} \; \langle \eta \nabla \bar{L}_{t-1}(p_{t-1}),\, p \rangle + h(p) - \langle \nabla h(p_{t-1}),\, p \rangle \quad (8)$$

Taking the gradient and setting it equal to 0 gives us

$$\eta \nabla \bar{L}_{t-1}(p_{t-1}) + \nabla h(p) - \nabla h(p_{t-1}) = 0 \quad (9)$$
Algorithm 2: LEARNGRAPH (Brute-Force)
1: Input: Ordered skill set S = {s_1, ..., s_k}, number of training steps H, base model f.
2: for j ∈ [k] do
3:   Train f on samples from X_{s_j} for H steps and denote f_{H,j} to be the model after training.
4:   Observe the change in loss, δ_j^j = L_eval,j(f) − L_eval,j(f_{H,j}).
5: end for
6: for i, j ∈ [k] do
7:   Train f on samples from X_{s_i} ∪ X_{s_j} for H steps and denote f_{H,i,j} to be the model after training. Observe the change in loss, δ_{i,j}^j = L_eval,j(f) − L_eval,j(f_{H,i,j}).
8:   if δ_{i,j}^j > δ_j^j then
9:     Draw edge s_i → s_j and set A_{ij} > 0.
10:  end if
11: end for
12: Return adjacency matrix A ∈ R^{k×k}
Algorithm 3: LEARNGRAPH (Approximate)
1: Input: Ordered skill sets S_train and S_eval, number of training steps H, base model f.
2: for i ∈ [k] do
3:   Train f on samples from X_{s_train,i} for H steps and denote f_{H,i} to be the model after training.
4:   for j ∈ [m] do
5:     Observe the change in loss, δ_i^j = L_eval,j(f) − L_eval,j(f_{H,i}).
6:     If δ_i^j > 0, draw edge s_train,i → s_eval,j and set A_{ij} > 0.
7:   end for
8: end for
9: Return bipartite adjacency submatrix A ∈ R^{k×m}
As in standard multiplicative weights, we set $h(p) = \sum_i p_i \ln p_i$ and $\nabla h(p) = [\ln p_i + 1]_i$. Then,

$$\ln p^* = \ln p_{t-1} - \eta \nabla \bar{L}_{t-1}(p_{t-1}) \;\Rightarrow\; p_{t+1,i} = p_{t,i} \exp(-\eta \nabla_i \bar{L}_t(p_t)) \quad (10)$$

where $\nabla_i$ is the $i$th element of the gradient. Now we wish to compute $\nabla_i \bar{L}_t(p_t) = \frac{1}{m} \sum_{j=1}^{m} \nabla_i [L_{\mathrm{eval},j}(f_{t+1})] = \frac{1}{m} \sum_{j=1}^{m} \nabla_i [L_{\mathrm{eval},j}(\Theta(f_t, p_t))]$. Recall the dynamics model for $L_{\mathrm{eval}}$:

$$L_{\mathrm{eval},j}(f_{t+1}) = L_{\mathrm{eval},j}(f_t)\,(1 - \alpha A_j^\top p_t), \quad (11)$$

The gradient of this model with respect to each training skill $s_i$ is

$$\frac{\partial L_{\mathrm{eval},j}(f_{t+1})}{\partial p_{t,i}} = -\alpha A_{ij} L_{\mathrm{eval},j}(f_t) \;\Rightarrow\; \nabla_i \bar{L}_t(p_t) = -\frac{\alpha}{m} \sum_{j=1}^{m} A_{ij} L_{\mathrm{eval},j}(f_t) \quad (12)$$

Plugging this back into (10),

$$p_{t+1,i} = p_{t,i} \exp\Big(\eta \sum_{j=1}^{m} A_{ij} L_{\mathrm{eval},j}(f_t)\Big) \quad (13)$$

where we can absorb the constant $\alpha/m$ into $\eta$.

# B.2 Graph Learning Method

We provide algorithms for learning the graph over an ordered skill set. In Algorithm 2, we discuss the brute-force approach for learning the adjacency matrix.
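For illustration, here is a minimal Python sketch of the brute-force procedure. The `train` and `eval_loss` helpers are hypothetical stand-ins for a standard training loop and a per-skill validation loss, so this is a sketch under those assumptions rather than our exact implementation; the edge weight follows the δ_{i,j}^j − δ_j^j convention discussed next.

```python
import copy
import numpy as np

def learn_graph_brute_force(f, k, H, train, eval_loss):
    """Sketch of Algorithm 2 (brute force). `train(model, skill_ids, H)` is
    assumed to return a copy of `model` trained for H steps on data from the
    given skills, and `eval_loss(model, j)` to return validation loss on
    skill j; both helpers are hypothetical."""
    A = np.zeros((k, k))
    base = np.array([eval_loss(f, j) for j in range(k)])

    # delta_solo[j]: change in loss on skill j after training on j alone.
    delta_solo = np.zeros(k)
    for j in range(k):
        f_j = train(copy.deepcopy(f), [j], H)
        delta_solo[j] = base[j] - eval_loss(f_j, j)

    # Compare against training on the pair (i, j).
    for i in range(k):
        for j in range(k):
            f_ij = train(copy.deepcopy(f), [i, j], H)
            delta_pair = base[j] - eval_loss(f_ij, j)
            if delta_pair > delta_solo[j]:
                # Draw edge s_i -> s_j, weighted here by the improvement.
                A[i, j] = delta_pair - delta_solo[j]
    return A
```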
This approach only works when S_eval ⊆ S_train (e.g., pre-training and fine-tuning cases), so we denote S = S_train in the algorithm box. In Algorithm 3, we discuss the linear approach for learning the adjacency matrix. This approach works even in the out-of-domain case when S_eval and S_train are disjoint. In both approaches, the exact value of A_ij can vary, but we can typically set it proportional to δ_{i,j}^j − δ_j^j, the difference between the changes in loss, in the brute-force case, or to δ_i^j, the change in loss itself, in the approximate case. The exact constructions and methods for learning each A in our experiments are in Appendix C.2.

# C Additional Experimental Details

# C.1 Datasets

We present details about each dataset used, including information on the skills and the validation dataset. A summary is presented in Table 2.

| Dataset | Skill | # skills | Validation data |
|---|---|---|---|
| Alpaca | Instruction type | 38 | 50 samples per skill |
| Pile of Law | Legal data source | 31 | 645 samples per skill |
| LEGO | Reasoning chain depth | 5 | 100 samples per skill |
| Addition | Digit | 3 | 100 samples per skill |
| NI (pre-training) | Task category | 23 | 50 samples per task |
| NI (Spanish QG) | Task category × language | 4 | 100 samples per task |
| NI (stance detection) | Task category | 2 | 50 samples per task |
| NI (out-of-domain) | Task category | 59, 12 | 400 samples per task |
| RedPajama | Data source | 7 | LM eval harness |

Table 2: We list each dataset used as well as its corresponding skill. We include the number of skills in the training dataset, as well as details on how the validation dataset is constructed.
• Alpaca dataset [56]: the Alpaca dataset consists of 52K instruction examples that were generated from text-davinci-003. We applied the Berkeley Neural Parser [26, 27] to each instruction, keeping the 40,777 samples it was able to parse successfully. If the sample began with a question, we annotated it with the skill "question"; otherwise, we annotated it with the verb identified by the parser. We grouped the data into a total of 38 skills, such as "list", "edit", "calculate", "describe", and "identify".
• Pile of Law [21]: the Pile of Law dataset consists of various sources of legal and administrative data, ranging from tax rulings to the world's constitutions. We evaluate on a subset of the Pile of Law validation dataset consisting of 13,883 samples, where we selected min(645, source size) samples per source. We truncated each sample to be no more than 100K characters.
• LEGO [72]: for the LEGO synthetic, we set k = 5 and sample 192000 points across the skills. Our validation dataset consisted of 100 samples per skill.
• Addition: for the 3-digit addition synthetic, we set k = 3 and sample 192000 points across the skills. We use a validation dataset of 100 samples per skill.
• Natural Instructions [40, 63]: the Natural Instructions dataset is a large collection of tasks and their definitions in natural language. For the pre-training setting, we used a set of 23 task categories that had the largest degree (in-degree + out-degree) in the learned skills graph, for a total of 1,232,437 samples and 425 tasks to select from. We evaluated on 50 samples per task.
For the fine-tuning setting with Spanish question generation, we select data over 4 skills (Spanish question generation, Spanish question answering, English question generation, English question answering) for a total of 513,210 samples and 212 tasks to select from. We evaluated on 100 samples per task. For the fine-tuning setting with stance detection, we select data over 2 skills (stance detection, text matching) for a total of 50,990 samples and 19 tasks to select from. We evaluated on 50 samples per task. For the out-of-domain setting, we select data over all 59 task categories for a total of 2,417,867 samples and 753 tasks to select from. The test split consisted of 12 task categories and 119 tasks, and we evaluated on min(400, task size) samples per task.
⠢ RedPajama [57]: the RedPajama dataset is a 1-trillion token dataset that aims to reproduce the LLaMA [59] training dataset. We select over the 7 data sources and evaluate using the LM evaluation harness [14]. 18 Alpaca - 2.00 1.75 1.50 1.00 0.75 0.50 0.00 ite suggest summarize e] selec tal t translate Tew] write Figure 8: Alpaca heatmap where i, jth entry is max(0, δi Diagonal entries are set to 0 for clearer visualization. j) (the change in loss on sj after training on si for 150 steps). # C.2 Graph Learning Details We describe how the skills graph was learned on each dataset. ⠢ Alpaca (Figure 8): we use Algorithm 3 and train for K = 150 steps per skill.
Each edge i → j has a weight of δ_i^j, the difference in loss on skill j before and after training on i. Next, we compare the average validation loss of skill-stratified sampling versus random sampling when we train for K = 1000 steps. We find that skill-stratified sampling only does 0.007 better than random sampling, confirming that Alpaca's dense skills graph suggests that random sampling is the best we can do.

• Pile of Law (Figure 9): we use Algorithm 3 and train for K = 150 steps.
Each edge i → j has a weight of δ_i^j, the difference in loss on skill j before and after training on i.

• LEGO (Figure 10): we use both Algorithm 2 and Algorithm 3 and train for K = 6000 steps each. Each edge i → j has a weight of 0.5 if the amount of data associated with skill j that is needed to reach 0.01 validation loss is less when training on (i, j) than on j (edges are set to 0 if 0.01 validation loss is not reached, even if loss is decreasing). Each edge i → j is also set to 0.5 if training on i decreases loss directly on j. We set each diagonal entry of A to be 1.
• Addition (Figure 11): we use Algorithm 2 and train for K = 6000 steps. Each edge i → j has a weight of 0.5 if the amount of data associated with skill j that is needed to reach 0.01 validation loss is less when training on (i, j) than on j (edges are set to 0 if 0.01 validation loss is not reached, even if loss is decreasing). We set each diagonal entry of A to be 1.
• Natural Instructions (Figures 12, 13, 14): we use Algorithm 3. For the pre-training setting, we train for K = 600 steps and assign each edge i → j a weight δ_i^j equal to the change in loss on j in the first 100 steps, for all i, j ∈ [k], including diagonal entries. For the fine-tuning setting, we train for K = 600 steps and assign each edge i → j a weight δ_i^j equal to the change in loss before and after training. For the out-of-domain setting, we train for K = 600 steps and assign each edge i → j a weight δ_i^j equal to the change in loss before and after training in the first 100 steps.

• RedPajama (Figure 15): we use Algorithm 3 and train for 1 billion tokens per data source. We assign each edge i → j a weight δ_i^j equal to the change in perplexity on the validation data before and after training.

# C.3 Training Details

We describe the parameters used for SKILL-IT.

# SKILL-IT pre-training
Figure 9: Pile of Law heatmap where the i, jth entry is max(0, δ_i^j) (the change in loss on s_j after training on s_i for 150 steps). Diagonal entries are set to 0 for clearer visualization.
Figure 10: LEGO heatmap with k = 5 where the i, jth entry is set to 0.5 if the number of steps needed to reach 0.01 loss on skill j when training on a balanced mixture of skills i and j is less than when training on skill j only.

Figure 11: Addition heatmap with k = 3 where the i, jth entry is set to 0.5 if the number of steps needed to reach 0.01 loss on skill j when training on a balanced mixture of skills i and j is less than when training on skill j only.
Figure 12: Natural Instructions heatmap where the i, jth entry is max(0, δ_i^j) (the change in loss on s_j after training on s_i for 100 steps). Diagonal entries are set to 0 for clearer visualization.
Figure 13: Spanish question generation and stance detection heatmaps where the i, jth entry is max(0, δ_i^j) (the change in loss on s_j after training on s_i for 100 steps).

Figure 14: Natural Instructions heatmap for out-of-domain setting where rows are for the training skills and columns are for the evaluation skills. The i, jth entry is max(0, δ_i^j).
Figure 15: RedPajama heatmap for out-of-domain setting where rows are for the training skills and columns are for the evaluation skills. The i, jth entry is max(0, δ_i^j).
• LEGO: η = 0.5, T = 6, w = 3. We train for 6000 steps.
• Addition: η = 0.1, T = 5, w = 3. We train for 6000 steps.
• Natural Instructions (pre-training): η = 0.2, T = 1. We train for 5000 steps.

For the LEGO random baseline, when we selected points at random, we used an imbalanced training dataset with proportions 1:1:1:3:5. For the addition random baseline, we used an imbalanced dataset with randomly selected proportions 13:14:18. For the curriculum learning baselines, the pacing function g(i) denotes the size of the subset of the highest-scoring samples that we uniformly select from in the ith epoch. We define our pacing function as g(i) = iH/M, where H is the number of steps and M is 5 epochs for LEGO and NI, and 3 for addition (for instance, with H = 6000 and M = 5, g(2) = 2400).

# SKILL-IT fine-tuning

• LEGO: η = 0.5, T = 10, w = 3. We train for 6000 steps.
We deï¬ ne our pacing function as g(i) = iH M , where H is the number of steps and M is 5 epochs for LEGO and NI, and 3 for addition. # SKILL-IT ï¬ ne-tuning â ¢ LEGO: η = 0.5, T = 10, w = 3. We train for 6000 steps. â ¢ Addition: η = 0.1, T = 5, w = 3.
We train for 6000 steps. ⠢ Natural Instructions (Spanish QG): η = 0.8, T = 6, w = 3. We train for 600 steps. ⠢ Natural Instructions (stance detection): η = 0.2, T = 6, w = 3. We train for 600 steps. # SKILL-IT out-of-domain ⠢ Natural Instructions: η = 0.2, T = 10, w = 3.
We train for 5000 steps. ⠢ RedPajama: η = 100, T = 1. We train for 3 billion tokens. All results are computed over 5 random seeds. Batch sizes of 32 and 64 were used for the LEGO and addition synthetic on the 125M and 1.3B parameter model, respectively. Batch sizes of 4 and 16 were used for the Natural Instructions experiments on the 125M and 1.3B parameter model. For the out-of-domain Natural Instructions experiment and Alpaca graph learning experiments, a learning rate of 5e-6 with linear scheduler and 50 warmup steps was used. For the Natural Instructions continual pre-training experiment on the 1.3B parameter model, a learning rate of 1e-6 was used. All other experiments used a learning rate of 5e-5. All experiments used AdamW with betas = 0.9, 0.999, eps = 1e-8, and weight decay = 0.01. A context window of 512 was used for all experiments except LEGO and addition, which used a window of 128. Experiments with the Addition dataset were run using an Nvidia RTX A6000. Other experiments using the GPT-Neo 125M parameter model were run on an Nvidia Tesla P100. Experiments using the GPT-Neo 1.3B parameter model were run on an Nvidia Tesla A100.
24 Model performance on LEGO skill 4 Model performance on LEGO skill 0.9 â â Trained on skill 4 â â Trained on skill 4 =â â Trained on skills 2, 4 â â Trained on skills 3, 4 ° Ny nv v Validation Loss ° | f=) id a oo 0 2000 4000 6000 0 2000 4000 6000 Steps Steps 4 Figure 16: Performance on LEGO skill 4 when training on skill 4, skills 2 and 4, and skills 3 and 4. Even though skill 3 and skill 4 share an edge in the LEGO syntheticâ s underlying reasoning chain (i.e. a model predicting correct for the fourth variable is one extra step beyond predicting correct for the third variable), we ï¬ nd that training on skills 2 and 4 helps improve performance on skill 4 more. Model performance on LEGO skill 3 (tree) Model performance on LEGO skill 2 (tree)
â Trained on skill 3 â Trained on skill 3 0.6 â â Trained on skill 2 a 0.6 â â Trained on skill 2 6 4 3 ic] 3 0.2 0.2 s 0.0 0.0 0 1000 2000 3000 0 1000 2000 3000 Steps Steps a 6 4 3 ic] 3 3 s Figure 17: Performance on LEGO skill 2 and 3 when training on skills 2 and 3. The reasoning pattern is a tree rather than a chain over k = 4 variables. Skills 2 and 3 are at the same â depthâ in the graph and both depend on skill 1, so there is positive inï¬ uence between the skills despite there being no edge between 2 and 3 in the LEGO reasoning graph. # D Additional Experimental Results # D.1 Additional examples of LEGO ordered skill sets For the LEGO synthetic, it may appear obvious that the skills graph is equivalent to the reasoning chain over the variables.
However, in Figure 16 we see that this is not the case. Training on skills 2 and 4 together results in lower loss on skill 4 than when trained on skill 4 alone. However, training on skills 3 and 4 together results in roughly the same loss on skill 4 as when training on skill 4 alone, even though skill 3 and skill 4 share an edge in the LEGO syntheticâ s underlying reasoning chain. This suggests that our intuition for how skills inï¬ uence each other does not always match how the model learns skills. Next, we consider a slightly more complex reasoning pattern on the LEGO synthetic. Instead of a chain, we construct a tree, where two variables in the LEGO synthetic are both deï¬ ned in terms of the same parent variable. For example, Input: c = val 1, y = not w, v = val c, w = not c. Output: y = 1. In this example, k = 4 and both v and w are written in terms of c, and the reasoning graph has edges 1 â 2, 1 â 3, 2 â 4. In this case, we see that training on skill 2 or skill 3 both improve losses on skills 2 and 3 (Figure 17). However, unlike the previous ï¬ gures, training on skills 2 and 4 or skills 3 and 4 do not signiï¬ cantly help reduce loss on skill 4 (Figure 18). Again, these measurements demonstrate that the reasoning graph does not necessarily equal the skills graph. # D.2 Unsupervised skill recovery We explore several clustering techniques for recovering the skills in the LEGO synthetic on the validation dataset. Our results are shown in Table 3.
2307.14430#92
2307.14430#94
2307.14430
[ "2101.00027" ]
2307.14430#94
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
We ï¬ rst cluster based on the pre-trained model embeddings of the last token and the average token. We also report 25 Model performance on LEGO skill 4 (tree) Model performance on LEGO skill 4 â Trained on skill 4 0.8 â Trained on skill 4 â â Trained on skills 2, 4 a â â Trained on skills 3, 4 $0.6 Bo6 y eoâ 4 B04 E \Y £0. £04 ic] zg 0.2 $0.2 0.0 0.0 0 1000 2000 3000 0 1000 2000 3000 Steps Steps # a ° 4 # ic] zg # (tree) Figure 18: Performance on LEGO skill 4 when training on skills 2, 4 and skills 3, 4.
2307.14430#93
2307.14430#95
2307.14430
[ "2101.00027" ]
2307.14430#95
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
We ï¬ nd that in both cases, the beneï¬ t from training on additional skills is minor. For instance, training on 2 and 4 reaches 0.01 loss in 2700 steps, while training on 4 only reaches it in 2100 steps. Cluster method Pretrained embedding of last token Pretrained embedding of average token Trained model embedding of last token Sentence-BERT embedding Losses over multiple runs Accuracy 24.8 ± 0.5 25.2 ± 1.1 38.4 ± 0.8 23.9 ± 0.7 61.0 ± 1.6
2307.14430#94
2307.14430#96
2307.14430
[ "2101.00027" ]
2307.14430#96
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
Table 3: Clustering-based skill recovery methods on the LEGO dataset. The validation dataset we cluster consists of 500 points with k = 5, and results are reported over 10 runs of k-means. accuracies of clustering based on the trained model embeddingâ s last token, where we train the model using random sampling for 6000 steps, and clustering based on Sentence-BERT embeddings. Among these four methods, using the trained model embeddings has the highest accuracy of 38.4 points. Next, we cluster points based on losses. In particular, we do 10 runs, each for 6000 steps and with a randomly sampled mixture of skills. For each run, we evaluate the model on the validation dataset at 120 checkpoints. Then, each sample in the validation dataset has 1200 losses associated with it, comprising a feature vector for that sample. We perform k-means clustering on these features, which has an accuracy of 61.0 points, signiï¬ cantly higher than the second best accuracy of 38.4. # D.3 Full results for Section 4 # D.3.1 Per-skill performance In this section, we provide tables containing the per skill break-down of our results from Section 4. Continual Pre-training In the continual pre-training setting, we report two additional baselines that combine curriculum learning with skills. Curriculum learning has been proposed for multitask learning [60], in which groups of data are ranked by their average score and then trained in order of this ranking (with mixing of previously seen groups to avoid forgetting). We construct two baselines, Skill-curriculum and Skill-anticurriculum, using Algorithm 1 from [60]. In contrast to the random baseline which has imbalanced skills, this approach has knowledge of skills and thus uses a skill-stratiï¬ ed training dataset to sample from. We set the fraction of the previous group to be frac = 0.4, as we found that setting frac = 0.0 resulted in forgetting. We report loss per skill for the LEGO synthetic in Table 4, which corresponds to the results in Figure 4. We report accuracy per skill in Table 5 and Figure 19. We report the loss per skill for the Addition synthetic in Table 6, which also correspond to to the results in Figure 4.
2307.14430#95
2307.14430#97
2307.14430
[ "2101.00027" ]
2307.14430#97
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
Finally, we report validation loss per task category for the Natural Instructions continual pre-training experiment in Table 7, where we ï¬ nd that SKILL-IT outperforms random sampling by 3.2% on average across skills. 26 Skill 1 Skill 2 Skill 3 Skill 4 Skill 5 Average Random Curriculum Anticurriculum Skill-stratiï¬ ed Skill-curriculum 0±0.000 0±0.000 0±0.000 0±0.000 0±0.000 Skill-anticurriculum 0.001±0.001 SKILL-IT 0±0.000 0.675±0.041 0.645±0.052 0.690±0.003 0.045±0.036 0.484±0.200 0.174±0.118 0.002±0.002 0.688±0.008 0.686±0.018 0.695±0.004 0.056±0.029 0.698±0.027 0.245±0.091 0.024±0.031 0.673±0.049 0.674±0.042 0.693±0.003 0.079±0.044 0.697±0.010 0.443±0.125 0.013±0.010 0.667±0.056 0.671±0.0459 0.689±0.004 0.050±0.025 0.689±0.007 0.566±0.118 0.022±0.021 0.541±0.031 0.535±0.029 0.554±0.001 0.046±0.022 0.514±0.040 0.286±0.060 0.012±0.008 Table 4: Results on validation loss per skill for LEGO pre-training experiment, averaged over 5 random seeds.
2307.14430#96
2307.14430#98
2307.14430
[ "2101.00027" ]
2307.14430#98
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
Skill 1 Skill 2 Skill 3 Skill 4 Skill 5 Average 100.0±0.0 100.0±0.0 100.0±0.0 100.0±0.0 100.0±0.0 Skill-anticurriculum 100.0±0.0 100.0±0.0 Random Curriculum Anticurriculum Skill-stratiï¬ ed Skill-curriculum SKILL-IT 54.2±5.9 60.0±10.6 53.4±2.3 98.2±1.8 75.2±30.1 90.2±8.1 99.2±0.8 58.0±3.1 55.2±5.8 49.0±4.8 98.2±1.3 52.2±3.7 88.2±8.3 99.0±1.0 48.0±6.3 51.2±6.3 48.2±6.4 97.8±1.6 51.0±4.6 73.2±12.2 99.4±0.5 54.4±7.3 51.8±6.1 56.0±5.7 98.2±1.3 54.4±3.1 62.4±9.4 99.6±0.5 62.9±3.5 63.6±3.6 61.3±2.2 98.5±0.9 66.6±7.7 82.8±4.9 99.4±0.2 Table 5: Results on accuracy per skill (binary classiï¬ cation) for LEGO pre-training experiment, averaged over 5 random seeds.
2307.14430#97
2307.14430#99
2307.14430
[ "2101.00027" ]
2307.14430#99
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
LEGO Skill 1 LEGO Skill 2 LEGO Skill 3 1.0 . 0.8 p> is £ =] 8 0.6 < 0.4 0 2000 4000 6000 i} 2000 4000 6000 i} 2000 4000 6000 LEGO Skill 4 LEGO Skill 5 Average per skill 1.0 1.0 1.0 0.9 0.9 0.8 P08 0.8 5 0.6 5 0.7 0.7 : â â Random & â â Curriculum 0.6 â Anticurriculum 0.6 0.4 â Skill-stratified 0.5 â Skill-curriculum 0.5 â Skill-anticurriculum 04 0.2 â skit 0 2000 4000 6000 0 2000 4000 6000 0 2000 4000 6000 Steps Steps Steps Figure 19: Accuracy of SKILL-IT on each skill (binary classiï¬ cation) on the LEGO synthetic in the continual pre-training setting. SKILL-IT attains higher accuracy more quickly than baselines that both do and do not utilize the notion of skills. Skill 1 Skill 2 Skill 3 Average 0.008±0.007 0.009±0.011 0.007±0.010 0.012±0.011 0.016±0.013 Skill-anticurriculum 0.005±0.008 0.004±0.003 Random Curriculum Anticurriculum Skill-stratiï¬
2307.14430#98
2307.14430#100
2307.14430
[ "2101.00027" ]
2307.14430#100
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
ed Skill-curriculum SKILL-IT 0.020±0.019 0.010±0.008 0.012±0.013 0.015±0.015 0.019±0.013 0.037±0.028 0.009±0.007 0.005±0.005 0.008±0.010 0.008±0.017 0.010±0.020 0.010±0.003 1.141±1.126 0.013±0.017 0.011±0.014 0.009±0.010 0.009±0.014 0.012±0.016 0.015±0.010 0.395±0.371 0.009±0.011 Table 6: Results on validation loss per skill for Addition pre-training experiment, averaged over 5 random seeds.
2307.14430#99
2307.14430#101
2307.14430
[ "2101.00027" ]
2307.14430#101
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
27 2 8 Skill Random Curriculum Anticurriculum Skill-stratiï¬ ed Skill-curriculum Skill-anticurriculum Answer Veriï¬ cation Code to Text Discourse Connective Identiï¬ cation Entity Generation Entity Relation Classiï¬ cation Information Extraction Irony Detection Preposition Prediction Punctuation Error Detection Question Answering Question Generation Question Understanding Sentence Expansion Sentiment Analysis Stance Detection Summarization Text Categorization Text Matching Text Simpliï¬ cation Text to Code Toxic Language Detection Word Semantics Wrong Candidate Generation 2.297±0.058 0.246±0.021 2.927±0.069 2.033±0.421 1.020±0.147 2.154±0.040 3.024±0.154 0.979±0.124 2.950±0.065 2.277±0.005 2.617±0.005 1.965±0.051 2.501±0.095 3.203±0.012 1.810±0.100 2.961±0.015 2.488±0.023 2.177±0.059 2.155±0.023 0.560±0.037 3.106±0.027 2.092±0.027 2.438±0.021 2.368±0.055 0.203±0.019 3.084±0.067 2.012±0.437 1.014±0.140 2.247±0.037 3.798±0.095 0.887±0.147 3.120±0.052 2.367±0.006 2.777±0.015 2.199±0.059 2.598±0.097 3.415±0.016 1.775±0.120 3.149±0.023 2.692±0.029 2.232±0.055 2.193±0.039 0.495±0.036 3.496±0.017 2.334±0.034 2.606±0.039 2.391±0.061 1.099±0.115 2.932±0.058 2.363±0.234 1.533±0.138 2.352±0.042 2.942±0.158 1.488±0.213 2.961±0.064 2.398±0.006 2.695±0.008 2.060±0.033 2.583±0.074 3.209±0.010 2.231±0.128 3.041±0.014 2.553±0.006 2.316±0.048 2.325±0.033 1.215±0.052 3.058±0.029 2.156±0.064 2.519±0.027 2.180±0.059 0.178±0.016 2.805±0.071 1.803±0.384 0.859±0.131 2.140±0.037 2.680±0.146 0.845±0.152 3.264±0.061 2.542±0.004 2.783±0.021 1.958±0.051 2.225±0.095 3.278±0.014 1.385±0.070 2.960±0.019 2.570±0.015 2.152±0.061 1.926±0.026 0.490±0.029 3.199±0.024 1.916±0.043 2.506±0.026 2.249±0.116 0.126±0.009 2.891±0.001 1.853±0.483 0.825±0.022 2.286±0.022 3.889±0.066 0.941±0.019 3.019±0.010 2.689±0.001 3.062±0.006 2.385±0.022 2.311±0.076 3.607±0.012 1.361±0.114 3.323±0.028 3.001±0.007 2.324±0.004 2.037±0.005 0.433±0.014 3.758±0.025 1.784±0.048 2.849±0.029 2.325±0.085 1.232±0.070 2.925±0.011 2.068±0.719 0.959±0.009 2.338±0.025 2.099±0.152 1.044±0.029 3.360±0.013 2.707±0.016 2.876±0.032 2.100±0.054 2.408±0.074 3.308±0.015 1.823±0.189 3.021±0.013 2.635±0.014 2.304±0.035 2.156±0.011 1.455±0.086 3.155±0.050 2.424±0.038 2.574±0.018 Average 2.173±0.028 2.307±0.025 2.366±0.026 2.115±0.027 2.304±0.031 2.317±0.052 SKILL-IT 2.158±0.059 0.223±0.017 2.784±0.068 1.863±0.418 0.908±0.146 2.073±0.042 2.797±0.155 0.876±0.173 3.216±0.055 2.448±0.008 2.666±0.012 1.895±0.043 2.236±0.083 3.213±0.012 1.556±0.125 2.907±0.012 2.448±0.017 2.093±0.054 1.952±0.026 0.553±0.042 3.129±0.020 1.952±0.019 2.432±0.025 2.103±0.032
2307.14430#100
2307.14430#102
2307.14430
[ "2101.00027" ]
2307.14430#102
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
Table 7: Validation loss per skill for data selection in continual pre-training setting on a subset of the Natural Instructions Dataset. Out-of-domain In Table 8, we provide a breakdown of validation loss per evaluation skill under random sampling on the training data, skill-stratiï¬ ed sampling over prerequisite skills (e.g., the nonzero rows in Figure 14), and SKILL-IT. Skill Random Skill-stratiï¬ ed SKILL-IT Answerability Classiï¬ cation Cause Effect Classiï¬ cation Coreference Resolution Data to Text Dialogue Act Recognition Grammar Error Correction Keyword Tagging Overlap Extraction Question Rewriting Textual Entailment Title Generation Word Analogy 3.048±0.003 2.068±0.004 3.101±0.003 2.363±0.004 2.329±0.009 2.399±0.008 2.744±0.005 2.749±0.011 2.591±0.009 2.472±0.002 3.027±0.002 1.665±0.016 3.076±0.002 2.101±0.005 3.142±0.004 2.388±0.005 2.364±0.010 2.418±0.009 2.760±0.007 2.763±0.012 2.628±0.011 2.503±0.003 3.037±0.002 1.682±0.015 3.043±0.003 2.067±0.006 3.099±0.004 2.359±0.005 2.320±0.009 2.389±0.007 2.733±0.006 2.733±0.010 2.586±0.010 2.468±0.002 3.015±0.002 1.668±0.016 Average 2.546±0.003 2.572±0.003 2.540±0.003 Table 8:
2307.14430#101
2307.14430#103
2307.14430
[ "2101.00027" ]
2307.14430#103
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
Validation loss per skill for data selection in out-of-domain setting over Natural Instructions train task split and test task split. In Table 9 we provide a breakdown of the RedPajama experimentâ s accuracy per evaluation skill, corresponding to the results in Figure 7. 1 Billion Tokens 2 Billion Tokens 3 Billion Tokens Uniform SKILL-IT Uniform SKILL-IT Uniform SKILL-IT ARC Challenge (acc norm) ARC Easy (acc norm) BoolQ COPA HellaSwag (acc norm) LAMBADA OpenAI PIQA (acc norm) Winogrande 35.4 62.2 68.9 81.0 63.9 64.4 74.8 62.8 34.6 61.2 68.2 82.0 63.7 67.0 75.0 63.9 35.3 62.4 67.7 80.0 63.8 65.9 75.5 63.9 34.9 61.7 68.6 81.0 63.9 66.7 75.2 63.2 34.6 62.5 67.2 81.0 64.0 66.8 75.0 63.4 34.8 62.0 68.7 81.0 63.9 66.0 75.7 63.1 Average accuracy 64.2 64.4 64.3 64.4 64.3 64.4 Table 9: Performance of model trained on RedPajama with uniform sampling and SKILL-IT on LM evaluation harness. Unless otherwise noted, accuracy is reported for each task. # D.3.2 Weight trajectories We provide SKILL-ITâ s weight trajectories for each result. The weight per skill across training steps for the LEGO pre- training experiment corresponding to Figure 4 (left) is shown in Figure 20. We see that SKILL-IT initially allocates more weight to skill 2 and less to 1, 3, 4, 5. Since skill 1 is learned quickly, the weight on skill 1 immediately drops to below 0.1 at 1000 steps.
2307.14430#102
2307.14430#104
2307.14430
[ "2101.00027" ]
2307.14430#104
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
The weight on skills 3, 4, and 5 increase from around 0 to 3000 steps, during which their respective validation losses are higher than those of skills 1 and 2. Near the end of training, all losses are converging to 0, and so the weight per skill is roughly uniform. The weight per skill across training steps for the addition pre-training experiment corresponding to Figure 4 (right) is shown in Figure 21. SKILL-IT allocates more weight to skill 2, which has an edge to skill 1 as shown in Figure 11. It also allocates very little weight to skill 3, which is learned faster than the other two skills. Eventually, it puts more weight on skill 1, the hardest skill, and then converges to uniform sampling as all validation losses approach 0. The weight per skill across training steps for the LEGO ï¬ ne-tuning experiment and the Spanish question generation and stance detection experiments corresponding to Figure 5 is shown in Figure 22. Since there is only one target skill in these experiments, the mixture of weights approaches uniform as the loss on the target skill approaches 0.
2307.14430#103
2307.14430#105
2307.14430
[ "2101.00027" ]
2307.14430#105
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
It is interesting 29 Skill-It LEGO weights 0.35 â skill1 â Skill 2 0.30 â skill3 â skill 4 0.25 â skill5 ; | Weight per skill o N Oo 0.10 0.05 â â y y + + + \ 0 1000 2000 3000 4000 5000 6000 Steps Figure 20: Weight per skill for LEGO pre-training experiment. SKILL-IT initially allocates more weight to skill 2, but eventually puts more weight on harder skills (3, 4, 5) before converging to uniform sampling when all losses converge roughly to 0. to explore how to reduce edge weights and regularization so that the mixture approaches the target skill instead, although preliminary experiments where we decayed the edge weight and the strength of the Bregman divergence term did not appear better. We hypothesize that since training on a uniform mixture (as in Figure 3) did strictly better than training on the target skill and their loss curves did not intersect during the training run, it is better to allocate non-negligible weight on all skills throughout the training run. The weight per skill across training steps for the Natural Instructions out-of-domain experiment corresponding to Figure 6 is shown in Figure 23, where the legend is provided for the top 10 task categories with the largest weights. While the initial weights based on the skills graph roughly establishes the order of weight magnitude, the differences among the losses on the evaluation skills increases the range of weights as training continues. As validation losses saturate, the weights also converge to ï¬ xed values. # D.4 Experiments on 1.3B parameter model We demonstrate that the skills graph learned on the 125M parameter model can be used for data selection with the GPT-Neo- 1.3B model. We present results in the continual pre-training setting on the LEGO synthetic and Natural Instructions. All results are reported over 3 random seeds. For the LEGO experiment, we train for 1500 steps with η = 0.5, T = 30, w = 3. For the NI experiment, we train for 5000 steps with η = 0.2, and T = 1. The skill graphs were learned using the 125M parameter model as described in section C.2.
2307.14430#104
2307.14430#106
2307.14430
[ "2101.00027" ]
2307.14430#106
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
In Figure 24, we train the 1.3B model using SKILL-IT for the LEGO synthetic and ï¬ nd that it still outperforms random and skill-stratiï¬ ed sampling on average. In particular, while performance across sampling methods is similar for early skills, the discrepancy is larger for skill 5, for which SKILL-IT allocates more weight to dynamically. In Figure 25, we provide the weight trajectories of SKILL-IT. We observe that the weight trajectories are similar to that on the 125M parameter model, where initial weight is allocated towards skill 2. Later on, more weight is allocated towards skills 4 and 5, whose losses are higher, and eventually the weight mixture converges to uniform as all losses converge to near 0. In Table 10, we report performance of SKILL-IT with the 1.3B model on the Natural Instructions pre-training experiment and ï¬ nd that the trends from the smaller model holdâ SKILL-IT outperforms random and skill-stratiï¬ ed sampling on average. # D.5 Ablations We report ablations on the skills graph and the online component of SKILL-IT. Instead of using A in Algorithm 1, we study the performance when the identity matrix is used instead; intuitively, this corresponds to a misspeciï¬ ed skills graph where
2307.14430#105
2307.14430#107
2307.14430
[ "2101.00027" ]
2307.14430#107
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
30 Skill-It addition weights 0.55 0.50 0.45 0.40 0.35 0.30 Weight per skill 0.25 0.20 0.15 weights 0 1000 Steps 2000 3000 4000 5000 6000 Figure 21: Weight per skill for addition pre-training experiment. SKILL-IT initially allocates more weight to skill 2, which has an edge to skill 1, while allocating little weight to skill 3 which is learned quickly. Eventually, SKILL-IT puts more weight on the harder skill 1 before converging to uniform sampling when all losses roughly approach 0. Skill-It LEGO weights for learning skill 3 Skill-It weights for Spanish QG Weight per skill 0.35 Zane 0 â English QA â Spanish QA â English QG â Spanish QG Weight per skill Ss 0.450 0.425 Skill-It weights for Stance Detection a â Stance Detection â Text Matching Nao 0 1000 2000 3000 4000 5000 6000 Steps 0 100 200 300 400 500 600 Steps 0 100 200 300 400 500 600 Steps Figure 22: Weight per skill for ï¬ ne-tuning experiments. Left: LEGO; Center: Spanish question generation; Right: stance detection. Skill-It weights for Natural Instructions Skill-It weights for Natural Instructions S is S N Weight per skill ooo 9 6 oo 8 PB & es 2 ° N 2 ° 6 0 1000 2000 3000 Steps 4000 5000 question_generation question_answering text_categorization sentiment_analysis wrong_candidate_generation text_matching summarization information extraction question_understanding toxic_language_detection Figure 23: Weight per skill for Natural Instructions out-of-domain experiment. The legend shows the top 10 skills with the largest weight. While the relative order of weight magnitude does not change signiï¬ cantly across training, the incorporation of loss dramatically increases the range of the weights, showing the importance of an online algorithm.
2307.14430#106
2307.14430#108
2307.14430
[ "2101.00027" ]
2307.14430#108
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
31 LEGO Skill 1 LEGO Skill 2 LEGO Skill 3 10° 10° R107 4 3 10 10-1 g 10-2 4 10- $1073 10-2 3 to a 10-4 g 10-3 10-5 10-4 0 500 1000 1500 0 500 1000 1500 0 500 1000 1500 10° LEGO Skill 4 10° LEGO Skill 5 10° Average per skill B fo} -1 c107 10-1 10 a 4 g 10-2 10-2 10-2 3 â Random g -3| â â Skill-stratified > 10-3 - 10-3 10-8 â Skill-It 0 500 1000 1500 0 500 1000 1500 0 500 1000 1500 Steps Steps Steps Figure 24: Performance of SKILL-IT for LEGO pre-training setting when skills graph is learned on a 125M parameter model and used for data selection with a 1.3B model. SKILL-IT on average still outperforms random and skill-stratiï¬ ed sampling, suggesting that ï¬ ndings on ordered skill sets can transfer from small models to large models. Skill-It LEGO weights for 1.3B param model Weight per skill o fg 9S bo ow B® ° > 0.0 0 250 500 750 1000 1250 1500 Steps Figure 25: Weight per skill for LEGO pre-training experiment on 1.3B parameter model. The trajectories are similar to those of the 125M parameter model in Figure 20. SKILL-IT initially allocates more weight to skill 2, but eventually puts more weight on skills 4 and 5 before converging to uniform sampling when all losses converge to near 0.