text
stringlengths 575
4.75k
|
---|
2019). a good overview of distributed training methods can be found in narayananet al. (2021b), who combinetensor, pipeline, and data parallelism to train a language model with one trillion parameters on 3072 gpus. problems problem 7.1 a two-layer network with two hidden units in each layer can be defined as: this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 115 h i y = ϕ +ϕ a ψ +ψ a[θ +θ x]+ψ a[θ +θ x] 0 1 01 11 01 11 21 02 12 h i +ϕ a ψ +ψ a[θ +θ x]+ψ a[θ +θ x] , (7.34) 2 02 12 01 11 22 02 12 where the functions a[•] are relu functions. compute the derivatives of the output y with respecttoeachofthe13parametersϕ•,θ••,andψ••directly(i.e.,notusingthebackpropagation algorithm). the derivative of the relu function with respect to its input ∂a[z]/∂z is the indicator function i[z > 0], which returns one if the argument is greater than zero and zero otherwise (figure 7.6). problem 7.2 find an expression for the final term in each of the five chains of derivatives in equation 7.12. problem 7.3 what size are each of the terms in equation 7.19? problem 7.4 calculate the derivative ∂ℓ /∂f[x ,ϕ] for the least squares loss function: i i ℓ =(y −f[x ,ϕ])2. (7.35) i i i problem 7.5 calculate the derivative ∂ℓ /∂f[x ,ϕ] for the binary classification loss function: i i h (cid:2) (cid:3)i h (cid:2) (cid:3)i ℓ =−(1−y )log 1−sig f[x ,ϕ] −y log sig f[x ,ϕ] , (7.36) i i i i i where the function sig[•] is the logistic sigmoid and is defined as: 1 sig[z]= . (7.37) 1+exp[−z] problem 7.6∗ show that for z=β+ωh: ∂z =ωt, ∂h where∂z/∂hisamatrixcontainingtheterm∂z /∂h initsith columnandjth row. todothis, i j first find an expression for the constituent elements ∂z /∂h , and then consider the form that i j the matrix ∂z/∂h must take. problem 7.7 consider the case where we use the logistic sigmoid (see equation 7.37) as an activation function, so h = sig[f]. compute the derivative ∂h/∂f for this activation function. what happens to the derivative when the input takes (i) a large positive value and (ii) a large negative value? problem 7.8 consider using (i) the heaviside function and (ii) the rectangular function as activation functions: ( 0 z<0 heaviside[z]= , (7.38) 1 z≥0 draft: please send errata to [email protected] 7 gradients and initialization figure 7.9 computational graph for problem 7.12 and problem 7.13. adapted from domke (2010). and 8 ><0 z<0 rect[z]= 1 0≤z≤1. (7.39) >: 0 z>1 discuss why these functions are problematic for neural network training with gradient-based optimization methods. problem 7.9∗ consider a loss function ℓ[f], where f =β+ωh. we want to find how the loss ℓ changes when we change ω, which we’ll express with a matrix that contains the derivative ∂ℓ/∂ω at the ith row and jth column. find an expression for ∂f /∂ω and, using the chain ij i |
ij rule, show that: ∂ℓ ∂ℓ = ht. (7.40) ∂ω ∂f problem 7.10∗ derive the equations for the backward pass of the backpropagation algorithm for a network that uses leaky relu activations, which are defined as: ( α·z z<0 a[z]=relu[z]= , (7.41) z z≥0 where α is a small positive constant (typically 0.1). problem7.11considertraininganetworkwithfiftylayers,whereweonlyhaveenoughmemory to store the pre-activations at every tenth hidden layer during the forward pass. explain how to compute the derivatives in this situation using gradient checkpointing. problem 7.12∗ this problem explores computing derivatives on general acyclic computational graphs. consider the function: (cid:2) (cid:3) y=exp exp[x]+exp[x]2 +sin[exp[x]+exp[x]2]. (7.42) we can break this down into a series of intermediate computations so that: this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 117 f = exp[x] 1 f = f2 2 1 f = f +f 3 1 2 f = exp[f ] 4 3 f = sin[f ] 5 3 y = f +f . (7.43) 4 5 the associated computational graph is depicted in figure 7.9. compute the derivative ∂y/∂x by reverse-mode differentiation. in other words, compute in order: ∂y ∂y ∂y ∂y ∂y ∂y , , , , and , (7.44) ∂f ∂f ∂f ∂f ∂f ∂x 5 4 3 2 1 using the chain rule in each case to make use of the derivatives already computed. problem 7.13∗ for the same function in problem 7.42, compute the derivative ∂y/∂x by forward-mode differentiation. in other words, compute in order: ∂f ∂f ∂f ∂f ∂f ∂y 1, 2, 3, 4, 5, and , (7.45) ∂x ∂x ∂x ∂x ∂x ∂x using the chain rule in each case to make use of the derivatives already computed. why do we not use forward-mode differentiation when we calculate the parameter gradients for deep networks? problem 7.14 consider a random variable a with variance var[a] = σ2 and a symmetrical distribution around the mean e[a]= 0. prove that if we pass this variable through the relu function: ( 0 a<0 b=relu[a]= , (7.46) a a≥0 then the second moment of the transformed variable is e[b2]=σ2/2. problem 7.15 what would you expect to happen if we initialized all of the weights and biases in the network to zero? problem 7.16 implement the code in figure 7.8 in pytorch and plot the training loss as a function of the number of epochs. problem 7.17 change the code in figure 7.8 to tackle a binary classification problem. you will need to (i) change the targets y so they are binary, (ii) change the network to predict numbers between zero and one (iii) change the loss function appropriately. draft: please send errata to [email protected] 8 measuring performance previous chapters described neural network models, loss functions, and training algo- rithms. this chapter considers how to measure the performance of the trained models. withsufficientcapacity(i.e.,numberofhiddenunits),aneuralnetworkmodelwilloften perform perfectly on the training data. however, this does not necessarily mean it will generalize well to new test data. we will see that the test errors have three distinct causes and that their relative contributions depend on (i) the inherent uncertainty in the task, (ii) the amount of training data, and (iii) the choice of model. the latter dependency raises the issue of hyperparametersearch. wediscusshowtoselectboththemodelhyperparameters(e.g., the number of hidden layers and the number of hidden |
units in each) and the learning algorithm hyperparameters (e.g., the learning rate and batch size). 8.1 training a simple model we explore model performance using the mnist-1d dataset (figure 8.1). this con- sists of ten classes y ∈ {0,1,...,9}, representing the digits 0–9. the data are derived from 1d templates for each of the digits. each data example x is created by randomly transforming one of these templates and adding noise. the full training dataset {x ,y } i i consistsofi=4000trainingexamples,eachconsistingofd =40dimensionsrepresenting i the horizontal offset at 40 positions. the ten classes are drawn uniformly during data generation, so there are ∼400 examples of each class. weuseanetworkwithd =40inputsandd =10outputswhicharepassedthrough i o a softmax function to produce class probabilities (see section 5.5). the network has two hidden layers with d=100 hidden units each. it is trained using stochastic gradient descent with batch size 100 and learning rate 0.1 for 6000 steps (150 epochs) with a multiclass cross-entropy loss (equation 5.24). figure 8.2 shows that the training error decreases as training proceeds. the training data are classified perfectly after about problem8.1 4000 steps. the training loss also decreases, eventually approaching zero. however,thisdoesn’timplythattheclassifierisperfect; themodelmighthavemem- this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.8.1 training a simple model 119 figure8.1mnist-1d.a)templatesfor10classesy∈{0,...,9},basedondigits 0–9. b) training examples x are created by randomly transforming a template andc)addingnoise. d)thehorizontaloffsetofthetransformedtemplateisthen sampled at 40 vertical positions. adapted from (greydanus, 2020) figure 8.2mnist-1dresults. a)percentclassificationerrorasafunctionofthe training step. the training set errors decrease to zero, but the test errors do not drop below ∼40%. this model doesn’t generalize well to new test data. b) loss as a function of the training step. the training loss decreases steadily toward zero. the test loss decreases at first but subsequently increases as the model becomes increasingly confident about its (wrong) predictions. draft: please send errata to [email protected] 8 measuring performance figure 8.3 regression function. solid black line shows ground truth function. togeneratei trainingexamples{x ,y }, i i the input space x ∈ [0,1] is divided intoi equalsegmentsandonesamplex i is drawn from a uniform distribution within each segment. the correspond- ing value y is created by evaluating the i functionatx andaddinggaussiannoise i (gray region shows ±2 standard devia- tions). the test data are generated in the same way. orized the training set but be unable to predict new examples. to estimate the true performance, we need a separate test set of input/output pairs {x ,y }. to this end, we i i generate 1000 more examples using the same process. figure 8.2a also shows the errors for this test data as a function of the training step. these decrease as training proceeds, but only to around 40%. this is better than the chance error rate of 90% error rate but far worse than for the training set; the model has not generalized well to the test data. thetestloss(figure8.2b)decreasesforthefirst1500trainingstepsbutthenincreases notebook8.1 again. at this point, the test error rate is fairly constant; the model makes the same mnist-1d performance mistakes but with increasing confidence. this decreases the probability of the correct answers and thus increases the negative log-likelihood. this increasing confidence is a side-effectofthesoftmaxfunction;thepre-softmaxactivationsaredriventoincreasingly extremevaluestomaketheprobabilityofthetrainingdataapproachone(seefigure5.10). 8.2 sources of error we now consider the sources of the errors that occur when a model fails to generalize. tomakethiseasiertovisualize, |
wereverttoa1dlinearleastsquaresregressionproblem where we know exactly how the ground truth data were generated. figure 8.3 shows a quasi-sinusoidal function; both training and test data are generated by sampling input valuesintherange[0,1],passingthemthroughthisfunction,andaddinggaussiannoise with a fixed variance. wefitasimplifiedshallowneuralnettothisdata(figure8.4). theweightsandbiases that connect the input layer to the hidden layer are chosen so that the “joints” of the function are evenly spaced across the interval. if there are d hidden units, then these joints will be at 0,1/d,2/d,...,(d −1)/d. this model can represent any piecewise linear function with d equally sized regions in the range [0,1]. as well as being easy to understand, this model also has the advantage that it can be fit in closed form without theneedforstochasticoptimizationalgorithms(seeproblem8.3). consequently, wecan problems8.2–8.3 guarantee to find the global minimum of the loss function during training. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.8.2 sources of error 121 figure8.4simplifiedneuralnetworkwiththreehiddenunits. a)theweightsand biases between the input and hidden layer are fixed (dashed arrows). b–d) they arechosensothatthehiddenunitactivationshaveslopeone,andtheirjointsare equally spaced across the interval, with joints at x = 0, x = 1/3, and x = 2/3, respectively. modifying the remaining parameters ϕ={β,ω ,ω ,ω } can create 1 2 3 any piecewise linear function over x ∈ [0,1] with joints at 1/3 and 2/3. e–g) three example functions with different values of the parameters ϕ. draft: please send errata to [email protected] 8 measuring performance figure8.5sourcesoftesterror. a)noise. datagenerationisnoisy,soevenifthe modelexactlyreplicatesthetrueunderlyingfunction(blackline),thenoiseinthe testdata(graypoints)meansthatsomeerrorwillremain(grayregionrepresents two standard deviations). b) bias. even with the best possible parameters, the three-region model (cyan line) cannot exactly fit the true function (black line). this bias is another source of error (gray regions represent signed error). c) variance. inpractice,wehavelimitednoisytrainingdata(orangepoints). when we fit the model, we don’t recover the best possible function from panel (b) but aslightlydifferentfunction(cyanline)thatreflectsidiosyncrasiesofthetraining data. this provides an additional source of error (gray region represents two standard deviations). figure 8.6 shows how this region was calculated. 8.2.1 noise, bias, and variance there are three possible sources of error, which are known as noise, bias, and variance respectively (figure 8.5): noise thedatagenerationprocessincludestheadditionofnoise, sotherearemultiple possiblevalidoutputsyforeachinputx(figure8.5a). thissourceoferrorisinsurmount- able for the test data. note that it does not necessarily limit the training performance; we will likely never see the same input x twice during training, so it is still possible to fit the training data perfectly. noise may arise because there is a genuine stochastic element to the data generation process,becausesomeofthedataaremislabeled,orbecausetherearefurtherexplanatory variables that were not observed. in rare cases, noise may be absent; for example, a network might approximate a function that is deterministic but requires significant computation to evaluate. however, noise is usually a fundamental limitation on the possible test performance. bias a second potential source of error may occur because the model is not flexible enough to fit the true function perfectly. for example, the three-region neural network model cannot exactly describe the quasi-sinusoidal function, even when the parameters are chosen optimally (figure 8.5b). this is known as bias. this work is subject to a creative commons cc-by |
-nc-nd license. (c) mit press.8.2 sources of error 123 variance we have limited training examples, and there is no way to distinguish sys- tematic changes in the underlying function from noise in the underlying data. when we fit a model, we do not get the closest possible approximation to the true underly- ing function. indeed, for different training datasets, the result will be slightly different each time. this additional source of variability in the fitted function is termed variance (figure 8.5c). in practice, there might also be additional variance due to the stochastic learning algorithm, which does not necessarily converge to the same solution each time. 8.2.2 mathematical formulation of test error we now make the notions of noise, bias, and variance mathematically precise. consider a 1d regression problem where the data generation process has additive noise with vari- ance σ2 (e.g., figure 8.3); we can observe different outputs y for the same input x, so for each x, there is a distribution pr(y|x) with expected value (mean) µ[x]: appendixc.2 expectation z µ[x]=e [y[x]]= y[x]pr(y|x)dy, (8.1) y (cid:2) (cid:3) and fixed noise σ2 =e (µ[x]−y[x])2 . here we have used the notation y[x] to specify y that we are considering the output y at a given input position x. now consider a least squares loss between the model prediction f[x,ϕ] at position x and the observed value y[x] at that position: (cid:0) (cid:1) l[x] = f[x,ϕ]−y[x] 2 (8.2) (cid:16)(cid:0) (cid:1) (cid:0) (cid:1)(cid:17) 2 = f[x,ϕ]−µ[x] + µ[x]−y[x] (cid:0) (cid:1) (cid:0) (cid:1)(cid:0) (cid:1) (cid:0) (cid:1) = f[x,ϕ]−µ[x] 2+2 f[x,ϕ])−µ[x] µ[x]−y[x] + µ[x]−y[x] 2, where we have both added and subtracted the mean µ[x] of the underlying function in the second line and have expanded out the squared term in the third line. the underlying function is stochastic, so this loss depends on the particular y[x] we observe. the expected loss is: (cid:2) (cid:3) h(cid:0) (cid:1) (cid:0) (cid:1)(cid:0) (cid:1) (cid:0) (cid:1) i e l[x] = e f[x,ϕ]−µ[x] 2+2 f[x,ϕ]−µ[x] µ[x]−y[x] + µ[x]−y[x] 2 y y (cid:0) (cid:1) (cid:0) (cid:1)(cid:0) (cid:1) (cid:2) (cid:3) = f[x,ϕ]−µ[x] 2+2 f[x,ϕ]−µ[x] µ[x]−e [y[x]] +e (µ[x]−y[x])2 (cid:0) (cid:1) (cid:0) (cid:1) hy(cid:0) (cid:1)yi = f[x,ϕ]−µ[x] 2+2 f[x,ϕ]−µ[x] ·0+e µ[x]−y[x] 2 y (cid:0) (cid:1) = f[x,ϕ]−µ[x] 2+σ2, (8.3) wherewehavemadeuseoftherulesformanipulatingexpectations. inthesecondline,we |
appendixc.2.1 havedistributedtheexpectationoperatorandremoveditfromtermswithnodependence expectationrules on y[x], and in the third line, we note that the second term is zero since e [y[x]]=µ[x] y by definition. finally, in the fourth line, we have substituted in the definition of the draft: please send errata to [email protected] 8 measuring performance noise σ2. we can see that the expected loss has been broken down into two terms; the first term is the squared deviation between the model and the true function mean, and the second term is the noise. thefirsttermcanbefurtherpartitionedintobiasandvariance. theparametersϕof themodelf[x,ϕ]dependonthetrainingdatasetd ={x ,y },somoreproperly,weshould i i write f[x,ϕ[d]]. the training dataset is a random sample from the data generation process; with a different sample of training data, we would learn different parameter values. theexpectedmodeloutputf [x]withrespecttoallpossibledatasetsd ishence: µ h (cid:2) (cid:3)i fµ[x]=ed f x,ϕ[d] . (8.4) returning to the first term of equation 8.3, we add and subtract f [x] and expand: µ (cid:0) (cid:1) f[x,ϕ[d]]−µ[x] 2 (8.5) (cid:16)(cid:0) (cid:1) (cid:0) (cid:1)(cid:17) 2 = f[x,ϕ[d]]−f [x] + f [x]−µ[x] µ µ (cid:0) (cid:1) (cid:0) (cid:1)(cid:0) (cid:1) (cid:0) (cid:1) = f[x,ϕ[d]]−f [x] 2+2 f[x,ϕ[d]]−f [x] f [x]−µ[x] + f [x]−µ[x] 2. µ µ µ µ we then take the expectation with respect to the training dataset d: h(cid:0) (cid:1) i h(cid:0) (cid:1) i (cid:0) (cid:1) ed f[x,ϕ[d]]−µ[x] 2 =ed f[x,ϕ[d]]−fµ[x] 2 + fµ[x]−µ[x] 2, (8.6) where we have simplified using similar steps as for equation 8.3. finally, we substitute this result into equation 8.3: h i h(cid:0) (cid:1) i (cid:0) (cid:1) ed ey[l[x]] =ed f[x,ϕ[d]]−fµ[x] 2 + fµ[x]−µ[x] 2+ σ2. (8.7) | {z } | {z } |{z} variance bias noise thisequationsaysthattheexpectedlossafterconsideringtheuncertaintyinthetraining data d and the test data y consists of three additive components. the variance is uncertaintyinthefittedmodelduetotheparticulartrainingdatasetwesample. thebias isthe systematic deviationof the modelfrom the meanof the functionweare modeling. the noise is the inherent uncertainty in the true mapping from input to output. these three sources of error will be present for any task. they combine additively for linear regression with a least squares loss. however, their interaction can be more complex for other types of problems. 8.3 reducing error in the previous section, we saw that test error results from three sources: noise, bias, and variance. the noise component is insurmountable; there is nothing we can do to circumvent this, and it represents a fundamental limit on model performance. however, it is possible to reduce the other two terms. this work is subject to |
10–19.12adaptedfromhttp://tinyurl.com/4ueyhtsu. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.chapter 1 introduction artificial intelligence, orai,isconcernedwithbuildingsystemsthatsimulateintelligent behavior. it encompasses a wide range of approaches, including those based on logic, search, and probabilistic reasoning. machine learning is a subset of ai that learns to make decisions by fitting mathematical models to observed data. this area has seen explosive growth and is now (incorrectly) almost synonymous with the term ai. a deep neural network is a type of machine learning model, and when it is fitted to data, this is referred to as deep learning. at the time of writing, deep networks are the most powerful and practical machine learning models and are often encountered in day-to-day life. it is commonplace to translate text from another language using a natural language processing algorithm, to search the internet for images of a particular objectusingacomputervisionsystem,ortoconversewithadigitalassistantviaaspeech recognition interface. all of these applications are powered by deep learning. as the title suggests, this book aims to help a reader new to this field understand the principles behind deep learning. the book is neither terribly theoretical (there are no proofs) nor extremely practical (there is almost no code). the goal is to explain the underlying ideas; after consuming this volume, the reader will be able to apply deep learning to novel situations where there is no existing recipe for success. machinelearningmethodscancoarselybedividedintothreeareas: supervised,unsu- pervised, and reinforcement learning. at the time of writing, the cutting-edge methods in all three areas rely on deep learning (figure 1.1). this introductory chapter describes thesethreeareasatahighlevel,andthistaxonomyisalsolooselyreflectedinthebook’s organization. whether we like it or not, deep learning is poised to change our world, and this change will not all be positive. hence, this chapter also contains brief primer on ai ethics. we conclude with advice on how to make the most of this book. 1.1 supervised learning supervised learning models define a mapping from input data to an output prediction. in the following sections, we discuss the inputs, the outputs, the model itself, and what is meant by “training” a model. draft: please send errata to [email protected] 1 introduction figure 1.1 machine learning is an area of artificial intelligence that fits math- ematical models to observed data. it can coarsely be divided into supervised learning, unsupervised learning, and re- inforcement learning. deep neural net- works contribute to each of these areas. 1.1.1 regression and classification problems figure 1.2 depicts several regression and classification problems. in each case, there is a meaningfulreal-worldinput(asentence,asoundfile,animage,etc.),andthisisencoded asavectorofnumbers. thisvectorformsthemodelinput. themodelmapstheinputto an output vector which is then “translated” back to a meaningful real-world prediction. for now, we focus on the inputs and outputs and treat the model as a black box that ingests a vector of numbers and returns another vector of numbers. the model in figure 1.2a predicts the price of a house based on input characteristics such as the square footage and the number of bedrooms. this is a regression problem becausethemodelreturnsacontinuousnumber(ratherthanacategoryassignment). in contrast, the model in 1.2b takes the chemical structure of a molecule as an input and predicts both the melting and boiling points. this is a multivariate regression problem since it predicts more than one number. themodelinfigure1.2creceivesatextstringcontainingarestaurantreviewasinput and predicts whether the review is positive or negative. this is a binary classification problem because the model attempts to assign the input to one of two categories. the output vector contains the probabilities that the input belongs to each category. fig- ures 1.2d and 1.2e depict multiclass classification problems. here, the model assigns the input to one of n >2 categories. in the first case, the input is an audio file, and the model predicts which genre of music it contains. in the second case, the input is an image, and the model predicts |
a creative commons cc-by-nc-nd license. (c) mit press.8.3 reducing error 125 8.3.1 reducing variance recall that the variance results from limited noisy training data. fitting the model to two different training sets results in slightly different parameters. it follows we can reduce the variance by increasing the quantity of training data. this averages out the inherent noise and ensures that the input space is well sampled. figure 8.6 shows the effect of training with 6, 10, and 100 samples. for each dataset size, we show the best-fitting model for three training datasets. with only six samples, thefittedfunctionisquitedifferenteachtime: thevarianceissignificant. asweincrease thenumberofsamples, thefittedmodelsbecomeverysimilar, andthevariancereduces. in general, adding training data almost always improves test performance. 8.3.2 reducing bias the bias term results from the inability of the model to describe the true underlying function. thissuggeststhatwecanreducethiserrorbymakingthemodelmoreflexible. this is usually done by increasing the model capacity. for neural networks, this means adding more hidden units and/or hidden layers. in the simplified model, adding capacity corresponds to adding more hidden units so that the interval [0,1] is divided into more linear regions. figures 8.7a–c show that (unsurprisingly) this does indeed reduce the bias; as we increase the number of linear regions to ten, the model becomes flexible enough to fit the true function closely. 8.3.3 bias-variance trade-off however, figures 8.7d–f show an unexpected side-effect of increasing the model capac- ity. for a fixed-size training dataset, the variance term increases as the model capacity increases. consequently, increasing the model capacity does not necessarily reduce the test error. this is known as the bias-variance trade-off. figure8.8exploresthisphenomenon. inpanelsa–c),wefitthesimplifiedthree-region model to three differentdatasets of fifteen points. although the datasets differ, the final model is much the same; the noise in the dataset roughly averages out in each linear region. in panels d–f), we fit a model with ten regions to the same three datasets. this model has more flexibility, but this is disadvantageous; the model certainly fits the data better, and the training error will be lower, but much of the extra descriptive power is devoted to modeling the noise. this phenomenon is known as overfitting. we’ve seen that as we add capacity to the model, the bias will decrease, but the variance will increase for a fixed-size training dataset. this suggests that there is an optimal capacity where the bias is not too large and the variance is still relatively small. figure 8.9 shows how these terms vary numerically for the toy model as we increase notebook8.2 the capacity, using the data from figure 8.8. for regression models, the total expected bias-variance erroristhesumofthebiasandthevariance, andthissumisminimizedwhenthemodel trade-off capacity is four (i.e., with four hidden units and four linear regions). draft: please send errata to [email protected] 8 measuring performance figure 8.6 reducing variance by increasing training data. a–c) the three-region model fitted to three different randomly sampled datasets of six points. the fitted model is quite different each time. d) we repeat this experiment many times and plot the mean model predictions (cyan line) and the variance of the model predictions (gray area shows two standard deviations). e–h) we do the same experiment, but this time with datasets of size ten. the variance of the predictions is reduced. i–l) we repeat this experiment with datasets of size 100. now the fitted model is always similar, and the variance is small. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.8.4 double descent 127 figure 8.7 bias and variance as a function of model capacity. a–c) as we in- creasethenumberofhiddenunitsofthetoymodel,thenumberoflinearregions increases, and the model becomes able to fit the true function closely; the bias (gray region) decreases. d–f) unfortunately, increasing the model capacity has theside-effectofincreasingthevarianceterm |
(grayregion). thisisknownasthe bias-variance trade-off. 8.4 double descent in the previous section, we examined the bias-variance trade-off as we increased the capacity of a model. let’s now return to the mnist-1d dataset and see whether this happensinpractice. weuse10,000trainingexamples, testwithanother5,000examples and examine the training and test performance as we increase the capacity (number of parameters) in the model. we train the model with adam and a step size of 0.005 using a full batch of 10,000 examples for 4000 steps. figure 8.10a shows the training and test error for a neural network with two hid- den layers as the number of hidden units increases. the training error decreases as the capacity grows and quickly becomes close to zero. the vertical dashed line represents the capacity where the model has the same number of parameters as there are training examples, but the model memorizes the dataset before this point. the test error de- creasesasweaddmodelcapacitybutdoesnotincreaseaspredictedbythebias-variance trade-off curve; it keeps decreasing. in figure 8.10b, we repeat this experiment, but this time, we randomize 15% of the draft: please send errata to [email protected] 8 measuring performance figure 8.8 overfitting. a–c) a model with three regions is fit to three different datasets of fifteen points each. the result is similar in all three cases (i.e., the variance is low). d–f) a model with ten regions is fit to the same datasets. the additionalflexibilitydoesnotnecessarilyproducebetterpredictions. whilethese threemodelseachdescribethetrainingdatabetter,theyarenotnecessarilycloser to the true underlying function (black curve). instead, they overfit the data and describe the noise, and the variance (difference between fitted curves) is larger. figure 8.9 bias-variance trade-off. the bias and variance terms from equa- tion 8.7 are plotted as a function of the model capacity (number of hidden units / linear regions) in the simplified model using training data from figure 8.8. as thecapacityincreases,thebias(solidor- ange line) decreases, but the variance (solid cyan line) increases. the sum of these two terms (dashed gray line) is minimized when the capacity is four. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.8.4 double descent 129 training labels. once more, the training error decreases to zero. this time, there is more randomness, and the model requires almost as many parameters as there are data pointstomemorizethedata. thetesterrordoesshowthetypicalbias-variancetrade-off as we increase the capacity to the point where the model fits the training data exactly. however, then it does something unexpected; it starts to decrease again. indeed, if we add enough capacity, the test loss reduces to below the minimal level that we achieved in the first part of the curve. this phenomenon is known as double descent. for some datasets like mnist, it is presentwiththeoriginaldata(figure8.10c). forothers,likemnist-1dandcifar-100 (figure 8.10d), it emerges or becomes more prominent when we add noise to the labels. notebook8.3 the first part of the curve is referred to as the classical or under-parameterized regime, doubledescent andthesecondpartasthemodernorover-parameterizedregime. thecentralpartwhere the error increases is termed the critical regime. 8.4.1 explanation thediscoveryofdoubledescentisrecent,unexpected,andsomewhatpuzzling. itresults fromaninteractionoftwophenomena. first, thetestperformancebecomestemporarily worse when the model has just enough capacity to memorize the data. second, the test performance continues to improve with capacity even after the training performance is perfect. thefirstphenomenonisexactlyaspredictedbythebias-variancetrade-off. the second phenomenon is more confusing; it’s unclear why performance should be better in the over-parameterized regime, given that there are now not even enough training data points to |
constrain the model parameters uniquely. to understand why performance continues to improve as we add more parameters, note that once the model has enough capacity to drive the training loss to near zero, the model fits the training data almost perfectly. this implies that further capacity problems8.4–8.5 cannot help the model fit the training data any better; any change must occur between the training points. the tendency of a model to prioritize one solution over another as it extrapolates between data points is known as its inductive bias. the model’s behavior between data points is critical because, in high-dimensional space,thetrainingdataareextremelysparse. themnist-1ddatasethas40dimensions, and we trained with 10,000 examples. if this seems like plenty of data, consider what would happen if we quantized each input dimension into 10 bins. there would be 1040 bins in total, constrained by only 105 examples. even with this coarse quantization, there will only be one data point in every 1035 bins! the tendency of the volume of high-dimensional space to overwhelm the number of training points is termed the curse of dimensionality. theimplicationisthatproblemsinhighdimensionsmightlookmorelikefigure8.11a; there are small regions of the input space where we observe data with significant gaps between them. the putative explanation for double descent is that as we add capacity to the model, it interpolates between the nearest data points increasingly smoothly. in the absence of information about what happens between the training points, assuming smoothness is sensible and will probably generalize reasonably to new data. draft: please send errata to [email protected] 8 measuring performance figure 8.10 double descent. a) training and test loss on mnist-1d for a two- hidden layer network as we increase the number of hidden units (and hence pa- rameters) in each layer. the training loss decreases to zero as the number of parameters approaches the number of training examples (vertical dashed line). the test error does not show the expected bias-variance trade-off but continues todecreaseevenafterthemodelhasmemorizedthedataset. b)thesameexper- iment is repeated with noisier training data. again, the training error reduces to zero, although it now takes almost as many parameters as training points to memorizethedataset. thetesterrorshowsthepredictedbias/variancetrade-off; itdecreasesasthecapacityincreasesbutthenincreasesagainaswenearthepoint wherethetrainingdataisexactlymemorized. however,itsubsequentlydecreases againandultimatelyreachesabetterperformancelevel. thisisknownasdouble descent. depending on the loss, the model, and the amount of noise in the data, the double descent pattern can be seen to a greater or lesser degree across many datasets. c)resultsonmnist(withoutlabelnoise)withshallowneuralnetwork from belkin et al. (2019). d) results on cifar-100 with resnet18 network (see chapter 11) from nakkiran et al. (2021). see original papers for details. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.8.4 double descent 131 figure 8.11increasingcapacity(hiddenunits)allowssmootherinterpolationbe- tween sparse data points. a) consider this situation where the training data (orange circles) are sparse; there is a large region in the center with no data ex- amplestoconstrainthemodeltomimicthetruefunction(blackcurve). b)ifwe fitamodelwithjustenoughcapacitytofitthetrainingdata(cyancurve),thenit hastocontortitselftopassthroughthetrainingdata,andtheoutputpredictions will not be smooth. c–f) however, as we add more hidden units, the model has the ability to interpolate between the points more smoothly (smoothest possible curve plotted in each case). however, unlike in this figure, it is not obliged to. this argument is plausible. it’s certainly true that as we add more capacity to the model,itwillhavethecapabilitytocreatesmootherfunctions. figures8.11b–fshowthe smoothest possible functions that still pass through the data points as we increase the number of hidden units. when the number of parameters is very close to |
the number of training data examples (figure 8.11b), the model is forced to contort itself to fit the training data exactly, resulting in erratic predictions. this explains why the peak in the doubledescentcurveissopronounced. asweaddmorehiddenunits, themodelhasthe ability to construct smoother functions that are likely to generalize better to new data. however,thisdoesnotexplainwhyover-parameterizedmodelsshouldproducesmooth functions. figure8.12showsthreefunctionsthatcanbecreatedbythesimplifiedmodel with 50 hidden units. in each case, the model fits the data exactly, so the loss is zero. if the modern regime of double descent is explained by increasing smoothness, then what exactly is encouraging this smoothness? draft: please send errata to [email protected] 8 measuring performance figure 8.12 regularization. a–c) each of the three fitted curves passes through the data points exactly, so the training loss for each is zero. however, we might expectthesmoothcurveinpanel(a)togeneralizemuchbettertonewdatathan the erratic curves in panels (b) and (c). any factor that biases a model toward a subset of the solutions with a similar training loss is known as a regularizer. it is thought that the initialization and/or fitting of neural networks have an implicitregularizingeffect. consequently,intheover-parameterizedregime,more reasonable solutions, such as that in panel (a), are encouraged. the answer to this question is uncertain, but there are two likely possibilities. first, thenetworkinitializationmayencouragesmoothness, andthemodelneverdepartsfrom the sub-domain of smooth function during the training process. second, the training algorithm may somehow “prefer” to converge to smooth functions. any factor that biasesasolutiontowardasubsetofequivalentsolutionsisknownasaregularizer,soone possibility is that the training algorithm acts as an implicit regularizer (see section 9.2). 8.5 choosing hyperparameters in the previous section, we discussed how test performance changes with model capac- ity. unfortunately,intheclassicalregime, wedon’thaveaccesstoeitherthebias(which requiresknowledgeofthetrueunderlyingfunction)orthevariance(whichrequiresmul- tiple independently sampled datasets to estimate). in the modern regime, there is no way to tell how much capacity should be added before the test error stops improving. this raises the question of exactly how we should choose model capacity in practice. for a deep network, the model capacity depends on the numbers of hidden layers and hidden units per layer as well as other aspects of architecture that we have yet to introduce. furthermore, the choiceof learning algorithm and anyassociated parameters (learning rate, etc.) also affects the test performance. these elements are collectively termed hyperparameters. the process of finding the best hyperparameters is termed hyperparametersearchor(whenfocusedonnetworkstructure)neuralarchitecturesearch. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.8.6 summary 133 hyperparametersaretypicallychosenempirically; wetrainmanymodelswithdiffer- enthyperparametersonthesametrainingset,measuretheirperformance,andretainthe best model. however, we do not measure their performance on the test set; this would admit the possibility that these hyperparameters just happen to work well for the test set but don’t generalize to further data. instead, we introduce a third dataset known as a validation set. for every choice of hyperparameters, we train the associated model using the training set and evaluate performance on the validation set. finally, we select the model that worked best on the validation set and measure its performance on the test set. in principle, this should give a reasonable estimate of the true performance. the hyperparameter space is generally smaller than the parameter space but still too large to try every combination exhaustively. unfortunately, many hyperparameters are discrete (e.g., the number of hidden layers), and others may be conditional on one another (e.g., we only need to specify the number of hidden units in the tenth hidden layeriftherearetenormorelayers). hence,wecannotrelyongradientdescentmethods as we did for learning the model parameters. hyperparameter optimization algorithms intelligently sample the space of hyper |
parameters, contingent on previous results. this procedureiscomputationallyexpensivesincewemusttrainanentiremodelandmeasure the validation performance for each combination of hyperparameters. 8.6 summary tomeasureperformance,weuseaseparatetestset. thedegreetowhichperformanceis maintained on this test set is known as generalization. test errors can be explained by threefactors: noise,bias,andvariance. thesecombineadditivelyinregressionproblems with least squares losses. adding training data decreases the variance. when the model capacity is less than the number of training examples, increasing the capacity decreases bias but increases variance. this is known as the bias-variance trade-off, and there is a capacity where the trade-off is optimal. however, this is balanced against a tendency for performance to improve with ca- pacity, even when the parameters exceed the training examples. together, these two phenomena create the double descent curve. it is thought that the model interpolates more smoothly between the training data points in the over-parameterized “modern regime,”althoughitisunclearwhatdrivesthis. tochoosethecapacityandothermodel and training algorithm hyperparameters, we fit multiple models and evaluate their per- formance using a separate validation set. notes bias-variance trade-off: we showed that the test error for regression problems with least squares loss decomposes into the sum of noise, bias, and variance terms. these factors are all present for models with other losses, but their interaction is typically more complicated (friedman,1997;domingos,2000). forclassificationproblems,therearesomecounter-intuitive draft: please send errata to [email protected] 8 measuring performance predictions; for example, if the model is biased toward selecting the wrong class in a region of the input space, then increasing the variance can improve the classification rate as this pushes some of the predictions over the threshold to be classified correctly. cross-validation: we saw that it is typical to divide the data into three parts: training data (which is used to learn the model parameters), validation data (which is used to choose the hyperparameters), and test data (which is used to estimate the final performance). this approach is known as cross-validation. however, this division may cause problems where the total number of data examples is limited; if the number of training examples is comparable to the model capacity, then the variance will be large. one way to mitigate this problem is to use k-fold cross-validation. the training and validation data are partitioned into k disjoint subsets. for example, we might divide these data into five parts. we train with four and validate with the fifth for each of the five permutations and choose the hyperparameters based on the average validation performance. the final test performanceisassessedusingtheaverageofthepredictionsfromthefivemodelswiththebest hyperparameters on an entirely different test set. there are many variations of this idea, but all share the general goal of using a larger proportion of the data to train the model, thereby reducing variance. capacity: we have used the term capacity informally to mean the number of parameters or hidden units in the model (and hence indirectly, the ability of the model to fit functions of increasingcomplexity). therepresentationalcapacityofamodeldescribesthespaceofpossible functions it can construct when we consider all possible parameter values. when we take into accountthefactthatanoptimizationalgorithmmaynotbeabletoreachallofthesesolutions, what is left is the effective capacity. the vapnik-chervonenkis (vc) dimension (vapnik & chervonenkis, 1971) is a more formal measure of capacity. it is the largest number of training examples that a binary classifier can label arbitrarily. bartlett et al. (2019) derive upper and lower bounds for the vc dimension in termsofthenumberoflayersandweights. analternativemeasureofcapacityistherademacher complexity,whichistheexpectedempiricalperformanceofaclassificationmodel(withoptimal parameters) for data with random labels. neyshabur et al. (2017) derive a lower bound on the generalization error in terms of the rademacher complexity. double descent: theterm“doubledescent”wascoinedbybelkinetal.(2019),whodemon- stratedthatthetesterrordecreasesagaininthe |
over-parameterizedregimefortwo-layerneural networks and random features. they also claimed that this occurs in decision trees, although buschjäger & morik (2021) subsequently provided evidence to the contrary. nakkiran et al. (2021) show that double descent occurs for various modern datasets (cifar-10, cifar-100, iwslt’14de-en),architectures(cnns,resnets,transformers),andoptimizers(sgd,adam). thephenomenonismorepronouncedwhennoiseisaddedtothetargetlabels(nakkiranetal., 2021) and when some regularization techniques are used (ishida et al., 2020). nakkiranetal.(2021)alsoprovideempiricalevidencethattestperformancedependsoneffective modelcapacity(thelargestnumberofsamplesforwhichagivenmodelandtrainingmethodcan achievezerotrainingerror). atthispoint,themodelstartstodevoteitseffortstointerpolating smoothly. assuch,thetestperformancedependsnotjustonthemodelbutalsoonthetraining algorithmandlengthoftraining. theyobservethesamepatternwhentheystudyamodelwith fixedcapacityandincreasethenumberoftrainingiterations. theytermthisepoch-wisedouble descent. this phenomenon has been modeled by pezeshki et al. (2022) in terms of different features in the model being learned at different speeds. double descent makes the rather strange prediction that adding training data can sometimes worsentestperformance. consideranover-parameterizedmodelintheseconddescendingpart this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 135 of the curve. if we increase the training data to match the model capacity, we will now be in the critical region of the new test error curve, and the test loss may increase. bubeck&sellke(2021)provethatoverparameterizationisnecessarytointerpolatedatasmoothly in high dimensions. they demonstrate a trade-off between the number of parameters and the appendixb.1.1 lipschitz constant of a model (the fastest the output can change for a small input change). a lipschitzconstant review of the theory of over-parameterized machine learning can be found in dar et al. (2021). curseofdimensionality: asdimensionalityincreases,thevolumeofspacegrowssofastthat the amount of data needed to densely sample it increases exponentially. this phenomenon is knownasthecurseofdimensionality. high-dimensionalspacehasmanyunexpectedproperties, and caution should be used when trying to reason about it based on low-dimensional exam- ples. this book visualizes many aspects of deep learning in one or two dimensions, but these visualizations should be treated with healthy skepticism. surprisingpropertiesofhigh-dimensionalspacesinclude: (i)tworandomlysampleddatapoints from a standard normal distribution are very close to orthogonal to one another (relative to problems8.6–8.9 the origin) with high likelihood. (ii) the distance from the origin of samples from a standard normal distribution is roughly constant. (iii) most of a volume of a high-dimensional sphere (hypersphere)isadjacenttoitssurface(acommonmetaphoristhatmostofthevolumeofahigh- dimensionalorangeisinthepeel,notinthepulp). (iv)ifweplaceaunit-diameterhypersphere insideahypercubewithunit-lengthsides,thenthehyperspheretakesupadecreasingproportion of the volume of the cube as the dimension increases. since the volume of the cube is fixed at notebook8.4 sizeone,thisimpliesthatthevolumeofahigh-dimensionalhyperspherebecomesclosetozero. high-dimensional (v)forrandompointsdrawnfromauniformdistributioninahigh-dimensionalhypercube,the spaces ratio of the euclidean distance between the nearest and furthest points becomes close to one. for further information, consult beyer et al. (1999) and aggarwal et al. (2001). real-worldperformance: inthischapter,wearguedthatmodelperformancecouldbeevalu- atedusingaheld-outtestset. |
however,theresultwon’tbeindicativeofreal-worldperformance if the statistics of the test set don’t match those of real-world data. moreover, the statistics of real-world data may change over time, causing the model to become increasingly stale and performancetodecrease. thisisknownasdata driftandmeansthatdeployedmodelsmustbe carefully monitored. there are three main reasons why real-world performance may be worse than the test perfor- mance implies. first, the statistics of the input data x may change; we may now be observing parts of the function that were sparsely sampled or not sampled at all during training. this is known as covariate shift. second, the statistics of the output data y may change; if some output values are infrequent during training, then the model may learn not to predict these in ambiguous situations and will make mistakes if they are more common in the real world. this is known as prior shift. third, the relationship between input and output may change. this is known as concept shift. these issues are discussed in moreno-torres et al. (2012). hyperparameter search: finding the best hyperparameters is a challenging optimization task. testing a single configuration of hyperparameters is expensive; we must train an entire model and measure its performance. we have no easy way to access the derivatives (i.e., how performance changes when we make a small change to a hyperparameter). moreover, many of thehyperparametersarediscrete,sowecannotusegradientdescentmethods. therearemultiple local minima and no way to tell if we are close to the global minimum. the noise level is high since each training/validation cycle uses a stochastic training algorithm; we expect different results if we train a model twice with the same hyperparameters. finally, some variables are conditional and only exist if others are set. for example, the number of hidden units in the third hidden layer is only relevant if we have at least three hidden layers. draft: please send errata to [email protected] 8 measuring performance a simple approach is to sample the space randomly (bergstra & bengio, 2012). however, for continuous variables, it is better to build a model of performance as a function of the hyperparameters and the uncertainty in this function. this can be exploited to test where the uncertaintyisgreat(explorethespace)orhomeinonregionswhereperformancelookspromising (exploitpreviousknowledge). bayesianoptimizationisaframeworkbasedongaussianprocesses that does just this, and its application to hyperparameter search is described in snoek et al. (2012). the beta-bernoulli bandit (see lattimore & szepesvári, 2020) is a roughly equivalent model for describing uncertainty in results due to discrete variables. thesequentialmodel-basedconfiguration(smac)algorithm(hutteretal.,2011)cancopewith continuous,discrete,andconditionalparameters. thebasicapproachistousearandomforest to model the objective function where the mean of the tree predictions is the best guess about the objective function, and their variance represents the uncertainty. a completely different approachthatcanalsocopewithcombinationsofcontinuous,discrete,andconditionalparam- eters is tree-parzen estimators (bergstra et al., 2011). the previous methods modeled the probability of the model performance given the hyperparameters. in contrast, the tree-parzen estimator models the probability of the hyperparameters given the model performance. hyperband(lietal.,2017b)isamulti-armedbanditstrategyforhyperparameteroptimization. itassumesthattherearecomputationallycheapbutapproximatewaystomeasureperformance (e.g., by not training to completion) and that these can be associated with a budget (e.g., by trainingforafixednumberofiterations). anumberofrandomconfigurationsaresampledand run until the budget is used up. then the best fraction η of runs is kept, and the budget is multiplied by 1/η. this is repeated until the maximum budget is reached. this approach has theadvantageofefficiency;forbadconfigurations,itdoesnotneedtoruntheexperimenttothe end. however, each sample is just chosen randomly, which is inefficient. the bohb algorithm (falkner et al., 2018) combines the eff |
iciency of hyperband with the more sensible choice of hyperparameters from tree parzen estimators to construct an even better method. problems problem8.1willthemulticlasscross-entropytraininglossinfigure8.2everreachzero? explain your reasoning. problem 8.2 whatvaluesshouldwechooseforthethreeweightsandbiasesinthefirstlayerof the model in figure 8.4a so that the hidden unit’s responses are as depicted in figures 8.4b–d? problem 8.3∗ given a training dataset consisting of i input/output pairs {x ,y }, show how i i the parameters {β,ω ,ω ,ω } for the model in figure 8.4a using the least squares loss function 1 2 3 can be found in closed form. problem 8.4 consider the curve in figure 8.10b at the point where we train a model with a hiddenlayerofsize200,whichwouldhave50,410parameters. whatdoyoupredictwillhappen tothetrainingandtestperformanceifweincreasethenumberoftrainingexamplesfrom10,000 to 50,410? problem 8.5 consider the case where the model capacity exceeds the number of training data points, and the model is flexible enough to reduce the training loss to zero. what are the implications of this for fitting a heteroscedastic model? propose a method to resolve any problems that you identify. problem 8.6 show that two random points drawn from a 1000-dimensional standard gaussian distribution are orthogonal relative to the origin with high probability. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press. |
notes 137 figure 8.13 typical sets. a) standard normal distribution in two dimensions. circles are four samples from this distribution. as the distance from the cen- ter increases, the probability decreases, but the volume of space at that radius (i.e., the area between adjacent evenly spaced circles) increases. b) these fac- tors trade off so that the histogram of distances of samples from the center has a pronounced peak. c) in higher dimensions, this effect becomes more extreme, andtheprobabilityofobservingasampleclosetothemeanbecomesvanishingly small. although the most likely point is at the mean of the distribution, the typical samples are found in a relatively narrow shell. problem 8.7 the volume of a hypersphere with radius r in d dimensions is: rdπd/2 vol[r]= , (8.8) γ[d/2+1] appendixb.1.3 where γ[•] is the gamma function. show using stirling’s formula that the volume of a hyper- gammafunction sphere of diameter one (radius r=0.5) becomes zero as the dimension increases. appendixb.1.4 problem 8.8∗ consider a hypersphere of radius r = 1. find an expression for the proportion stirling’sformula of the total volume that lies in the outermost 1% of the distance from the center (i.e., in the outermost shell of thickness 0.01). show that this becomes one as the dimension increases. problem 8.9 figure 8.13c shows the distribution of distances of samples of a standard normal distribution as the dimension increases. empirically verify this finding by sampling from the standard normal distributions in 25, 100, and 500 dimensions and plotting a histogram of the distancesfromthecenter. whatclosed-formprobabilitydistributiondescribesthesedistances? draft: please send errata to [email protected] 9 regularization chapter 8 described how to measure model performance and identified that there could beasignificantperformancegapbetweenthetrainingandtestdata. possiblereasonsfor this discrepancy include: (i) the model describes statistical peculiarities of the training data that are not representative of the true mapping from input to output (overfitting), and (ii) the model is unconstrained in areas with no training examples, leading to sub- optimal predictions. this chapter discusses regularization techniques. these are a family of methods that reduce the generalization gap between training and test performance. strictly speaking, regularization involves adding explicit terms to the loss function that favor certain pa- rameter choices. however, in machine learning, this term is commonly used to refer to any strategy that improves generalization. we start by considering regularization in its strictest sense. then we show how the stochastic gradient descent algorithm itself favors certain solutions. this is known as implicit regularization. following this, we consider a set of heuristic methods that improve test performance. these include early stopping, ensembling, dropout, label smoothing, and transfer learning. 9.1 explicit regularization consider fitting a model f[x,ϕ] with parameters ϕ using a training set {x ,y } of in- i i put/output pairs. we seek the minimum of the loss function l[ϕ]: (cid:2) (cid:3) ϕˆ = argmin l[ϕ] ϕ " # xi = argmin ℓ [x ,y ] , (9.1) i i i ϕ i=1 where the individual terms ℓ [x ,y ] measure the mismatch between the network pre- i i i dictions f[x ,ϕ] and output targets y for each training pair. to bias this minimization i i this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.9.1 explicit regularization 139 figure 9.1 explicit regularization. a) loss function for gabor model (see sec- tion6.1.2). cyancirclesrepresentlocalminima. graycirclerepresentstheglobal minimum. b)theregularizationtermfavorsparametersclosetothecenterofthe plot by adding an increasing penalty as we move away from this point. c) the final loss function is the sum of the original loss function plus the regularization term. this surface has fewer local minima, and the |
global minimum has moved to a different position (arrow shows change). toward certain solutions, we include an additional term: " # xi ϕˆ =argmin ℓ [x ,y ]+λ·g[ϕ] , (9.2) i i i ϕ i=1 where g[ϕ] is a function that returns a scalar that takes a larger value when the pa- rameters are less preferred. the term λ is a positive scalar that controls the relative contribution of the original loss function and the regularization term. the minima of the regularized loss function usually differ from those in the original, so the training procedure converges to different parameter values (figure 9.1). 9.1.1 probabilistic interpretation regularization can be viewed from a probabilistic perspective. section 5.1 shows how loss functions are constructed from the maximum likelihood criterion: " # yi ϕˆ =argmax pr(y |x ,ϕ) . (9.3) i i ϕ i=1 the regularization term can be considered as a prior pr(ϕ) that represents knowledge about the parameters before we observe the data and we now have the maximum a posteriori or map criterion: draft: please send errata to [email protected] 9 regularization " # yi ϕˆ =argmax pr(y |x ,ϕ)pr(ϕ) . (9.4) i i ϕ i=1 movingbacktothenegativelog-likelihoodlossfunctionbytakingthelogandmultiplying by minus one, we see that λ·g[ϕ]=−log[pr(ϕ)]. 9.1.2 l2 regularization this discussion has sidestepped the question of which solutions the regularization term should penalize (or equivalently that the prior should favor). since neural networks are used in an extremely broad range of applications, these can only be very generic pref- erences. the most commonly used regularization term is the l2 norm, which penalizes the sum of the squares of the parameter values: 2 3 xi x ϕˆ =argmin4 ℓ [x ,y ]+λ ϕ25, (9.5) i i i j ϕ i=1 j where j indexes the parameters. this is also referred to as tikhonov regularization or problems9.1–9.2 ridge regression, or (when applied to matrices) frobenius norm regularization. for neural networks, l2 regularization is usually applied to the weights but not the biases and is hence referred to as a weight decay term. the effect is to encourage smaller weights, so the output function is smoother. to see this, consider that the output prediction is a weighted sum of the activations at the last hidden layer. if the notebook9.1 weights have a smaller magnitude, the output will vary less. the same logic applies to l2regularization the computation of the pre-activations at the last hidden layer and so on, progressing backward through the network. in the limit, if we forced all the weights to be zero, the network would produce a constant output determined by the final bias parameter. figure9.2showstheeffectoffittingthesimplifiednetworkfromfigure8.4withweight decay and different values of the regularization coefficient λ. when λ is small, it has little effect. however, as λ increases, the fit to the data becomes less accurate, and the function becomes smoother. this might improve the test performance for two reasons: • if the network is overfitting, then adding the regularization term means that the network must trade off slavish adherence to the data against the desire to be smooth. onewaytothinkaboutthisisthattheerrorduetovariancereduces(the model no longer needs to pass through every data point) at the cost of increased bias (the model can only describe smooth functions). • when the network is over-parameterized, some of the extra model capacity de- scribes areas with no training data. here, the regularization term will favor func- tions that smoothly interpolate between the nearby points. this is reasonable behavior in the absence of knowledge about the true function. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.9.2 implicit regularization 141 figure 9. |
which object it contains. in each case, the model returns a vector of size n that contains the probabilities of the n categories. 1.1.2 inputs the input data in figure 1.2 varies widely. in the house pricing example, the input is a fixed-length vector containing values that characterize the property. this is an example of tabular data because it has no internal structure; if we change the order of the inputs and build a new model, then we expect the model prediction to remain the same. conversely, the input in the restaurant review example is a body of text. this may be of variable length depending on the number of words in the review, and here input this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.1.1 supervised learning 3 figure1.2regressionandclassificationproblems. a)thisregressionmodeltakes a vector of numbers that characterize a property and predicts its price. b) this multivariate regression model takes the structure of a chemical molecule and predictsitsmeltingandboilingpoints. c)thisbinaryclassificationmodeltakesa restaurantreviewandclassifiesitaseitherpositiveornegative. d)thismulticlass classificationproblemassignsasnippetofaudiotooneofn genres. e)asecond multiclassclassificationprobleminwhichthemodelclassifiesanimageaccording to which of n possible objects it might contain. draft: please send errata to [email protected] 1 introduction figure1.3machinelearningmodel. themodelrepresentsafamilyofrelationships thatrelatetheinput(ageofchild)totheoutput(heightofchild). theparticular relationship is chosen using training data, which consists of input/output pairs (orange points). when we train the model, we search through the possible re- lationships for one that describes the data well. here, the trained model is the cyan curve and can be used to compute the height for any age. order is important; my wife ate the chicken is not the same as the chicken ate my wife. the text must be encoded into numerical form before passing it to the model. here, we use a fixed vocabulary of size 10,000 and simply concatenate the word indices. forthemusicclassificationexample, theinputvectormightbeoffixedsize(perhaps a 10-second clip) but is very high-dimensional. digital audio is usually sampled at 44.1 khzandrepresentedby16-bitintegers, soaten-secondclipconsistsof441,000integers. clearly, supervised learning models will have to be able to process sizeable inputs. the inputintheimageclassificationexample(whichconsistsoftheconcatenatedrgbvalues at every pixel) is also enormous. moreover, its structure is naturally two-dimensional; twopixelsaboveandbelowoneanotherarecloselyrelated, eveniftheyarenotadjacent in the input vector. finally,considertheinputforthemodelthatpredictsthemeltingandboilingpoints ofthemolecule. amoleculemaycontainvaryingnumbersofatomsthatcanbeconnected in different ways. in this case, the model must ingest both the geometric structure of the molecule and the constituent atoms to the model. 1.1.3 machine learning models untilnow,wehavetreatedthemachinelearningmodelasablackboxthattakesaninput vector and returns an output vector. but what exactly is in this black box? consider a model to predict the height of a child from their age (figure 1.3). the machine learning this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.1.1 supervised learning 5 model is a mathematical equation that describes how the average height varies as a function of age (cyan curve in figure 1.3). when we run the age through this equation, itreturnstheheight. forexample, iftheageis10years, thenwepredictthattheheight will be 139 cm. more precisely, the model represents a family of equations mapping the input to the output (i.e., a family of different cyan curves). the particular equation (curve) is chosen using training data (examples of input/output pairs). in figure 1.3, these pairs arerepresentedbytheorangepoints,andwecanseethatthemodel(cyanline)describes thesedatareasonably. whenwetalkabouttrainingorfittingamodel, wemeanthatwe |
2 l2 regularization in simplified network (see figure 8.4). a–f) fitted functions as we increase the regularization coefficient λ. the black curve is the true function, the orange circles are the noisy training data, and the cyan curve is the fitted model. for small λ (panels a–b), the fitted function passes exactly throughthedatapoints. forintermediateλ(panelsc–d),thefunctionissmoother andmoresimilartothegroundtruth. forlargeλ(panelse–f),thefittedfunction is smoother than the ground truth, so the fit is worse. 9.2 implicit regularization an intriguing recent finding is that neither gradient descent nor stochastic gradient descent moves neutrally to the minimum of the loss function; each exhibits a preference for some solutions over others. this is known as implicit regularization. 9.2.1 implicit regularization in gradient descent consider a continuous version of gradient descent where the step size is infinitesimal. the change in parameters ϕ will be governed by the differential equation: ∂ϕ ∂l =− . (9.6) ∂t ∂ϕ gradient descent approximates this process with a series of discrete steps of size α: draft: please send errata to [email protected] 9 regularization figure9.3implicitregularizationingradientdescent. a)lossfunctionwithfamily ofglobalminimaonhorizontallineϕ =0.61. dashedbluelineshowscontinuous 1 gradient descent path starting in bottom-left. cyan trajectory shows discrete gradient descent with step size 0.1 (first few steps shown explicitly as arrows). thefinitestepsizecausesthepathstodivergeandreachadifferentfinalposition. b) this disparity can be approximated by adding a regularization term to the continuous gradient descent loss function that penalizes the squared gradient magnitude. c) after adding this term, the continuous gradient descent path converges to the same place that the discrete one did on the original function. ∂l[ϕ ] ϕ =ϕ −α t , (9.7) t+1 t ∂ϕ the discretization causes a deviation from the continuous path (figure 9.3). this deviation can be understood by deriving a modified loss term l˜ for the contin- uous case that arrives at the same place as the discretized version on the original loss l. it can be shown (see end of chapter) that this modified loss is: (cid:13) (cid:13) l˜gd[ϕ]=l[ϕ]+ α4 (cid:13)(cid:13)(cid:13)∂∂ϕl(cid:13)(cid:13)(cid:13)2. (9.8) in other words, the discrete trajectory is repelled from places where the gradient norm is large (the surface is steep). this doesn’t change the position of the minima where the gradients are zero anyway. however, it changes the effective loss function elsewhere and modifiestheoptimizationtrajectory,whichpotentiallyconvergestoadifferentminimum. implicit regularization due to gradient descent may be responsible for the observation that full batch gradient descent generalizes better with larger step sizes (figure 9.5a). this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.9.2 implicit regularization 143 figure 9.4 implicit regularization for stochastic gradient descent. a) original loss function for gabor model (section 6.1.2). b) implicit regularization term from gradient descent penalizes the squared gradient magnitude. c) additional implicit regularization from stochastic gradient descent penalizes the variance of the batch gradients. d) modified loss function (sum of original loss plus two implicit regularization components). draft: please send errata to [email protected] 9 regularization 9.2.2 implicit regularization in stochastic gradient descent asimilaranalysiscanbeappliedtostochasticgradientdescent. nowweseekamodified loss function such that the continuous version reaches the same place as the average of the possible random sgd updates. this can be shown to be: (cid:13) (cid:13) l˜sgd[ϕ] = l˜gd[� |
�]+ 4αb xb (cid:13)(cid:13)(cid:13)∂∂lϕb − ∂∂ϕl(cid:13)(cid:13)(cid:13)2 b=1 (cid:13) (cid:13) (cid:13) (cid:13) = l[ϕ]+ α(cid:13)(cid:13)(cid:13)∂l(cid:13)(cid:13)(cid:13)2+ α xb (cid:13)(cid:13)(cid:13)∂lb − ∂l(cid:13)(cid:13)(cid:13)2. (9.9) 4 ∂ϕ 4b ∂ϕ ∂ϕ b=1 here, l is the loss for the bth of the b batches in an epoch, and both l and l now b b represent the means of the i individual losses in the full dataset and the |b| individual losses in the batch, respectively: xi x 1 1 l= ℓ [x ,y ] and l = ℓ [x ,y ]. (9.10) i i i i b |b| i i i i=1 i∈bb equation 9.9 reveals an extra regularization term, which corresponds to the variance of the gradients of the batch losses l . in other words, sgd implicitly favors places b wherethegradientsarestable(whereallthebatchesagreeontheslope). oncemore,this modifies the trajectory of the optimization process (figure 9.4) but does not necessarily change the position of the global minimum; if the model is over-parameterized, then it may fit all the training data exactly, so all of these gradient terms will all be zero at the global minimum. sgd generalizes better than gradient descent, and smaller batch sizes generally per- form better than larger ones (figure 9.5b). one possible explanation is that the inherent randomness allows the algorithm to reach different parts of the loss function. however, notebook9.2 it’s also possible that some or all of this performance increase is due to implicit regular- implicit regularization ization; this encourages solutions where all the data fits well (so the batch variance is small)ratherthansolutionswheresomeofthedatafitextremelywellandotherdataless well (perhaps with the same overall loss, but with larger batch variance). the former solutions are likely to generalize better. 9.3 heuristics to improve performance we’ve seen that adding explicit regularization terms encourages the training algorithm to find a good solution by adding extra terms to the loss function. this also occurs implicitly as an unintended (but seemingly helpful) byproduct of stochastic gradient descent. this section describes other heuristic methods used to improve generalization. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.9.3 heuristics to improve performance 145 figure 9.5 effect of learning rate and batch size for 4000 training and 4000 test examples from mnist-1d (see figure 8.1) for a neural network with two hidden layers. a) performance is better for large learning rates than for intermediate or small ones. in each case, the number of iterations is 6000× the learning rate, so each solution has the opportunity to move the same distance. b) performance is superiorforsmallerbatchsizes. ineachcase,thenumberofiterationswaschosen so that the training data were memorized at roughly the same model capacity. 9.3.1 early stopping early stopping refers to stopping the training procedure before it has fully converged. this can reduce overfitting if the model has already captured the coarse shape of the underlying function but has not yet had time to overfit to the noise (figure 9.6). one way of thinking about this is that since the weights are initialized to small values (see section7.5),theysimplydon’thavetimetobecomelarge,soearlystoppinghasasimilar effect to explicit l2 regularization. a different view is that early stopping reduces the effectivemodelcomplexity. hence,wemovebackdownthebias/variancetrade-offcurve from the critical region, and performance improves (see figures 8.9 and 8.10). earlyst |
oppinghasasinglehyperparameter,thenumberofstepsafterwhichlearning is terminated. as usual, this is chosen empirically using a validation set (section 8.5). however, for early stopping, the hyperparameter can be selected without the need to train multiple models. the model is trained once, the performance on the validation set ismonitoredeveryt iterations,andtheassociatedmodelsarestored. thestoredmodel where the validation performance was best is selected. 9.3.2 ensembling another approach to reducing the generalization gap between training and test data is to build several models and average their predictions. a group of such models is known draft: please send errata to [email protected] 9 regularization figure 9.6 early stopping. a) simplified shallow network model with 14 linear regions (figure 8.4) is initialized randomly (cyan curve) and trained with sgd using a batch size of five and a learning rate of 0.05. b–d) as training proceeds, thefunctionfirstcapturesthecoarsestructureofthetruefunction(blackcurve) before e–f) overfitting to the noisy training data (orange points). although the traininglosscontinuestodecreasethroughoutthisprocess,thelearnedmodelsin panels(c)and(d)areclosesttothetrueunderlyingfunction. theywillgeneralize better on average to test data than those in panels (e) or (f). asanensemble. thistechniquereliablyimprovestestperformanceatthecostoftraining and storing multiple models and performing inference multiple times. the models can be combined by taking the mean of the outputs (for regression problems) or the mean of the pre-softmax activations (for classification problems). the assumption is that model errors are independent and will cancel out. alternatively, we can take the median of the outputs (for regression problems) or the most frequent predicted class (for classification problems) to make the predictions more robust. onewaytotraindifferentmodelsisjusttousedifferentrandominitializations. this may help in regions of input space far from the training data. here, the fitted function notebook9.3 is relatively unconstrained, and different models may produce different predictions, so ensembling the average of several models may generalize better than any single model. a second approach is to generate several different datasets by re-sampling the train- ing data with replacement and training a different model from each. this is known as bootstrap aggregating or bagging for short (figure 9.7). it has the effect of smoothing out the data; if a data point is not present in one training set, the model will interpo- this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.9.3 heuristics to improve performance 147 figure 9.7 ensemble methods. a) fitting a single model (gray curve) to the entiredataset(orangepoints). b–e)fourmodelscreatedbyre-samplingthedata with replacement (bagging) four times (size of orange point indicates number of timesthedatapointwasre-sampled). f)whenweaveragethepredictionsofthis ensemble, the result (cyan curve) is smoother than the result from panel (a) for the full dataset (gray curve) and will probably generalize better. late from nearby points; hence, if that point was an outlier, the fitted function will be more moderate in this region. other approaches include training models with different hyperparameters or training completely different families of models. 9.3.3 dropout dropout randomly clamps a subset (typically 50%) of hidden units to zero at each iter- ation of sgd (figure 9.8). this makes the network less dependent on any given hidden unit and encourages the weights to have smaller magnitudes so that the change in the function due to the presence or absence of the hidden unit is reduced. this technique has the positive benefit that it can eliminate undesirable “kinks” in the function that are far from the training data and don’t affect the loss. for example, consider three hidden units that become active sequentially as we move along the curve (figure9.9a). thefirsthiddenunitcausesalargeincreaseintheslope. asecondhidden draft: please send errata to [email protected] 9 regularization figure 9.8 dropout. a) original network. b–d) at each training iteration |
, a random subset of hidden units is clamped to zero (gray nodes). the result is thattheincomingandoutgoingweightsfromtheseunitshavenoeffect,soweare training with a slightly different network each time. unit decreases the slope, so the function goes back down. finally, the third unit cancels out this decrease and returns the curve to its original trajectory. these three units conspire to make an undesirable local change in the function. this will not change the training loss but is unlikely to generalize well. whenseveralunitsconspireinthisway,eliminatingone(aswouldhappenindropout) causes a considerable change to the output function that is propagated to the half-space wherethatunitwasactive(figure9.9b). asubsequentgradientdescentstepwillattempt tocompensateforthechangethatthisinduces,andsuchdependencieswillbeeliminated over time. the overall effect is that large unnecessary changes between training data pointsaregraduallyremovedeventhoughtheycontributenothingtotheloss(figure9.9). at test time, we can run the network as usual with all the hidden units active; however, the network now has more hidden units than it was trained with at any given iteration,sowemultiplytheweightsbyoneminusthedropoutprobabilitytocompensate. this is known as the weight scaling inference rule. a different approach to inference is to use monte carlo dropout, in which we run the network multiple times with different random subsets of units clamped to zero (as in training) and combine the results. this iscloselyrelatedtoensemblinginthateveryrandomversionofthenetworkisadifferent model; however, we do not have to train or store multiple networks here. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.9.3 heuristics to improve performance 149 figure 9.9 dropout mechanism. a) an undesirable kink in the curve is caused by a sequential increase in the slope, decrease in the slope (at circled joint), and then another increase to return the curve to its original trajectory. here we are using full-batch gradient descent, and the model already fits the data as well as possible,sofurthertrainingwon’tremovethekink. b)considerwhathappensif we remove the hidden unit that produced the circled joint in panel (a), as might happen using dropout. without the decrease in the slope, the right-hand side of the function takes an upwards trajectory, and a subsequent gradient descent step will aim to compensate for this change. c) curve after 2000 iterations of (i) randomly removing one of the three hidden units that cause the kink and (ii) performing a gradient descent step. the kink does not affect the loss but is nonetheless removed by this approximation of the dropout mechanism. 9.3.4 applying noise dropout can be interpreted as applying multiplicative bernoulli noise to the network activations. thisleadstotheideaofapplyingnoisetootherpartsofthenetworkduring training to make the final model more robust. one option is to add noise to the input data; this smooths out the learned function problem9.3 (figure 9.10). for regression problems, it can be shown to be equivalent to adding a regularizing term that penalizes the derivatives of the network’s output with respect to itsinput. anextremevariantisadversarialtraining,inwhichtheoptimizationalgorithm actively searches for small perturbations of the input that cause large changes to the output. these can be thought of as worst-case additive noise vectors. a second possibility is to add noise to the weights. this encourages the network to make sensible predictions even for small perturbations of the weights. the result is that thetrainingconvergestolocalminimainthemiddleofwide,flatregions,wherechanging the individual weights does not matter much. finally, we can perturb the labels. the maximum-likelihood criterion for multiclass classification aims to predict the correct class with absolute certainty (equation 5.24). to this end, the final network activations (i.e., before the softmax function) are pushed to very large values for the correct class and very small values for the wrong classes. we could discourage this overconfident behavior by assuming that a proportion ρ of draft: please send errata to [email protected] 9 regularization figure 9.10 adding noise to |
inputs. at each step of sgd, random noise with variance σ2 is added to the batch data. a–c) fitted model with different noise x levels (small dots represent ten samples). adding more noise smooths out the fitted function (cyan line). the training labels are incorrect and belong with equal probability to the other classes. thiscouldbedonebyrandomlychangingthelabelsateachtrainingiteration. however, the same end can be achieved by changing the loss function to minimize the cross- entropy between the predicted distribution and a distribution where the true label has problem9.4 probability 1−ρ, and the other classes have equal probability. this is known as label smoothing and improves generalization in diverse scenarios. 9.3.5 bayesian inference the maximum likelihood approach is generally overconfident; in the training phase, it selectsthemostlikelyparametersandbasesitspredictionsonthemodeldefinedbythese. however, many parameter values may be broadly compatible with the data and only slightly less likely. the bayesian approach treats the parameters as unknown variables appendixc.1.4 and computes a distribution pr(ϕ|{x ,y }) over these parameters ϕ conditioned on the bayes’rule i i training data {x ,y } using bayes’ rule: i i q i pr(y |x ,ϕ)pr(ϕ) pr(ϕ|{x ,y })= r qi=1 i i , (9.11) i i i pr(y |x ,ϕ)pr(ϕ)dϕ i=1 i i where pr(ϕ) is the prior probability of the parameters, and the denominator is a nor- malizing term. hence, every parameter choice is assigned a probability (figure 9.11). the prediction y for new input x is an infinite weighted sum (i.e., an integral) of the predictions for each parameter set, where the weights are the associated probabilities: z pr(y|x,{x ,y })= pr(y|x,ϕ)pr(ϕ|{x ,y })dϕ. (9.12) i i i i this is effectively an infinite weighted ensemble, where the weight depends on (i) the prior probability of the parameters and (ii) their agreement with the data. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.9.3 heuristics to improve performance 151 figure9.11bayesianapproachforsimplifiednetworkmodel(seefigure8.4). the parametersaretreatedasuncertain. theposteriorprobabilitypr(ϕ|{x ,y })for i i a set of parameters is determined by their compatibility with the data {x ,y } i i and a prior distribution pr(ϕ). a–c) two sets of parameters (cyan and gray curves) sampled from the posterior using normally distributed priors with mean zero and three variances. when the prior variance is small, the parameters also tend to be small, and the functions smoother. d–f) inference proceeds by taking a weighted sum over all possible parameter values where the weights are the posteriorprobabilities. thisproducesbothapredictionofthemean(cyancurves) and the associated uncertainty (gray region is two standard deviations). the bayesian approach is elegant and can provide more robust predictions than those that derive from maximum likelihood. unfortunately, for complex models like neural networks, there is no practical way to represent the full probability distribution notebook9.4 over the parameters or to integrate over it during the inference phase. consequently, all bayesian currentmethodsofthistypemakeapproximationsofsomekind,andtypicallytheseadd approach considerable complexity to learning and inference. 9.3.6 transfer learning and multi-task learning whentrainingdataarelimited,otherdatasetscanbeexploitedtoimproveperformance. in transfer learning (figure 9.12a), the network is pre-trained to perform a related sec- draft: please send errata to [email protected] 9 regularization ondary task for which data are more plentiful. the resulting model is then adapted to the original task. this is typically done by removing the last layer and adding one or more layers that produce a suitable output. the main model may be fixed, and the new layers trained for the original task, or we may fine-tune the entire model |
. the principle is that the network will build a good internal representation of the data from the secondary task, which can subsequently be exploited for the original task. equivalently, transfer learning can be viewed as initializing most of the parameters of thefinalnetworkinasensiblepartofthespacethatislikelytoproduceagoodsolution. multi-tasklearning(figure9.12b)isarelatedtechniqueinwhichthenetworkistrained to solve several problems concurrently. for example, the network might take an image andsimultaneouslylearntosegmentthescene,estimatethepixel-wisedepth,andpredict a caption describing the image. all of these tasks require some understanding of the image and, when learned simultaneously, the model performance for each may improve. 9.3.7 self-supervised learning theabovediscussionassumesthatwehaveplentifuldataforasecondarytaskordatafor multiple tasks to be learned concurrently. if not, we can create large amounts of “free” labeled data using self-supervised learning and use this for transfer learning. there are two families of methods for self-supervised learning: generative and contrastive. in generative self-supervised learning, part of each data example is masked, and the secondary task is to predict the missing part (figure 9.12c). for example, we might use a corpus of unlabeled images and a secondary task that aims to inpaint (fill in) missing partsoftheimage(figure9.12c). similarly,wemightusealargecorpusoftextandmask somewords. wetrainthenetworktopredictthemissingwordsandthenfine-tuneitfor the actual language task we are interested in (see chapter 12). incontrastiveself-supervisedlearning,pairsofexampleswithcommonalitiesarecom- pared to unrelated pairs. for images, the secondary task might be to identify whether a pairofimagesaretransformedversionsofoneanotherorareunconnected. fortext, the secondarytaskmightbetodeterminewhethertwosentencesfollowedoneanotherinthe original document. sometimes, the precise relationship between a connected pair must be identified (e.g., finding the relative position of two patches from the same image). 9.3.8 augmentation transfer learning improves performance by exploiting a different dataset. multi-task learning improves performance using additional labels. a third option is to expand the dataset. we can often transform each input data example in such a way that the label stays the same. for example, we might aim to determine if there is a bird in an image (figure 9.13). here, we could rotate, flip, blur, or manipulate the color balance of the image, and the label “bird” remains valid. similarly, for tasks where the input is text, notebook9.5 we can substitute synonyms or translate to another language and back again. for tasks augmentation where the input is audio, we can amplify or attenuate different frequency bands. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.9.3 heuristics to improve performance 153 figure 9.12 transfer, multi-task, and self-supervised learning. a) transfer learn- ing is used when we have limited labeled data for the primary task (here depth estimation)butplentifuldataforasecondarytask(heresegmentation). wetrain a model for the secondary task, remove the final layers, and replace them with new layers appropriate to the primary task. we then train only the new layers or fine-tune the entire network for the primary task. the network learns a good internalrepresentationfromthesecondarytaskthatisthenexploitedforthepri- marytask. b)inmulti-tasklearning,wetrainamodeltoperformmultipletasks simultaneously, hoping that performance on each will improve. c) in generative self-supervised learning, we remove part of the data and train the network to complete the missing information. here, the task is to fill in (inpaint) a masked portionoftheimage. thispermitstransferlearningwhennolabelsareavailable. images from cordts et al. (2016). draft: please send errata to [email protected] 9 regularization figure 9.13 data augmentation. for some problems, each data example can be transformed to augment the dataset. a) original image. b–h) various geometric andphotomet |
rictransformationsofthisimage. forimageclassification,allthese images still have the same label, “bird.” adapted from wu et al. (2015a). generating extra training data in this way is known as data augmentation. the aim is to teach the model to be indifferent to these irrelevant data transformations. 9.4 summary explicit regularization involves adding an extra term to the loss function that changes the position of the minimum. the term can be interpreted as a prior probability over the parameters. stochastic gradient descent with a finite step size does not neutrally descend to the minimum of the loss function. this bias can be interpreted as adding additional terms to the loss function, and this is known as implicit regularization. therearealsomanyheuristicsforimprovinggeneralization,includingearlystopping, dropout, ensembling, the bayesian approach, adding noise, transfer learning, multi-task learning, and data augmentation. there are four main principles behind these methods (figure9.14). wecan(i)encouragethefunctiontobesmoother(e.g.,l2regularization), (ii) increase the amount of data (e.g., data augmentation), (iii) combine models (e.g., ensembling), or (iv) search for wider minima (e.g., applying noise to network weights). this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 155 figure9.14regularizationmethods. theregularizationmethodsdiscussedinthis chapteraimtoimprovegeneralizationbyoneoffourmechanisms. somemethods aimtomakethemodeledfunctionsmoother. othermethodsincreasetheeffective amountofdata. thethirdgroupofmethodscombinemultiplemodelsandhence mitigate against uncertainty in the fitting process. finally, the fourth group of methods encourages the training process to converge to a wide minimum where smallerrorsintheestimatedparametersarelessimportant(seealsofigure20.11). anotherwaytoimprovegeneralizationistochoosethemodelarchitecturetosuitthe task. for example, in image segmentation, we can share parameters within the model, so we don’t need to independently learn what a tree looks like at every image location. chapters 10–13 consider architectural variations designed for different tasks. notes an overview and taxonomy of regularization techniques in deep learning can be found in kukačka et al. (2017). notably missing from the discussion in this chapter is batchnorm (szegedy et al., 2016) at its variants, which are described in chapter 11. regularization: l2 regularization penalizes the sum of squares of the network weights. this encourages the output function to change slowly (i.e., become smoother) and is the most used regularizationterm. itissometimesreferredtoasfrobeniusnormregularizationasitpenalizes the frobenius norms of the weight matrices. it is often also mistakenly referred to as “weight decay,” although this is a separate technique devised by hanson & pratt (1988) in which the parameters ϕ are updated as: ϕ←−(1−λ′)ϕ−α∂l, (9.13) ∂ϕ draft: please send errata to [email protected] 9 regularization where, as usual, α is the learning rate, and l is the loss. this is identical to gradient descent, exceptthattheweightsarereducedbyafactorof1−λ′beforethegradientupdate. forstandard sgd,weightdecayisequivalenttol2regularization(equation9.5)withcoefficientλ=λ′/2α. problem9.5 however, for adam, the learning rate α is different for each parameter, so l2 regularization andweightdecaydiffer. loshchilov&hutter(2019)presentadamw,whichmodifiesadamto implement weight decay correctly and show that this improves performance. other choices of vector norm encourage sparsity in the weights. the l0 regularization term appendixb.3.2 applies a fixed penalty for every non-zero weight. the effect is to “prune” the network. l0 vectornorms regularizationcanalsobeusedtoencouragegroupsparsity; thismightapplyafixedpenaltyif anyoftheweightscontributingtoag |
ivenhiddenunitarenon-zero. iftheyareallzero,wecan remove the unit, decreasing the model size and making inference faster. unfortunately,l0regularizationischallengingtoimplementsincethederivativeoftheregular- ization term is not smooth, and more sophisticated fitting methods are required (see louizos et al., 2018). somewhere between l2 and l0 regularization is l1 regularization or lasso (leastabsoluteshrinkageandselectionoperator),whichimposesapenaltyontheabsoluteval- ues of the weights. l2 regularization somewhat discourages sparsity in that the derivative of the squared penalty decreases as the weight becomes smaller, lowering the pressure to make it smallerstill. l1regularizationdoesnothavethisdisadvantage,asthederivativeofthepenalty is constant. this can produce sparser solutions than l2 regularization but is much easier to problem9.6 optimize than l0 regularization. sometimes both l1 and l2 regularization terms are used, which is termed an elastic net penalty (zou & hastie, 2005). a different approach to regularization is to modify the gradients of the learning algorithm withouteverexplicitlyformulatinganewlossfunction(e.g.,equation9.13). thisapproachhas been used to promote sparsity during backpropagation (schwarz et al., 2021). theevidenceontheeffectivenessofexplicitregularizationismixed. zhangetal.(2017a)showed thatl2regularizationcontributeslittletogeneralization. ithasbeenproventhatthelipschitz constant of the network (how fast the function can change as we modify the input) bounds appendixb.1.1 the generalization error (bartlett et al., 2017; neyshabur et al., 2018). however, the lipschitz lipschitzconstant constant depends on the product of the spectral norms of the weight matrices ω , which are k only indirectly dependent on the magnitudes of the individual weights. bartlett et al. (2017), appendixb.3.2 neyshabur et al. (2018), and yoshida & miyato (2017) all add terms that indirectly encourage spectralnorm the spectral norms to be smaller. gouk et al. (2021) take a different approach and develop an algorithmthatconstrainsthelipschitzconstantofthenetworktobebelowaparticularvalue. implicit regularization in gradient descent: the gradient descent step is: ϕ =ϕ +α·g[ϕ ], (9.14) 1 0 0 whereg[ϕ ]isthenegativeofthegradientofthelossfunction,andαisthestepsize. asα→0, 0 the gradient descent process can be described by a differential equation: ∂ϕ =g[ϕ]. (9.15) ∂t fortypicalstepsizesα,thediscreteandcontinuousversionsconvergetodifferentsolutions. we can use backward error analysis to find a correction g [ϕ] to the continuous version: 1 ∂ϕ ≈g[ϕ]+αg [ϕ]+..., (9.16) ∂t 1 so that it gives the same result as the discrete version. considerthefirsttwotermsofataylorexpansionofthemodifiedcontinuoussolutionϕaround initial position ϕ : 0 this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 157 (cid:12) ϕ[α] ≈ ϕ+α∂ϕ + α2∂2ϕ(cid:12)(cid:12)(cid:12) ∂t 2 ∂t2 ϕ=ϕ0 (cid:18) (cid:19)(cid:12) ≈ ϕ+α(g[ϕ]+αg [ϕ])+ α2 ∂g[ϕ]∂ϕ +α∂g1[ϕ]∂ϕ (cid:12)(cid:12)(cid:12) 1 2 ∂ϕ ∂t ∂ϕ ∂t (cid:18) (cid:19)ϕ(cid:12)=ϕ0 = ϕ+α(g[ϕ]+αg |
[ϕ])+ α2 ∂g[ϕ]g[ϕ]+α∂g1[ϕ]g[ϕ] (cid:12)(cid:12)(cid:12) 1 2 ∂ϕ ∂ϕ (cid:18) (cid:19)(cid:12) ϕ=ϕ0 (cid:12) ≈ ϕ+αg[ϕ]+α2 g [ϕ]+ 1∂g[ϕ]g[ϕ] (cid:12)(cid:12) , (9.17) 1 2 ∂ϕ ϕ=ϕ0 where in the second line, we have introduced the correction term (equation 9.16), and in the final line, we have removed terms of greater order than α2. note that the first two terms on the right-hand side ϕ +αg[ϕ ] are the same as the discrete 0 0 update(equation9.14). hence,tomakethecontinuousanddiscreteversionsarriveatthesame place, the third term on the right-hand side must equal zero, allowing us to solve for g [ϕ]: 1 1∂g[ϕ] g [ϕ]=− g[ϕ]. (9.18) 1 2 ∂ϕ during training, the evolution function g[ϕ] is the negative of the gradient of the loss: ∂ϕ ≈ g[ϕ]+αg [ϕ] ∂t 1 (cid:18) (cid:19) ∂l α ∂2l ∂l = − − . (9.19) ∂ϕ 2 ∂ϕ2 ∂ϕ this is equivalent to performing continuous gradient descent on the loss function: (cid:13) (cid:13) α(cid:13)(cid:13)∂l(cid:13)(cid:13)2 lgd[ϕ]=l[ϕ]+ 4 (cid:13)∂ϕ(cid:13) , (9.20) because the right-hand side of equation 9.19 is the derivative of that in equation 9.20. this formulation of implicit regularization was developed by barrett & dherin (2021) and extended to stochastic gradient descent by smith et al. (2021). smith et al. (2020) and others haveshownthatstochasticgradientdescentwithsmallormoderatebatchsizesoutperformsfull batch gradient descent on the test set, and this may in part be due to implicit regularization. relatedly,jastrzębskietal.(2021)andcohenetal.(2021)bothshowthatusingalargelearn- ingratereducesthetendencyoftypicaloptimizationtrajectoriestomoveto“sharper”partsof the loss function (i.e., where at least one direction has high curvature). this implicit regular- ization effect of large learning rates can be approximated by penalizing the trace of the fisher information matrix, which is closely related to penalizing the gradient norm in equation 9.20 (jastrzębski et al., 2021). early stopping: bishop(1995)andsjöberg&ljung(1995)arguedthatearlystoppinglimits the effective solution space that the training procedure can explore; given that the weights are initialized to small values, this leads to the idea that early stopping helps prevent the weights from getting too large. goodfellow et al. (2016) show that under a quadratic approximation of the loss function with parameters initialized to zero, early stopping is equivalent to l2 reg- ularization in gradient descent. the effective regularization weight λ is approximately 1/(τα) where α is the learning rate, and τ is the early stopping time. draft: please send errata to [email protected] 9 regularization ensembling: ensembles can be trained using different random seeds (lakshminarayanan et al., 2017), hyperparameters (wenzel et al., 2020b), or even entirely different families of models. themodelscanbecombinedbyaveragingtheirpredictions,weightingthepredictions, or stacking (wolpert, 1992), in which the results are combined using another machine learning model. lakshminarayanan et al. (2017) showed that averaging the |
output of independently trained networks can improve accuracy, calibration, and robustness. conversely, frankle et al. (2020) showed that if we average together the weights to make one model, the network fails. fort et al. (2019) compared ensembling solutions that resulted from different initializations with ensembling solutions that were generated from the same original model. for example, in the latter case, they consider exploring around the solution in a limited subspace to find other appendixb.3.6 goodnearbypoints. theyfoundthatbothtechniquesprovidecomplementarybenefitsbutthat subspaces genuine ensembling from different random starting points provides a bigger improvement. anefficientwayofensemblingistocombinemodelsfromintermediatestagesoftraining. tothis end, izmailov et al. (2018) introduce stochastic weight averaging, in which the model weights are sampled at different time steps and averaged together. as the name suggests, snapshot ensembles (huang et al., 2017a) also store the models from different time steps and average their predictions. the diversity of these models can be improved by cyclically increasing and decreasing the learning rate. garipov et al. (2018) observed that different minima of the loss functionareoftenconnectedbyalow-energypath(i.e.,apathwithalowlosseverywherealong it). motivated by this observation, they developed a method that explores low-energy regions around an initial solution to provide diverse models without retraining. this is known as fast geometric ensembling. a review of ensembling methods can be found in ganaie et al. (2022). dropout: dropoutwasfirstintroducedbyhintonetal.(2012b)andsrivastavaetal.(2014). dropout is applied at the level of hidden units. dropping a hidden unit has the same effect as temporarily setting all the incoming and outgoing weights and the bias to zero. wan et al. (2013)generalizeddropoutbyrandomlysettingindividualweightstozero. gal&ghahramani (2016)andkendall&gal(2017)proposedmontecarlodropout,inwhichinferenceiscomputed withseveraldropoutpatterns,andtheresultsareaveragedtogether. gal&ghahramani(2016) argued that this could be interpreted as approximating bayesian inference. dropout is equivalent to applying multiplicative bernoulli noise to the hidden units. similar benefits derive from using other distributions, including the normal (srivastava et al., 2014; shen et al., 2017), uniform (shen et al., 2017), and beta distributions (liu et al., 2019b). adding noise: bishop (1995) and an (1996) added gaussian noise to the network inputs to improveperformance. bishop(1995)showedthatthisisequivalenttoweightdecay. an(1996) also investigated adding noise to the weights. devries & taylor (2017a) added gaussian noise tothehiddenunits. therandomized relu(xuetal.,2015)appliesnoiseinadifferentwayby making the activation functions stochastic. label smoothing: labelsmoothingwasintroducedbyszegedyetal.(2016)forimageclassi- ficationbuthassincebeenshowntobehelpfulinspeechrecognition(chorowski&jaitly,2017), machine translation (vaswani et al., 2017), and language modeling (pereyra et al., 2017). the precise mechanism by which label smoothing improves test performance isn’t well understood, although müller et al. (2019a) show that it improves the calibration of the predicted output probabilities. a closely related technique is disturblabel (xie et al., 2016), in which a certain percentage of the labels in each batch are randomly switched at each training iteration. finding wider minima: itisthoughtthatwiderminimageneralizebetter(seefigure20.11). here, the exact values of the weights are less important, so performance should be robust to errorsintheirestimates. oneofthereasonsthatapplyingnoisetopartsofthenetworkduring training is effective is that it encourages the network to be indifferent to their exact values. chaudhari et al. (2019) developed a variant of sgd that biases the optimization toward flat minima,whichtheycallentropy sgd.theideaistoincorporatelocalentropyasaterminthe loss function |
Subsets and Splits