"we originally introduced when we discussed linear regression in chapter 2: xi (cid:0) (cid:1) l[ϕ]= y −f[x ,ϕ] 2. (5.11) i i i=1 we see that the least squares loss function follows naturally from the assumptions that notebook5.1 the prediction errors are (i) independent and (ii) drawn from a normal distribution with leastsquares loss mean µ=f[xi,ϕ] (figure 5.4). 5.3.2 inference the network no longer directly predicts y but instead predicts the mean µ = f[x,ϕ] of the normal distribution over y. when we perform inference, we usually want a single “best” point estimate yˆ, so we take the maximum of the predicted distribution: h i yˆ=argmax pr(y|f[x,ϕˆ]) . (5.12) y fortheunivariatenormal,themaximumpositionisdeterminedbythemeanparameterµ (figure 5.3). this is precisely what the model computed, so yˆ=f[x,ϕˆ]. 5.3.3 estimating variance to formulate the least squares loss function, we assumed that the network predicted the mean of a normal distribution. the final expression in equation 5.11 (perhaps surpris- this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.5.3 example 1: univariate regression 63 figure 5.4 equivalence of least squares and maximum likelihood loss for the normal distribution. a) consider the linear model from figure 2.2. the least squarescriterionminimizesthesumofthesquaresofthedeviations(dashedlines) between the model prediction f[x ,ϕ] (green line) and the true output values y i i (orange points). here the fit is good, so these deviations are small (e.g., for the two highlighted points). b) for these parameters, the fit is bad, and the squared deviations are large. c) the least squares criterion follows from the assumption that the model predicts the mean of a normal distribution over the outputs and that we maximize the probability. for the first case, the model fits well, so the probability pr(y |x ) of the data (horizontal orange dashed lines) is large (and i i thenegativelogprobabilityissmall). d)forthesecondcase,themodelfitsbadly, so the probability is small and the negative log probability is large. draft: please send errata to [email protected] 5 loss functions ingly) does not depend on the variance σ2. however, there is nothing to stop us from treating σ2 as a parameter of the model and minimizing equation 5.9 with respect to both the model parameters ϕ and the distribution variance σ2: "" (cid:20) (cid:20) (cid:21)(cid:21)# xi 1 (y −f[x ,ϕ])2 ϕˆ,σˆ2 =argmin − log √ exp − i i . (5.13) ϕ,σ2 i=1 2πσ2 2σ2 ininference, themodelpredictsthemeanµ=f[x,ϕˆ]fromtheinput, andwelearnedthe variance σˆ2 during the training process. the former is the best prediction. the latter tells us about the uncertainty of the prediction. 5.3.4 heteroscedastic regression themodelaboveassumesthatthevarianceofthedataisconstanteverywhere. however, this might be unrealistic. when the uncertainty of the model varies as a function of the input data, we refer to this as heteroscedastic (as opposed to homoscedastic, where the uncertainty is constant). a simple way to model this is to train a neural network f[x,ϕ] that computes both the mean and the variance. for example, consider a shallow network with two outputs. we denote the first output as f [x,ϕ] and use this to predict the mean, and we denote 1 the second output as f [x,ϕ] and use it to predict the variance. 2 there is one complication; the variance must be positive, but we can’t guarantee that the network will always produce" |