Upload dataset_chunk_47.csv with huggingface_hub
Draft: please send errata to [email protected].

5 Loss functions

...where the first term disappears, as it has no dependence on θ. The remaining second term is known as the cross-entropy. It can be interpreted as the amount of uncertainty that remains in one distribution after taking into account what we already know from the other. Now, we substitute in the definition of q(y) from equation 5.28:

$$
\begin{aligned}
\hat{\theta} &= \mathop{\mathrm{argmin}}_{\theta}\left[-\int_{-\infty}^{\infty}\sum_{i=1}^{I}\frac{1}{I}\,\delta[y-y_i]\log \mathrm{Pr}(y|\theta)\,dy\right]\\
&= \mathop{\mathrm{argmin}}_{\theta}\left[-\sum_{i=1}^{I}\frac{1}{I}\log \mathrm{Pr}(y_i|\theta)\right]\\
&= \mathop{\mathrm{argmin}}_{\theta}\left[-\sum_{i=1}^{I}\log \mathrm{Pr}(y_i|\theta)\right]. \qquad (5.30)
\end{aligned}
$$

The product of the two terms in the first line corresponds to pointwise multiplying the point masses in figure 5.12a by the logarithm of the distribution in figure 5.12b. We are left with a finite set of weighted probability masses centered on the data points. In the last line, we have eliminated the constant scaling factor 1/I, as this does not affect the position of the minimum.

In machine learning, the distribution parameters θ are computed by the model f[x_i, ϕ], so we have:

$$
\hat{\phi} = \mathop{\mathrm{argmin}}_{\phi}\left[-\sum_{i=1}^{I}\log \mathrm{Pr}\bigl(y_i\,|\,\mathrm{f}[x_i,\phi]\bigr)\right]. \qquad (5.31)
$$

This is precisely the negative log-likelihood criterion from the recipe in section 5.2. It follows that the negative log-likelihood criterion (from maximizing the data likelihood) and the cross-entropy criterion (from minimizing the distance between the model and empirical data distributions) are equivalent.

5.8 Summary

We previously considered neural networks as directly predicting outputs y from data x. In this chapter, we shifted perspective to think about neural networks as computing the parameters θ of probability distributions Pr(y|θ) over the output space. This led to a principled approach to building loss functions. We selected model parameters ϕ that maximized the likelihood of the observed data under these distributions. We saw that this is equivalent to minimizing the negative log-likelihood.
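The maximum-likelihood recipe described above can be checked numerically. Below is a minimal sketch, not code from the text: it assumes a fixed-variance Gaussian for Pr(y|θ) and does a grid search over the mean, so the minimizer of the negative log-likelihood should coincide with the sample mean. The function and variable names (`nll`, `thetas`) are illustrative.

```python
import numpy as np

def nll(theta, y, sigma=1.0):
    """Negative log-likelihood -sum_i log Pr(y_i | theta)
    for samples y under Normal(theta, sigma^2)."""
    log_pr = -0.5 * np.log(2 * np.pi * sigma**2) - (y - theta) ** 2 / (2 * sigma**2)
    return -np.sum(log_pr)

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=1000)

# Grid search over candidate means; spacing is 0.01, so the minimizer
# should land within half a grid step of the sample mean (the ML estimate).
thetas = np.linspace(0.0, 4.0, 401)
best = thetas[np.argmin([nll(t, y) for t in thetas])]
print(best, y.mean())
```

Since the 1/I factor and any constant terms do not move the argmin, dropping them (as in equation 5.30) would leave `best` unchanged.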
The least squares criterion for regression is a natural consequence of this approach; it follows from the assumption that y is normally distributed and that we are predicting the mean. We also saw how the regression model could be (i) extended to estimate the uncertainty over the prediction and (ii) extended to make that uncertainty depend on the input (the heteroscedastic model). We applied the same approach to both binary and multiclass classification and derived loss functions for each. We discussed how to tackle more complex data types and how to deal with multiple outputs. Finally, we argued that cross-entropy is an equivalent way to think about fitting models.

In previous chapters, we developed neural network models. In this chapter, we developed loss functions for deciding how well a model describes the training data for a given set of parameters. The next chapter considers model training, in which we aim to find the model parameters that minimize this loss.

This work is subject to a Creative Commons CC-BY-NC-ND license. (C) MIT Press.

Notes

Losses based on the normal distribution: Nix & Weigend (1994) and Williams (1996) investigated heteroscedastic nonlinear regression in which both the mean and the variance of the output are functions of the input. In the context of unsupervised learning, Burda et al. (2016) use a loss function based on a multivariate normal distribution with diagonal covariance, and Dorta et al. (2018) use a loss function based on a normal distribution with full covariance.

Robust regression: Qi et al. (2020) investigate the properties of regression models that minimize mean absolute error rather than mean squared error. This loss function follows from assuming a Laplace distribution over the outputs and estimates the median output for a given input rather than the mean. Barron (2019) presents a loss function
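The Laplace-distribution view of mean absolute error mentioned in the robust-regression note can be illustrated with a small sketch (hypothetical code, not from the text): the negative log-likelihood of a Laplace distribution with fixed scale b reduces to a sum of absolute errors, so the best constant prediction is the sample median rather than the outlier-skewed mean.

```python
import numpy as np

def laplace_nll(mu, y, b=1.0):
    """-sum_i log Pr(y_i | mu) for the Laplace density
    Pr(y | mu) = exp(-|y - mu| / b) / (2b)."""
    return np.sum(np.abs(y - mu) / b + np.log(2 * b))

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier

# Grid search over constant predictions mu.
mus = np.linspace(0.0, 10.0, 1001)
best = mus[np.argmin([laplace_nll(m, y) for m in mus])]
print(best, y.mean())  # median-like estimate vs. outlier-skewed mean of 22.0
```

Under a Gaussian assumption (squared error), the same grid search would instead be pulled toward the mean of 22.0, which is why the Laplace/MAE loss is considered robust.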