Upload dataset_chunk_45.csv with huggingface_hub
dataset_chunk_45.csv ADDED (+2 -0)
@@ -0,0 +1,2 @@
text
"one. the kth output of the softmax function is: exp[z ] softmax [z]= p k , (5.22) k kk′=1exp[zk′] where the exponential functions ensure positivity, and the sum in the denominator en- appendixb.1.3 sures that the k numbers sum to one. exponential function the likelihood that input x has label y (figure 5.10) is hence: h i pr(y =k|x)=softmax f[x,ϕ] . (5.23) k the loss function is the negative log-likelihood of the training data: this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.5.6 multiple outputs 69 xi h h ii l[ϕ] = − log softmax f[x ,ϕ] yi i i=1 "" #! xi xk = − fyi[xi,ϕ]−log exp[ fk′[xi,ϕ]] , (5.24) i=1 k′=1 where f [x,ϕ] denotes the kth output of the neural network. for reasons that will be k explained in section 5.7, this is known as the multiclass cross-entropy loss. the transformed model output represents a categorical distribution over possible classes y ∈{1,2,...,k}. for a point estimate, we take the most probable category yˆ= notebook5.3 multiclass argmax [pr(y = k|f[x,ϕˆ])]. this corresponds to whichever curve is highest for that cross-entropyloss k value of x in figure 5.10. 5.5.1 predicting other data types in this chapter, we have focused on regression and classification because these problems are widespread. however, to make different types of predictions, we simply choose an appropriatedistributionoverthatdomainandapplytherecipeinsection5.2. figure5.11 enumerates a series of probability distributions and their prediction domains. some of problems5.3–5.6 these are explored in the problems at the end of the chapter. 5.6 multiple outputs often, we wish to make more than one prediction with the same model, so the target output y is a vector. for example, we might want to predict a molecule’s melting and boiling point (a multivariate regression problem, figure 1.2b) or the object class at every point in an image (a multivariate classification problem, figure 1.4a). while it is possible to define multivariate probability distributions and use a neural network to modeltheirparametersasafunctionoftheinput,itismoreusualtotreateachprediction as independent. independence implies that we treat the probability pr(y|f[x,ϕ]) as a product of univariate terms for each element y ∈y: appendixc.1.5 d independence y pr(y|f[x,ϕ])= pr(y |f [x,ϕ]), (5.25) d d d where f [x,ϕ] is the dth set of network outputs, which describe the parameters of the d distribution over y . for example, to predict multiple continuous variables y ∈ r, we d d useanormaldistributionforeachy ,andthenetworkoutputsf [x,ϕ]predictthemeans d d ofthese distributions. topredict multiplediscrete variablesy ∈{1,2,...,k}, weuse a d categorical distribution for each y . here, each set of network outputs f [x,ϕ] predicts d d the k values that contribute to the categorical distribution for y . d draft: please send errata to [email protected] 5 loss functions data type domain distribution use univariate, continuous, y∈r univariate regression unbounded normal univariate, continuous, y∈r laplace robust unbounded or t-distribution regression univariate, continuous, y∈r mixture of multimodal unbounded gaussians regression univariate, continuous, y∈r+ exponential predicting bounded below or gamma magnitude univariate, continuous, y∈[0,1] beta predicting bounded proportions multivariate, continuous, y∈rk multivariate multivariate unbounded normal regression univariate"