Vishwas1 committed on
Commit 28748eb · verified · 1 Parent(s): f14aa2b

Upload dataset_chunk_44.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_44.csv +2 -0
dataset_chunk_44.csv ADDED
@@ -0,0 +1,2 @@
 
 
 
+ text
+ "neural network model. the loss function is the negative log-likelihood of the training set: l[ϕ] = sum_{i=1}^{I} −(1−y_i) log[1 − sig[f[x_i,ϕ]]] − y_i log[sig[f[x_i,ϕ]]]. (5.20) for reasons to be explained in section 5.7, this is known as the binary cross-entropy loss (notebook 5.2: binary cross-entropy loss; problem 5.2). the transformed model output sig[f[x,ϕ]] predicts the parameter λ of the bernoulli distribution. this represents the probability that y = 1, and it follows that 1 − λ represents the probability that y = 0. when we perform inference, we may want a point estimate of y, so we set y = 1 if λ > 0.5 and y = 0 otherwise. figure 5.8 (binary classification model): a) the network output is a piecewise linear function that can take arbitrary real values. b) this is transformed by the logistic sigmoid function, which compresses these values to the range [0,1]. c) the transformed output predicts the probability λ that y = 1 (solid line). the probability that y = 0 is hence 1−λ (dashed line). for any fixed x (vertical slice), we retrieve the two values of a bernoulli distribution similar to that in figure 5.6. the loss function favors model parameters that produce large values of λ at positions x_i that are associated with positive examples y_i = 1 and small values of λ at positions associated with negative examples y_i = 0. figure 5.9 (categorical distribution): the categorical distribution assigns probabilities to k > 2 categories, with associated probabilities λ_1, λ_2, ..., λ_k. here, there are five categories, so k = 5. to ensure that this is a valid probability distribution, each parameter λ_k must lie in the range [0,1], and all k parameters must sum to one. 5.5 example 3: multiclass classification. the goal of multiclass classification is to assign an input data example x to one of k > 2 classes, so y ∈ {1,2,...,k}. real-world examples include (i) predicting which of k = 10 digits y is present in an image x of a handwritten number and (ii) predicting which of k possible words y follows an incomplete sentence x. we once more follow the recipe from section 5.2. we first choose a distribution over the prediction space y. in this case, we have y ∈ {1,2,...,k}, so we choose the categorical distribution (figure 5.9), which is defined on this domain. this has k parameters λ_1, λ_2, ..., λ_k, which determine the probability of each category: pr(y = k) = λ_k. (5.21) figure 5.10 (multiclass classification for k = 3 classes): a) the network has three piecewise linear outputs, which can take arbitrary values. b) after the softmax function, these outputs are constrained to be non-negative and sum to one. hence, for a given input x, we compute valid parameters for the categorical distribution: any vertical slice of this plot produces three values that sum to one and would form the heights of the bars in a categorical distribution similar to figure 5.9. the parameters are constrained to take values between zero and one, and they must collectively sum to one to ensure a valid probability distribution. then we use a network f[x,ϕ] with k outputs to compute these k parameters from the input x. unfortunately, the network outputs will not necessarily obey the aforementioned constraints. consequently, we pass the k outputs of the network through a function that ensures these constraints are respected. a suitable choice is the softmax function (figure 5.10). this takes an arbitrary vector of length k and returns a vector of the same length but where the elements are now in the range [0,1] and sum to"
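
The chunk above walks through equation 5.20 and the λ > 0.5 decision rule for binary classification. Below is a minimal numpy sketch of that computation, assuming the real-valued network outputs f[x_i, ϕ] are already available as an array; the function and variable names are illustrative and not taken from the source text.

import numpy as np

def sig(z):
    # logistic sigmoid: squashes real values into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(f_out, y):
    # equation 5.20: negative log-likelihood of the bernoulli model
    # f_out: real-valued network outputs f[x_i, phi], shape (I,)
    # y:     binary labels y_i in {0, 1}, shape (I,)
    lam = sig(f_out)                          # lambda_i = sig[f[x_i, phi]]
    return np.sum(-(1.0 - y) * np.log(1.0 - lam) - y * np.log(lam))

def predict(f_out):
    # point estimate at inference time: y = 1 if lambda > 0.5, else y = 0
    return (sig(f_out) > 0.5).astype(int)

f_out = np.array([2.3, -1.1, 0.2])            # hypothetical network outputs
y = np.array([1.0, 0.0, 1.0])                 # hypothetical labels
print(binary_cross_entropy(f_out, y), predict(f_out))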
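
The text ends as it introduces the softmax function, which maps k arbitrary network outputs to valid categorical parameters λ_1, ..., λ_k. The following small sketch shows that mapping, again with illustrative names and an assumed k = 3 output vector.

import numpy as np

def softmax(z):
    # maps k arbitrary real outputs to k values in [0, 1] that sum to one,
    # i.e. valid parameters lambda_1 ... lambda_k of a categorical distribution
    z = z - np.max(z)          # subtract the max for numerical stability
    e = np.exp(z)
    return e / np.sum(e)

lam = softmax(np.array([2.0, -1.0, 0.5]))     # hypothetical k = 3 network outputs
print(lam, lam.sum())                         # entries lie in [0, 1] and sum to 1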