Vishwas1 committed on
Commit f14aa2b · verified · 1 Parent(s): 8529d93

Upload dataset_chunk_43.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_43.csv +2 -0
dataset_chunk_43.csv ADDED
@@ -0,0 +1,2 @@
 
 
 
1 + text
2 + "a positive output. To ensure that the computed variance is positive, we pass the second network output through a function that maps an arbitrary value to a positive one. A suitable choice is the squaring function, giving:

\mu = f_1[x, \phi] \qquad \sigma^2 = f_2[x, \phi]^2,  (5.14)

which results in the loss function:

\hat{\phi} = \operatorname{argmin}_{\phi} \Bigl[ -\sum_{i=1}^{I} \Bigl( \log\Bigl[ \frac{1}{\sqrt{2\pi f_2[x_i, \phi]^2}} \Bigr] - \frac{(y_i - f_1[x_i, \phi])^2}{2 f_2[x_i, \phi]^2} \Bigr) \Bigr].  (5.15)

Homoscedastic and heteroscedastic models are compared in figure 5.5.

5.4 Example 2: binary classification

In binary classification, the goal is to assign the data x to one of two discrete classes y \in \{0, 1\}. In this context, we refer to y as a label. Examples of binary classification include (i) predicting whether a restaurant review is positive (y = 1) or negative (y = 0) from text data x and (ii) predicting whether a tumor is present (y = 1) or absent (y = 0) from an MRI scan x.

Figure 5.5 Homoscedastic vs. heteroscedastic regression. a) A shallow neural network for homoscedastic regression predicts just the mean \mu of the output distribution from the input x. b) The result is that while the mean (blue line) is a piecewise linear function of the input x, the variance is constant everywhere (arrows and gray region show ±2 standard deviations). c) A shallow neural network for heteroscedastic regression also predicts the variance \sigma^2 (or, more precisely, computes its square root, which we then square). d) The standard deviation now also becomes a piecewise linear function of the input x.

Figure 5.6 Bernoulli distribution. The Bernoulli distribution is defined on the domain z \in \{0, 1\} and has a single parameter \lambda that denotes the probability of observing z = 1. It follows that the probability of observing z = 0 is 1 - \lambda.

Figure 5.7 Logistic sigmoid function. This function maps the real line z \in \mathbb{R} to numbers between zero and one, so \operatorname{sig}[z] \in [0, 1]. An input of 0 is mapped to 0.5. Negative inputs are mapped to numbers below 0.5, and positive inputs to numbers above 0.5.

Once again, we follow the recipe from section 5.2 to construct the loss function. First, we choose a probability distribution over the output space y \in \{0, 1\}. A suitable choice is the Bernoulli distribution, which is defined on the domain \{0, 1\}. This has a single parameter \lambda \in [0, 1] that represents the probability that y takes the value one (figure 5.6):

Pr(y|\lambda) = \begin{cases} 1 - \lambda & y = 0 \\ \lambda & y = 1 \end{cases},  (5.16)

which can equivalently be written as:

Pr(y|\lambda) = (1 - \lambda)^{1-y} \cdot \lambda^{y}.  (5.17)

Second, we set the machine learning model f[x, \phi] to predict the single distribution parameter \lambda. However, \lambda can only take values in the range [0, 1], and we cannot guarantee that the network output will lie in this range. Consequently, we pass the network output through a function that maps the real numbers \mathbb{R} to [0, 1]. A suitable function is the logistic sigmoid (figure 5.7, problem 5.1):

\operatorname{sig}[z] = \frac{1}{1 + \exp[-z]}.  (5.18)

Hence, we predict the distribution parameter as \lambda = \operatorname{sig}[f[x, \phi]]. The likelihood is now:

Pr(y|x) = (1 - \operatorname{sig}[f[x, \phi]])^{1-y} \cdot \operatorname{sig}[f[x, \phi]]^{y}.  (5.19)

This is depicted in figure 5.8 for a shallow"