Vishwas1 committed
Commit 0f5a9c7 · verified · 1 Parent(s): 55e1e4c

Upload dataset_chunk_52.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_52.csv +2 -0
dataset_chunk_52.csv ADDED
@@ -0,0 +1,2 @@
+ text
+ "in equation 6.3. in this case, we have used a line search procedure to find the value of α that decreases the loss the most at each iteration. 6.1.2 gabor model example loss functions for linear regression problems (figure 6.1c) always have a single well- defined global minimum. more formally, they are convex, which means that no chord problem6.2 (line segment between two points on the surface) intersects the function. convexity implies that wherever we initialize the parameters, we are bound to reach the minimum if we keep walking downhill; the training procedure can’t fail. unfortunately, loss functions for most nonlinear models, including both shallow and deepnetworks, are non-convex. visualizingneural networkloss functions ischallenging duetothenumberofparameters. hence,wefirstexploreasimplernonlinearmodelwith two parameters to gain insight into the properties of non-convex loss functions: (cid:18) (cid:19) (ϕ +0.06·ϕ x)2 f[x,ϕ]=sin[ϕ +0.06·ϕ x]·exp − 0 1 . (6.8) 0 1 32.0 this gabor model maps scalar input x to scalar output y and consists of a sinusoidal problems6.3–6.5 component (creating an oscillatory function) multiplied by a negative exponential com- ponent (causing the amplitude to decrease as we move from the center). it has two parameters ϕ = [ϕ ,ϕ ]t, where ϕ ∈ r determines the mean position of the function 0 1 0 and ϕ ∈r+ stretches or squeezes it along the x-axis (figure 6.2). 1 consider a training set of i examples {x ,y } (figure 6.3). the least squares loss i i function for i training examples is defined as: xi l[ϕ]= (f[x ,ϕ]−y )2. (6.9) i i i=1 once more, the goal is to find the parameters ϕˆ that minimize this loss. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.6.1 gradient descent 81 figure 6.2 gabor model. this nonlinear model maps scalar input x to scalar output y and has parameters ϕ = [ϕ ,ϕ ]t. it describes a sinusoidal function 0 1 that decreases in amplitude with distance from its center. parameter ϕ ∈ r 0 determines the position of the center. as ϕ increases, the function moves left. 0 parameter ϕ ∈r+ squeezes the function along the x-axis relative to the center. 1 as ϕ increases, the function narrows. a–c) model with different parameters. 1 figure 6.3 training data for fitting the gabormodel. thetrainingdatasetcon- tains28input/outputexamples{x ,y }. i i these data were created by uniformly sampling x ∈ [−15,15], passing the i samplesthroughagabormodelwithpa- rameters ϕ = [0.0,16.6]t, and adding normally distributed noise. 6.1.3 local minima and saddle points figure 6.4 depicts the loss function associated with the gabor model for this dataset. there are numerous local minima (cyan circles). here the gradient is zero, and the loss problem6.6 increases if we move in any direction, but we are not at the overall minimum of the function. thepointwiththelowestlossisknownastheglobal minimumandisdepicted by the gray circle. if we start in a random position and use gradient descent to go downhill, there is problems6.7–6.8 no guarantee that we will wind up at the global minimum and find the best parameters (figure 6.5a). it’s equally or even more likely that the algorithm will terminate in one of the local minima. furthermore, there is no way of knowing whether there is a better solution elsewhere. draft: please send errata to [email protected] 6 fitting models figure6.4lossfunctionforthegabormodel. a)thelossfunctionisnon-convex, withmultiplelocalminima"