"a three-layer network that cannot be realized by any two-layer network if the capacity is sub-exponential in the input dimension. cohen et al. (2016), safran & shamir (2017), and poggio et al. (2017) also demonstrate functions that deep networks can approximateefficiently,butshallowonescannot. liang&srikant(2016)showthatforabroad class of functions, including univariate functions, shallow networks require exponentially more hidden units than deep networks for a given upper bound on the approximation error. width efficiency: luetal.(2017)investigatewhethertherearewideshallownetworks(i.e., shallow networks with lots of hidden units) that cannot be realized by narrow networks whose depth is not substantially larger. they show that there exist classes of wide, shallow networks that can only be expressed by narrow networks with polynomial depth. this is known as the width efficiency of neural networks. this polynomial lower bound on width is less restrictive than the exponential lower bound on depth, suggesting that depth is more important. vardi et al. (2022) subsequently showed that the price for making the width small is only a linear increase in the network depth for networks with relu activations. problems problem 4.1∗ consider composing the two neural networks in figure 4.8. draw a plot of the relationship between the input x and output y′ for x∈[−1,1]. problem 4.2 identify the four hyperparameters in figure 4.6. problem 4.3 using the non-negative homogeneity property of the relu function (see prob- lem 3.5), show that: draft: please send errata to [email protected] 4 deep neural networks figure 4.8composition oftwonetworksforproblem4.1. a)theoutput y ofthe first network becomes the input to the second. b) the first network computes this function with output values y ∈ [−1,1]. c) the second network computes this function on the input range y∈[−1,1]. h i (cid:20) (cid:20) (cid:21)(cid:21) 1 1 relu β +λ ·ω relu[β +λ ·ω x] =λ λ ·relu β +ω relu β +ω x , 1 1 1 0 0 0 0 1 λ λ 1 1 λ 0 0 0 1 0 (4.18) where λ and λ are non-negative scalars. from this, we see that the weight matrices can be 0 1 rescaled by any magnitude as long as the biases are also adjusted, and the scale factors can be re-applied at the end of the network. problem4.4writeouttheequationsforadeepneuralnetworkthattakesd =5inputs,d =4 i o outputs and has three hidden layers of sizes d = 20, d = 10, and d = 7, respectively, in 1 2 3 both the forms of equations 4.15 and 4.16. what are the sizes of each weight matrix ω• and bias vector β ? • problem 4.5 consider a deep neural network with d =5 inputs, d =1 output, and k =20 i o hidden layers containing d=30 hidden units each. what is the depth of this network? what is the width? problem 4.6 consider a network with d =1 input, d =1 output, and k =10 layers, with i o d = 10 hidden units in each. would the number of weights increase more if we increased the depth by one or the width by one? provide your reasoning. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 55 problem 4.7 choose values for the parameters ϕ={ϕ ,ϕ ,ϕ ,ϕ ,θ ,θ ,θ ,θ ,θ ,θ } for 0 1 2 3 10 11 20 21 30 31 the shallow neural network in equation 3.1 that will define an identity function over a finite range x∈[a,b]. problem 4.8∗ figure 4.9 shows the activations in the three hidden units of a shallow network (as in figure 3.3). the slopes in the hidden units are 1.0, 1.0, and -1.0" |