", we initialize the network parameters so that the expected variance of the activations(intheforwardpass)andgradients(inthebackwardpass)remainsthesame between layers. he initialization (section 7.5) achieves this for relu activations by initializing the biases β to zero and choosing normally distributed weights ω with mean zero and variance 2/d where d is the number of hidden units in the previous layer. h h now consider a residual network. we do not have to worry about the intermediate values or gradients vanishing with network depth since there exists a path whereby each layer directly contributes to the network output (equation 11.5 and figure 11.4b). however, even if we use he initialization within the residual block, the values in the forward pass increase exponentially as we move through the network. toseewhy,considerthatweaddtheresultoftheprocessingintheresidualblockback totheinput. eachbranchhassome(uncorrelated)variability. hence,theoverallvariance problem11.4 increases when we recombine them. with relu activations and he initialization, the expected variance is unchanged by the processing in each block. consequently, when we recombine with the input, the variance doubles (figure 11.6a), growing exponentially withthenumberofresidualblocks. thislimitsthepossiblenetworkdepthbeforefloating point precision is exceeded in the forward pass. a similar argument applies to the gradients in the backward pass of the backpropagation algorithm. hence,residualnetworksstillsufferfromunstableforwardpropagationandexploding gradientsevenwithheinitialization. oneapproachthatwouldstabilizetheforwardand backwardpasseswouldbeto√useheinitializationandthenmultiplythecombinedoutput of each residual block by 1/ 2 to compensate for the doubling (figure 11.6b). however, it is more usual to use batch normalization. 11.4 batch normalization batch normalizationorbatchnormshiftsandrescaleseachactivationhsothatitsmean and variance across the batch b become values that are learned during training. first, the empirical mean m and standard deviation s are computed: h h this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.11.4 batch normalization 193 figure 11.6 variance in residual networks. a) he initialization ensures that the expectedvarianceremainsunchangedafteralinearplusrelulayerf . unfortu- k nately,inresidualnetworks,theinputofeachblockisaddedbacktotheoutput, sothevariancedoublesateachlayer(graynumbersindicatevariance√)andgrows exponentially. b) one approach would be to rescale the signal by 1/ 2 between each residual block. c) a second method uses batch normalization (bn) as the first step in the residual block and initializes the associated offset δ to zero and scaleγ toone. thistransformstheinputtoeachlayertohaveunitvariance,and with he initialization, the output variance will also be one. now the variance increases linearly with the number of residual blocks. a side-effect is that, at initialization, later network layers are dominated by the residual connection and are hence close to computing the identity. x 1 m = h h |b| i s i∈b x 1 s = (h −m )2, (11.7) h |b| i h i∈b where all quantities are scalars. then we use these statistics to standardize the batch appendixc.2.4 activations to have mean zero and unit variance: standardization h −m h ← i h ∀i∈b, (11.8) i s +ϵ h where ϵ is a small number that prevents division by zero if h is the same for every i member of the batch and s =0. h finally, the normalized variable is scaled by γ and shifted by δ: h ←γh +δ ∀i∈b. 
11.4 Batch normalization

Batch normalization or BatchNorm shifts and rescales each activation h so that its mean and variance across the batch B become values that are learned during training. First, the empirical mean m_h and standard deviation s_h are computed:

m_h = \frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} h_i
s_h = \sqrt{\frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} (h_i - m_h)^2},    (11.7)

where all quantities are scalars. Then we use these statistics to standardize the batch activations to have mean zero and unit variance (appendix C.2.4):

h_i \leftarrow \frac{h_i - m_h}{s_h + \epsilon} \quad \forall i \in \mathcal{B},    (11.8)

where ϵ is a small number that prevents division by zero if h_i is the same for every member of the batch and s_h = 0.

Finally, the normalized variable is scaled by γ and shifted by δ:

h_i \leftarrow \gamma h_i + \delta \quad \forall i \in \mathcal{B}.    (11.9)

After this operation, the activations have mean δ and standard deviation γ across all members of the batch (problem 11.5). Both of these quantities are learned during training.
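As a concrete illustration, the minimal NumPy sketch below (not the book's code; the function name batchnorm_1d and the example parameter values are invented for illustration) applies equations 11.7-11.9 to a single activation observed across a batch. Framework implementations additionally track running statistics so that the operation can be applied at test time without a batch.

import numpy as np

def batchnorm_1d(h, gamma=1.0, delta=0.0, eps=1e-5):
    """Apply equations 11.7-11.9 to one scalar activation h observed across a batch."""
    m_h = h.mean()                          # empirical mean over the batch (equation 11.7)
    s_h = np.sqrt(np.mean((h - m_h) ** 2))  # empirical standard deviation (equation 11.7)
    h = (h - m_h) / (s_h + eps)             # standardize to zero mean, unit variance (equation 11.8)
    return gamma * h + delta                # scale by gamma and shift by delta (equation 11.9)

h = np.random.default_rng(1).normal(3.0, 2.0, size=128)  # one activation across a batch of 128
out = batchnorm_1d(h, gamma=0.5, delta=2.0)
print(out.mean(), out.std())  # approximately delta = 2.0 and gamma = 0.5

Initializing the scale γ to one and the offset δ to zero, as in figure 11.6c, means that each activation enters the rest of the residual block with zero mean and unit variance at the start of training.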