Dataset columns (type and min/max length or value):

doi: string (length 10 to 10)
chunk-id: int64 (0 to 936)
chunk: string (length 401 to 2.02k)
id: string (length 12 to 14)
title: string (length 8 to 162)
summary: string (length 228 to 1.92k)
source: string (length 31 to 31)
authors: string (length 7 to 6.97k)
categories: string (length 5 to 107)
comment: string (length 4 to 398)
journal_ref: string (length 8 to 194)
primary_category: string (length 5 to 17)
published: string (length 8 to 8)
updated: string (length 8 to 8)
references: list
1706.02515
76
• Equalities: we only solved the square root and factored out the resulting terms (2(2x + y) + 1) and (2(x + y) + 0.878). • We set α = α01 and multiplied out. Thereafter we also factored out x in the numerator. Finally a quadratic equation was solved. The sub-function has its minimal value for minimal x = ντ = 1.5 · 0.8 = 1.2 and minimal y = µω = (−1) · 0.1 = −0.1. We further minimize the function µω e^{µ²ω²/(2ντ)} (2 − erfc(µω/(√2 √(ντ)))), which we evaluate at ντ = 1.2 and µω = −0.1, that is, at −0.1 e^{0.1²/(2·1.2)} (2 − erfc(−0.1/(√2 √1.2))). We compute the minimum of the term in brackets of ∂/∂µ ˜ξ(µ, ω, ν, τ, λ, α) in Eq. (25):
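Throughout these proofs, ˜µ and ˜ξ are the mean and second moment of SELU activations for a Gaussian net input with mean µω and variance ντ, and ˜ν = ˜ξ − ˜µ². The following minimal numerical sketch is my own addition, not the authors' released code; the λ01, α01 constants and the use of numerical integration instead of the paper's closed form are assumptions of the sketch. It reproduces the mapping and its fixed point (0, 1):

    # Hypothetical helper (not from the paper's repository): the mapping
    # (mu, omega, nu, tau) -> (mu_tilde, nu_tilde) of Eqs. (4)-(5),
    # computed by numerical integration over z ~ N(mu*omega, nu*tau).
    import math
    from scipy import integrate

    LAMBDA_01 = 1.0507009873554805  # lambda_01
    ALPHA_01 = 1.6732632423543772   # alpha_01

    def selu(x, lam=LAMBDA_01, alpha=ALPHA_01):
        return lam * (x if x > 0 else alpha * (math.exp(x) - 1.0))

    def moment_mapping(mu, omega, nu, tau):
        """Mean mu_tilde and variance nu_tilde of selu(z) for z ~ N(mu*omega, nu*tau)."""
        m, s = mu * omega, math.sqrt(nu * tau)
        pdf = lambda z: math.exp(-0.5 * ((z - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        mu_t = integrate.quad(lambda z: selu(z) * pdf(z), -math.inf, math.inf)[0]
        xi_t = integrate.quad(lambda z: selu(z) ** 2 * pdf(z), -math.inf, math.inf)[0]
        return mu_t, xi_t - mu_t ** 2  # xi_t is the second moment, nu_tilde the variance

    print(moment_mapping(0.0, 0.0, 1.0, 1.0))  # approximately (0.0, 1.0): the fixed point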
1706.02515#76
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
77
We compute the minimum of the term in brackets of ∂/∂µ ˜ξ(µ, ω, ν, τ, λ, α) in Eq. (25), that is, of

e^{−µ²ω²/(2ντ)} ( α² ( −e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2 √(ντ))) + e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2 √(ντ))) ) + µω e^{µ²ω²/(2ντ)} (2 − erfc(µω/(√2 √(ντ)))) + √(2/π) √(ντ) ) .

Evaluating the sub-functions at their minimizing values ντ = 1.2 and µω = −0.1 (see above), in particular the term µω e^{µ²ω²/(2ντ)} (2 − erfc(µω/(√2 √(ντ)))) at −0.1 e^{0.1²/(2·1.2)} (2 − erfc(−0.1/(√2 √1.2))), shows that the expression in the outer brackets is at least 0.212234. Therefore the term in brackets of Eq. (25) is larger than zero. Thus, ∂/∂µ ˜ξ(µ, ω, ν, τ, λ, α) has the sign of ω. Since ˜ξ is a function in µω (these variables only appear as this product), we have for x = µω
1706.02515#77
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
78
∂/∂µ ˜ξ = ∂/∂x ˜ξ · ∂x/∂µ = ∂/∂x ˜ξ · ω  (36)

and

∂/∂ω ˜ξ = ∂/∂x ˜ξ · ∂x/∂ω = ∂/∂x ˜ξ · µ ,  (37)

so that

∂/∂ω ˜ξ(µ, ω, ν, τ, λ01, α01) = (µ/ω) ∂/∂µ ˜ξ(µ, ω, ν, τ, λ01, α01) .  (38)

Since ∂/∂µ ˜ξ has the sign of ω, ∂/∂ω ˜ξ has the sign of µ. Therefore

∂/∂ω g(µ, ω, ν, τ, λ01, α01) = ∂/∂ω ˜ξ(µ, ω, ν, τ, λ01, α01)  (39)
1706.02515#78
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
79
has the sign of µ. We now divide the µ-domain into −1 ≤ µ < 0 and 0 ≤ µ ≤ 1. Analogously we divide the ω-domain into −0.1 ≤ ω < 0 and 0 ≤ ω ≤ 0.1. In these domains g is strictly monotonic. For all domains g is strictly monotonically decreasing in ν and strictly monotonically increasing in τ. Note that we now consider the range 3 ≤ ν ≤ 16. For the maximal value of g we set ν = 3 (the smallest ν of this range) and τ = 1.25. We now consider all combinations of these domains:

• −1 ≤ µ < 0 and −0.1 ≤ ω < 0: g is decreasing in µ and decreasing in ω. We set µ = −1 and ω = −0.1.
g(−1, −0.1, 3, 1.25, λ01, α01) = −0.0180173 .

• −1 ≤ µ < 0 and 0 ≤ ω ≤ 0.1: g is increasing in µ and decreasing in ω. We set µ = 0 and ω = 0.
g(0, 0, 3, 1.25, λ01, α01) = −0.148532 .  (41)

• 0 ≤ µ ≤ 1 and −0.1 ≤ ω < 0:
1706.02515#79
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
81
Therefore the maximal value of g is −0.0180173.

# A3.3 Proof of Theorem 3

First we recall Theorem 3:

Theorem (Increasing ν). We consider λ = λ01, α = α01 and the two domains Ω₁⁻ = {(µ, ω, ν, τ) | −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.05 ≤ ν ≤ 0.16, 0.8 ≤ τ ≤ 1.25} and Ω₂⁻ = {(µ, ω, ν, τ) | −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.05 ≤ ν ≤ 0.24, 0.9 ≤ τ ≤ 1.25}. The mapping of the variance ˜ν(µ, ω, ν, τ, λ, α) given in Eq. (5) increases, that is,

˜ν(µ, ω, ν, τ, λ01, α01) > ν  (44)

in both Ω₁⁻ and Ω₂⁻. All fixed points (µ, ν) of mapping Eq. (5) and Eq. (4) ensure for 0.8 ≤ τ that ˜ν > 0.16 and for 0.9 ≤ τ that ˜ν > 0.24. Consequently, the variance mapping Eq. (5) and Eq. (4) ensures a lower bound on the variance ν.
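A quick numerical spot-check of this increase claim, reusing the moment_mapping helper from the numerical sketch earlier in this section; the sample grid is my own choice, not the paper's:

    # Spot-check (assumes the moment_mapping sketch defined earlier): on sample
    # points of the domain Omega_1^- the variance mapping indeed increases nu.
    for mu in (-0.1, 0.0, 0.1):
        for omega in (-0.1, 0.0, 0.1):
            for nu in (0.05, 0.10, 0.16):
                for tau in (0.8, 1.0, 1.25):
                    _, nu_tilde = moment_mapping(mu, omega, nu, tau)
                    assert nu_tilde > nu, (mu, omega, nu, tau, nu_tilde)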
1706.02515#81
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
82
Proof. The mean value theorem states that there exists a t ∈ [0, 1] for which

˜ξ(µ, ω, ν, τ, λ01, α01) − ˜ξ(µ, ω, νmin, τ, λ01, α01) = ∂/∂ν ˜ξ(µ, ω, ν + t(νmin − ν), τ, λ01, α01) (ν − νmin) .  (45)

Therefore

˜ξ(µ, ω, ν, τ, λ01, α01) = ˜ξ(µ, ω, νmin, τ, λ01, α01) + ∂/∂ν ˜ξ(µ, ω, ν + t(νmin − ν), τ, λ01, α01) (ν − νmin) .  (46)

Therefore we are interested in bounding the derivative of the ξ-mapping Eq. (13) with respect to ν:
1706.02515#82
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
84
The sub-term Eq. (308) enters the derivative with a negative sign. According to Lemma 18, the minimal value of sub-term Eq. (308) is obtained by the largest ν, by the smallest τ, and the largest y = µω = 0.01. Also the positive term 2 − erfc(µω/(√2 √(ντ))) is multiplied by τ, which is minimized by using the smallest τ. Therefore we can use the smallest τ in the whole formula to lower bound it.

First we consider the domain 0.05 ≤ ν ≤ 0.16 and 0.8 ≤ τ ≤ 1.25. The factor consisting of the exponential in front of the brackets has its smallest value for e^{−0.01²/(2·0.05·0.8)}. Since erfc is monotonically decreasing, we inserted the smallest argument via erfc(−0.01/(√2 √(0.05·0.8))) in order to obtain the maximal negative contribution. Thus, applying Lemma 18, we obtain the lower bound on the derivative:

∂/∂ν ˜ξ(µ, ω, ν, τ, λ01, α01) = ½ λ01² τ ( e^{−µ²ω²/(2ντ)} α01² ( −e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2 √(ντ))) + 2 e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2 √(ντ))) ) − erfc(µω/(√2 √(ντ))) + 2 )
1706.02515#84
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
86
For applying the mean value theorem, we require the smallest ˜ξ(µ, ω, ν, τ, λ01, α01). We follow the proof of Lemma 8, which shows that at the minimum y = µω must be maximal and x = ντ must be minimal. Thus, the smallest ˜ξ(µ, ω, ν, τ, λ01, α01) is ˜ξ(0.01, 0.01, 0.05, 0.8, λ01, α01) = 0.0662727 for 0.05 ≤ ν and 0.8 ≤ τ. Therefore the mean value theorem and the bound on (˜µ)² (Lemma 43) provide

˜ν = ˜ξ(µ, ω, ν, τ, λ01, α01) − (˜µ(µ, ω, ν, τ, λ01, α01))² ≥  (49)
0.0662727 + 0.969231(ν − 0.05) − 0.005 = 0.01281115 + 0.969231ν ≥ 0.08006969ν + 0.969231ν = 1.049301ν > ν ,
where we used 0.01281115 = 0.08006969 · 0.16 ≥ 0.08006969ν for ν ≤ 0.16.
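The arithmetic in this chain is easy to re-verify; the following tiny check is my own addition, not part of the paper:

    # Re-check the chain 0.0662727 + 0.969231*(nu - 0.05) - 0.005
    #   = 0.01281115 + 0.969231*nu >= ~1.049301*nu > nu   for 0.05 <= nu <= 0.16.
    for i in range(101):
        nu = 0.05 + i * (0.16 - 0.05) / 100
        lhs = 0.0662727 + 0.969231 * (nu - 0.05) - 0.005
        assert abs(lhs - (0.01281115 + 0.969231 * nu)) < 1e-12
        assert lhs >= 1.049301 * nu - 1e-6  # 1.049301 is rounded in the text
        assert lhs > nu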
1706.02515#86
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
87
Next we consider the domain 0.05 ≤ ν ≤ 0.24 and 0.9 ≤ τ ≤ 1.25. The factor consisting of the exponential in front of the brackets has its smallest value for e^{−0.01²/(2·0.05·0.9)}. Since erfc is monotonically decreasing, we inserted the smallest argument via erfc(−0.01/(√2 √(0.05·0.9))) in order to obtain the maximal negative contribution. Thus, applying Lemma 18, we obtain the lower bound on the derivative:

∂/∂ν ˜ξ(µ, ω, ν, τ, λ01, α01) = ½ λ01² τ ( e^{−µ²ω²/(2ντ)} α01² ( −e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2 √(ντ))) + 2 e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2 √(ντ))) ) − erfc(µω/(√2 √(ντ))) + 2 )  (50)

Inserting the extremal values into this expression, that is, ν = 0.24, τ = 0.9, and µω = 0.01 in the α01² sub-term, the smallest exponential factor e^{−0.01²/(2·0.05·0.9)}, and erfc(−0.01/(√2 √(0.05·0.9))) for the last erfc term, yields the lower bound 0.976952 on the derivative.
1706.02515#87
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
88
For applying the mean value theorem, we require the smallest ˜ξ(µ, ω, ν, τ, λ01, α01). We follow the proof of Lemma 8, which shows that at the minimum y = µω must be maximal and x = ντ must be minimal. Thus, the smallest ˜ξ(µ, ω, ν, τ, λ01, α01) is ˜ξ(0.01, 0.01, 0.05, 0.9, λ01, α01) = 0.0738404 for 0.05 ≤ ν and 0.9 ≤ τ. Therefore the mean value theorem and the bound on (˜µ)² (Lemma 43) give

˜ν = ˜ξ(µ, ω, ν, τ, λ01, α01) − (˜µ(µ, ω, ν, τ, λ01, α01))² ≥  (51)
0.0738404 + 0.976952(ν − 0.05) − 0.005 = 0.0199928 + 0.976952ν ≥ 0.08330333ν + 0.976952ν = 1.060255ν > ν ,
where we used 0.0199928 = 0.08330333 · 0.24 ≥ 0.08330333ν for ν ≤ 0.24.

# A3.4 Lemmata and Other Tools Required for the Proofs

# A3.4.1 Lemmata for proving Theorem 1 (part 1): Jacobian norm smaller than one
1706.02515#88
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
89
# A3.4.1 Lemmata for proving Theorem 1 (part 1): Jacobian norm smaller than one

In this section, we show that the largest singular value of the Jacobian of the mapping g is smaller than one. Therefore, g is a contraction mapping. This is even true in a larger domain than the original Ω. We do not need to restrict τ ∈ [0.95, 1.1], but we can extend to τ ∈ [0.8, 1.25]. The range of the other variables is unchanged such that we consider the following domain throughout this section: µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25].

Jacobian of the mapping. In the following, we denote two Jacobians: (1) the Jacobian J of the mapping h : (µ, ν) ↦ (˜µ, ˜ξ), and (2) the Jacobian H of the mapping g : (µ, ν) ↦ (˜µ, ˜ν), because the influence of ˜µ on ˜ν is small, and many properties of the system can already be seen on J.
1706.02515#89
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
91
J11(µ, ω, ν, τ, λ, α) = ∂/∂µ ˜µ(µ, ω, ν, τ, λ, α)  (54)
 = ½ λω ( α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) − erfc(µω/(√2 √(ντ))) + 2 )

J12(µ, ω, ν, τ, λ, α) = ∂/∂ν ˜µ(µ, ω, ν, τ, λ, α)  (55)
 = ¼ λτ ( α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) − (α − 1) √(2/(πντ)) e^{−µ²ω²/(2ντ)} )

J21(µ, ω, ν, τ, λ, α) = ∂/∂µ ˜ξ(µ, ω, ν, τ, λ, α)  (56)
 = λ²ω ( α² ( −e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) + e^{2(µω+ντ)} erfc((µω+2ντ)/(√2 √(ντ))) ) + µω (2 − erfc(µω/(√2 √(ντ)))) + √(2/π) √(ντ) e^{−µ²ω²/(2ντ)} )

J22(µ, ω, ν, τ, λ, α) = ∂/∂ν ˜ξ(µ, ω, ν, τ, λ, α)  (57)
 = ½ λ²τ ( α² ( −e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) + 2 e^{2(µω+ντ)} erfc((µω+2ντ)/(√2 √(ντ))) ) − erfc(µω/(√2 √(ντ))) + 2 )
1706.02515#91
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
93
Proof sketch: Bounding the largest singular value of the Jacobian. If the largest singular value of the Jacobian is smaller than 1, then the spectral norm of the Jacobian is smaller than 1. Then the mapping Eq. (4) and Eq. (5) of the mean and variance to the mean and variance in the next layer is contracting. We show that the largest singular value is smaller than 1 by evaluating the function S(µ, ω, ν, τ, λ, α) on a grid. Then we use the Mean Value Theorem to bound the deviation of the function S between grid points. Toward this end we have to bound the gradient of S with respect to (µ, ω, ν, τ). If all function values plus gradient times the deltas (differences between grid points and evaluated points) are still smaller than 1, then we have proved that the function is below 1.

The singular values of the 2 × 2 matrix

A = ( a11  a12 ; a21  a22 )  (58)

are

s1 = ½ ( √((a11 + a22)² + (a21 − a12)²) + √((a11 − a22)² + (a12 + a21)²) )  (59)
s2 = ½ ( √((a11 + a22)² + (a21 − a12)²) − √((a11 − a22)² + (a12 + a21)²) ) .  (60)
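The grid-evaluation strategy can be imitated numerically: estimate the Jacobian H of g : (µ, ν) ↦ (˜µ, ˜ν) by central differences and take its largest singular value. The sketch below is only illustrative; it reuses the moment_mapping helper from the earlier sketch and a coarse grid of my own choosing, not the paper's fine grid or its analytic bounds on the gradient of S:

    # Largest singular value of a numerically estimated Jacobian of
    # g: (mu, nu) -> (mu_tilde, nu_tilde) on a coarse grid of the domain
    # mu, omega in [-0.1, 0.1], nu in [0.8, 1.5], tau in [0.8, 1.25].
    import numpy as np

    def jacobian_g(mu, omega, nu, tau, eps=1e-5):
        f = lambda m, n: np.array(moment_mapping(m, omega, n, tau))
        d_mu = (f(mu + eps, nu) - f(mu - eps, nu)) / (2 * eps)  # column of d/d(mu)
        d_nu = (f(mu, nu + eps) - f(mu, nu - eps)) / (2 * eps)  # column of d/d(nu)
        return np.column_stack([d_mu, d_nu])

    largest = 0.0
    for mu in (-0.1, 0.0, 0.1):
        for omega in (-0.1, 0.0, 0.1):
            for nu in (0.8, 1.15, 1.5):
                for tau in (0.8, 1.0, 1.25):
                    s = np.linalg.svd(jacobian_g(mu, omega, nu, tau), compute_uv=False)[0]
                    largest = max(largest, s)
    print(largest)  # stays below 1 on this grid, consistent with g being a contraction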
1706.02515#93
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
94
We used an explicit formula for the singular values [4]. We now set H11 = a11, H12 = a12, H21 = a21, H22 = a22 to obtain a formula for the largest singular value of the Jacobian depending on (µ, ω, ν, τ, λ, α). The formula for the largest singular value of the Jacobian is:

S(µ, ω, ν, τ, λ, α) = ½ ( √((H11 + H22)² + (H21 − H12)²) + √((H11 − H22)² + (H12 + H21)²) )  (61)
 = ½ ( √((J11 + J22 − 2˜µJ12)² + (J21 − 2˜µJ11 − J12)²) + √((J11 − J22 + 2˜µJ12)² + (J12 + J21 − 2˜µJ11)²) ) ,

where the J are defined in Eq. (54) and we left out the dependencies on (µ, ω, ν, τ, λ, α) in order to keep the notation uncluttered, e.g. we wrote J11 instead of J11(µ, ω, ν, τ, λ, α).
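The explicit 2 × 2 singular value formula can be verified directly against a numerical SVD; the short self-contained check below, with random test matrices of my own choosing, is not part of the paper:

    # Verify s1,2 = 1/2*(sqrt((a11+a22)^2+(a21-a12)^2) +/- sqrt((a11-a22)^2+(a12+a21)^2))
    import numpy as np

    def singular_values_2x2(a11, a12, a21, a22):
        p = np.hypot(a11 + a22, a21 - a12)
        q = np.hypot(a11 - a22, a12 + a21)
        return (p + q) / 2, abs(p - q) / 2

    rng = np.random.default_rng(0)
    for _ in range(1000):
        a = rng.normal(size=(2, 2))
        s1, s2 = singular_values_2x2(a[0, 0], a[0, 1], a[1, 0], a[1, 1])
        assert np.allclose(sorted((s1, s2), reverse=True),
                           np.linalg.svd(a, compute_uv=False))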
1706.02515#94
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
95
Bounds on the derivatives of the Jacobian entries. In order to bound the gradient of the singular value, we have to bound the derivatives of the Jacobian entries J11(µ, ω, ν, τ, λ, α), J12(µ, ω, ν, τ, λ, α), J21(µ, ω, ν, τ, λ, α), and J22(µ, ω, ν, τ, λ, α) with respect to µ, ω, ν, and τ. The values λ and α are fixed to λ01 and α01. The 16 derivatives of the 4 Jacobian entries with respect to the 4 variables are:

∂J11/∂µ = ½ λω² ( α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) − √2 (α − 1) e^{−µ²ω²/(2ντ)} / (√π √(ντ)) )  (62)
1706.02515#95
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
96
∂J11/∂ω, ∂J11/∂ν, ∂J11/∂τ, ∂J12/∂µ (= ∂J11/∂ν), ∂J12/∂ω, ∂J12/∂ν, and ∂J12/∂τ are obtained in the same way by differentiating Eq. (54) and Eq. (55); each of them is a combination of terms of the form e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) and Gaussian terms e^{−µ²ω²/(2ντ)}, with prefactors that are polynomials and square roots in µ, ω, ν, and τ.
1706.02515#96
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
99
∂J21/∂ω, ∂J21/∂ν, ∂J21/∂τ, ∂J22/∂µ (= ∂J21/∂ν), ∂J22/∂ω, and ∂J22/∂ν are obtained analogously from Eq. (56) and Eq. (57); they consist of the same building blocks e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))), e^{2(µω+ντ)} erfc((µω+2ντ)/(√2 √(ντ))), and e^{−µ²ω²/(2ντ)}, again with polynomial and square-root prefactors in µ, ω, ν, and τ.
1706.02515#99
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
101
Lemma 5 (Bounds on the Derivatives). The following bounds on the absolute values of the derivatives of the Jacobian entries J11(µ, ω, ν, τ, λ, α), J12(µ, ω, ν, τ, λ, α), J21(µ, ω, ν, τ, λ, α), and J22(µ, ω, ν, τ, λ, α) with respect to µ, ω, ν, and τ hold:

|∂J11/∂µ| ≤ 0.0031049101995398316  (63)
|∂J11/∂ω| ≤ 1.055872374194189
|∂J11/∂ν| ≤ 0.031242911235461816
|∂J11/∂τ| ≤ 0.03749149348255419
|∂J12/∂µ| ≤ 0.031242911235461816
|∂J12/∂ω| ≤ 0.031242911235461816
|∂J12/∂ν| ≤ 0.21232788238624354
|∂J12/∂τ| ≤ 0.2124377655377270
1706.02515#101
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
102
|∂J21/∂µ| ≤ 0.02220441024325437
|∂J21/∂ω| ≤ 1.146955401845684
|∂J21/∂ν| ≤ 0.14983446469110305
|∂J21/∂τ| ≤ 0.17980135762932363
|∂J22/∂µ| ≤ 0.14983446469110305
|∂J22/∂ω| ≤ 0.14983446469110305
|∂J22/∂ν| ≤ 1.395740052651535
|∂J22/∂τ| ≤ 2.396685907216327

Proof. See proof 39.

Bounds on the entries of the Jacobian.

Lemma 6 (Bound on J11). The absolute value of the function
J11 = ½ λω ( α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) − erfc(µω/(√2 √(ντ))) + 2 )
is bounded by |J11| ≤ 0.104497 in the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.8 ≤ ν ≤ 1.5, and 0.8 ≤ τ ≤ 1.25 for α = α01 and λ = λ01.

Proof.
1706.02515#102
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
103
Proof.

|J11| = | ½ λω ( α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) + 2 − erfc(µω/(√2 √(ντ))) ) |
 ≤ ½ |λ| |ω| ( |α| 0.587622 + 1.00584 ) ≤ 0.104497 ,

where we used that (a) J11 is strictly monotonically increasing in µω and |2 − erfc(0.01/(√2 √(ντ)))| ≤ 1.00584, and (b) Lemma 47, which gives e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) ≤ e^{−0.01+0.32} erfc(0.63/(√2 · 0.8)) = 0.587622.

Lemma 7 (Bound on J12). The absolute value of the function
J12 = ¼ λτ ( α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) − (α − 1) √(2/(πντ)) e^{−µ²ω²/(2ντ)} )
is bounded by |J12| ≤ 0.194145 in the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.8 ≤ ν ≤ 1.5, and 0.8 ≤ τ ≤ 1.25 for α = α01 and λ = λ01.

Proof.
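The Lemma 6 bound can also be checked numerically by evaluating the closed-form J11 on a grid of the stated domain; this is a sketch of my own, using the usual SELU constants λ01 and α01:

    # Grid check of |J11| <= 0.104497 for mu, omega in [-0.1, 0.1],
    # nu in [0.8, 1.5], tau in [0.8, 1.25], with lambda_01 and alpha_01.
    import numpy as np
    from scipy.special import erfc

    LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

    def j11(mu, omega, nu, tau, lam=LAM, alpha=ALPHA):
        m, u = mu * omega, nu * tau
        return 0.5 * lam * omega * (alpha * np.exp(m + u / 2) * erfc((m + u) / np.sqrt(2 * u))
                                    - erfc(m / np.sqrt(2 * u)) + 2.0)

    mu, om, nu, tau = np.meshgrid(np.linspace(-0.1, 0.1, 21), np.linspace(-0.1, 0.1, 21),
                                  np.linspace(0.8, 1.5, 21), np.linspace(0.8, 1.25, 21),
                                  indexing="ij")
    m = np.abs(j11(mu, om, nu, tau)).max()
    assert m <= 0.104497
    print(m)  # about 0.104, consistent with Lemma 6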
Proof.
|J12| ≤ ¼ λ |τ| | α e^(µω+ντ/2) erfc( (µω+ντ) / (√2 √(ντ)) ) − (α − 1) √(2/(π ντ)) e^(−(µω)²/(2ντ)) |
≤ ¼ λ · 1.25 · |0.983247 − 0.392294| < 0.194035 .
For the first term we have 0.434947 ≤ e^(µω+ντ/2) erfc( (µω+ντ) / (√2 √(ντ)) ) ≤ 0.587622 after Lemma 47, and for the second term 0.582677 ≤ √(2/(π ντ)) e^(−(µω)²/(2ντ)) ≤ 0.997356, which can easily be seen by maximizing or minimizing the arguments of the exponential or the square root function. The first term scaled by α is 0.727780 ≤ α e^(µω+ντ/2) erfc( (µω+ντ) / (√2 √(ντ)) ) ≤ 0.983247 and the second term scaled by α − 1 is 0.392294 ≤ (α − 1) √(2/(π ντ)) e^(−(µω)²/(2ντ)) ≤ 0.671484. Therefore, the absolute difference between these terms is at most 0.983247 − 0.392294, leading to the derived bound.
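As an illustrative numerical cross-check of the two bounds above (not part of the original proof), the following sketch evaluates J11 and J12 on a grid over the stated domain. It assumes the closed forms of J11 and J12 used above and the commonly used numerical values of λ01 and α01; the printed maxima should stay below 0.104497 and 0.194145.

```python
# Grid spot check of |J11| <= 0.104497 and |J12| <= 0.194145 (illustrative only).
import numpy as np
from scipy.special import erfc

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772  # common values of lambda_01, alpha_01

def J11(mu, om, nu, tau):
    m, s2 = mu * om, nu * tau
    s = np.sqrt(s2)
    return 0.5 * LAM * om * (ALPHA * np.exp(m + s2 / 2) * erfc((m + s2) / (np.sqrt(2) * s))
                             + 2.0 - erfc(m / (np.sqrt(2) * s)))

def J12(mu, om, nu, tau):
    m, s2 = mu * om, nu * tau
    s = np.sqrt(s2)
    return 0.25 * LAM * tau * (ALPHA * np.exp(m + s2 / 2) * erfc((m + s2) / (np.sqrt(2) * s))
                               - (ALPHA - 1.0) * np.sqrt(2.0 / (np.pi * s2)) * np.exp(-m**2 / (2 * s2)))

grid = np.meshgrid(np.linspace(-0.1, 0.1, 21), np.linspace(-0.1, 0.1, 21),
                   np.linspace(0.8, 1.5, 21), np.linspace(0.8, 1.25, 21), indexing="ij")
print("max |J11| on grid:", np.abs(J11(*grid)).max())  # below 0.104497
print("max |J12| on grid:", np.abs(J12(*grid)).max())  # below 0.194145
```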
Bounds on mean, variance and second moment. For deriving bounds on ˜µ, ˜ξ, and ˜ν, we need the following lemma.

Lemma 8 (Derivatives of the Mapping). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25].

The derivative ∂˜µ(µ, ω, ν, τ, λ, α)/∂µ has the sign of ω. The derivative ∂˜µ(µ, ω, ν, τ, λ, α)/∂ν is positive. The derivative ∂˜ξ(µ, ω, ν, τ, λ, α)/∂µ has the sign of ω. The derivative ∂˜ξ(µ, ω, ν, τ, λ, α)/∂ν is positive.
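The four sign claims can be probed numerically with central differences. The sketch below is only an illustration (it assumes the closed forms of the mean mapping Eq. (4) and the second-moment mapping Eq. (5) together with the commonly used values of λ01 and α01) and is not a substitute for the analytic proof.

```python
# Finite-difference probe of the sign claims of Lemma 8 (illustrative only).
import itertools
import numpy as np
from scipy.special import erfc

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772  # common values of lambda_01, alpha_01
SQ2 = np.sqrt(2.0)

def mu_tilde(mu, om, nu, tau):
    m, s2 = mu * om, nu * tau
    s = np.sqrt(s2)
    return 0.5 * LAM * (2 * m - (m + ALPHA) * erfc(m / (SQ2 * s))
                        + np.sqrt(2 / np.pi) * s * np.exp(-m**2 / (2 * s2))
                        + ALPHA * np.exp(m + s2 / 2) * erfc((m + s2) / (SQ2 * s)))

def xi_tilde(mu, om, nu, tau):
    m, s2 = mu * om, nu * tau
    s = np.sqrt(s2)
    return 0.5 * LAM**2 * ((m**2 + s2) * (2 - erfc(m / (SQ2 * s)))
                           + ALPHA**2 * (-2 * np.exp(m + s2 / 2) * erfc((m + s2) / (SQ2 * s))
                                         + np.exp(2 * (m + s2)) * erfc((m + 2 * s2) / (SQ2 * s))
                                         + erfc(m / (SQ2 * s)))
                           + np.sqrt(2 / np.pi) * m * s * np.exp(-m**2 / (2 * s2)))

eps, ok = 1e-6, True
for mu, om, nu, tau in itertools.product([-0.1, -0.05, 0.05, 0.1], [-0.1, -0.05, 0.05, 0.1],
                                         [0.8, 1.0, 1.2, 1.5], [0.8, 1.0, 1.25]):
    dmu_dmu = (mu_tilde(mu + eps, om, nu, tau) - mu_tilde(mu - eps, om, nu, tau)) / (2 * eps)
    dmu_dnu = (mu_tilde(mu, om, nu + eps, tau) - mu_tilde(mu, om, nu - eps, tau)) / (2 * eps)
    dxi_dmu = (xi_tilde(mu + eps, om, nu, tau) - xi_tilde(mu - eps, om, nu, tau)) / (2 * eps)
    dxi_dnu = (xi_tilde(mu, om, nu + eps, tau) - xi_tilde(mu, om, nu - eps, tau)) / (2 * eps)
    ok = ok and np.sign(dmu_dmu) == np.sign(om) and dmu_dnu > 0
    ok = ok and np.sign(dxi_dmu) == np.sign(om) and dxi_dnu > 0
print("all sign claims hold on the coarse grid:", bool(ok))
```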
Proof. We use Lemma 8, which states that the derivatives of the mappings Eq. (4) and Eq. (5) with respect to ν are positive and those with respect to µ have the sign of ω. Therefore, for a given sign of ω, the mappings are strictly monotonic and their maxima and minima are found at the borders of the domain. The minimum of ˜µ is obtained at µω = −0.01 and its maximum at µω = 0.01, with ν and τ at their minimal or maximal values, respectively. It follows that
−0.041160 ≤ ˜µ(−0.1, 0.1, 0.8, 0.8, λ01, α01) ≤ ˜µ ≤ ˜µ(0.1, 0.1, 1.5, 1.25, λ01, α01) ≤ 0.087653 .   (66)
Similarly, the maximum and minimum of ˜ξ are obtained at the values mentioned above:
0.703257 ≤ ˜ξ(−0.1, 0.1, 0.8, 0.8, λ01, α01) ≤ ˜ξ ≤ ˜ξ(0.1, 0.1, 1.5, 1.25, λ01, α01) ≤ 1.643705 .   (67)
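The corner values in (66) and (67) can be reproduced directly. The following sketch (illustrative; same assumptions about the closed forms of Eq. (4) and Eq. (5) and the values of λ01, α01 as before) evaluates the mapping at the two extreme corners; the printed numbers should agree with −0.041160, 0.087653, 0.703257, and 1.643705 up to rounding.

```python
# Reproducing the corner values used in (66) and (67) (illustrative only).
import numpy as np
from scipy.special import erfc

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772  # common values of lambda_01, alpha_01
SQ2 = np.sqrt(2.0)

def mu_tilde(mu, om, nu, tau):
    m, s2 = mu * om, nu * tau
    s = np.sqrt(s2)
    return 0.5 * LAM * (2 * m - (m + ALPHA) * erfc(m / (SQ2 * s))
                        + np.sqrt(2 / np.pi) * s * np.exp(-m**2 / (2 * s2))
                        + ALPHA * np.exp(m + s2 / 2) * erfc((m + s2) / (SQ2 * s)))

def xi_tilde(mu, om, nu, tau):
    m, s2 = mu * om, nu * tau
    s = np.sqrt(s2)
    return 0.5 * LAM**2 * ((m**2 + s2) * (2 - erfc(m / (SQ2 * s)))
                           + ALPHA**2 * (-2 * np.exp(m + s2 / 2) * erfc((m + s2) / (SQ2 * s))
                                         + np.exp(2 * (m + s2)) * erfc((m + 2 * s2) / (SQ2 * s))
                                         + erfc(m / (SQ2 * s)))
                           + np.sqrt(2 / np.pi) * m * s * np.exp(-m**2 / (2 * s2)))

print(mu_tilde(-0.1, 0.1, 0.8, 0.8))    # about -0.04116
print(mu_tilde(0.1, 0.1, 1.5, 1.25))    # about  0.08765
print(xi_tilde(-0.1, 0.1, 0.8, 0.8))    # about  0.70326
print(xi_tilde(0.1, 0.1, 1.5, 1.25))    # about  1.64371
```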
Hence we obtain the following bounds on ˜ν:
0.703257 − ˜µ² ≤ ˜ξ − ˜µ² ≤ 1.643705 − ˜µ²
0.703257 − 0.007683 ≤ ˜ν ≤ 1.643705 − 0.007682
0.695574 ≤ ˜ν ≤ 1.636023 .   (68)

Upper Bounds on the Largest Singular Value of the Jacobian.

Lemma 10 (Upper Bounds on Absolute Derivatives of Largest Singular Value). We set α = α01 and λ = λ01 and restrict the range of the variables to µ ∈ [µmin, µmax] = [−0.1, 0.1], ω ∈ [ωmin, ωmax] = [−0.1, 0.1], ν ∈ [νmin, νmax] = [0.8, 1.5], and τ ∈ [τmin, τmax] = [0.8, 1.25]. The absolute values of the derivatives of the largest singular value S(µ, ω, ν, τ, λ, α) given in Eq. (61) with respect to (µ, ω, ν, τ) are bounded as follows:
|∂S/∂µ| < 0.32112 ,   (69)
|∂S/∂ω| < 2.63690 ,   (70)
|∂S/∂ν| < 2.28242 ,   (71)
|∂S/∂τ| < 2.98610 .   (72)

Proof. The Jacobian of our mapping Eq. (4) and Eq. (5) is defined as
H = ( H11  H12 ; H21  H22 ) = ( J11  J12 ; J21 − 2˜µ J11  J22 − 2˜µ J12 )   (73)
and has the largest singular value
S(µ, ω, ν, τ, λ, α) = ½ ( √( (H11 − H22)² + (H12 + H21)² ) + √( (H11 + H22)² + (H12 − H21)² ) )   (74)
according to the formula of Blinn [4].

We obtain
|∂S/∂H11| = | ½ ( (H11 − H22) / √( (H11 − H22)² + (H12 + H21)² ) + (H11 + H22) / √( (H11 + H22)² + (H21 − H12)² ) ) |
≤ ½ ( 1 / √( 1 + (H12 + H21)²/(H11 − H22)² ) + 1 / √( 1 + (H21 − H12)²/(H11 + H22)² ) ) ≤ 1   (75)
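Blinn's closed form (74) for the largest singular value of a 2×2 matrix is easy to validate against a standard SVD before it is used below. The following sketch (illustrative only) compares the formula with numpy.linalg.svd on random matrices.

```python
# Check of Blinn's formula (74) for the largest singular value of a 2x2 matrix.
import numpy as np

def blinn_largest_singular_value(H):
    h11, h12, h21, h22 = H[0, 0], H[0, 1], H[1, 0], H[1, 1]
    return 0.5 * (np.sqrt((h11 - h22)**2 + (h12 + h21)**2)
                  + np.sqrt((h11 + h22)**2 + (h12 - h21)**2))

rng = np.random.default_rng(0)
for _ in range(1000):
    H = rng.normal(size=(2, 2))
    assert np.isclose(blinn_largest_singular_value(H),
                      np.linalg.svd(H, compute_uv=False)[0])
print("formula (74) matches numpy.linalg.svd on 1000 random 2x2 matrices")
```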
and analogously
|∂S/∂H12| = | ½ ( (H12 + H21) / √( (H11 − H22)² + (H12 + H21)² ) + (H12 − H21) / √( (H11 + H22)² + (H21 − H12)² ) ) | ≤ 1 ,   (76)
|∂S/∂H21| = | ½ ( (H21 − H12) / √( (H11 + H22)² + (H21 − H12)² ) + (H12 + H21) / √( (H11 − H22)² + (H12 + H21)² ) ) | ≤ 1 ,   (77)
|∂S/∂H22| = | ½ ( (H11 + H22) / √( (H11 + H22)² + (H21 − H12)² ) − (H11 − H22) / √( (H11 − H22)² + (H12 + H21)² ) ) | ≤ 1 .   (78)
We have
∂S/∂µ = ∂S/∂H11 · ∂H11/∂µ + ∂S/∂H12 · ∂H12/∂µ + ∂S/∂H21 · ∂H21/∂µ + ∂S/∂H22 · ∂H22/∂µ   (79)
∂S/∂ω = ∂S/∂H11 · ∂H11/∂ω + ∂S/∂H12 · ∂H12/∂ω + ∂S/∂H21 · ∂H21/∂ω + ∂S/∂H22 · ∂H22/∂ω   (80)
∂S/∂ν = ∂S/∂H11 · ∂H11/∂ν + ∂S/∂H12 · ∂H12/∂ν + ∂S/∂H21 · ∂H21/∂ν + ∂S/∂H22 · ∂H22/∂ν   (81)
∂S/∂τ = ∂S/∂H11 · ∂H11/∂τ + ∂S/∂H12 · ∂H12/∂τ + ∂S/∂H21 · ∂H21/∂τ + ∂S/∂H22 · ∂H22/∂τ   (82)
from which follows, using the bounds from Lemma 5:
Derivative of the singular value w.r.t. µ:
|∂S/∂µ| ≤ |∂S/∂H11| |∂H11/∂µ| + |∂S/∂H12| |∂H12/∂µ| + |∂S/∂H21| |∂H21/∂µ| + |∂S/∂H22| |∂H22/∂µ|   (84)
≤ |∂H11/∂µ| + |∂H12/∂µ| + |∂H21/∂µ| + |∂H22/∂µ|
= |∂J11/∂µ| + |∂J12/∂µ| + |∂(J21 − 2˜µ J11)/∂µ| + |∂(J22 − 2˜µ J12)/∂µ|
≤ |∂J11/∂µ| + |∂J12/∂µ| + |∂J21/∂µ| + |∂J22/∂µ| + 2|˜µ| |∂J11/∂µ| + 2|J11| |∂˜µ/∂µ| + 2|˜µ| |∂J12/∂µ| + 2|J12| |∂˜µ/∂µ|
≤ 0.0031049101995398316 + 0.031242911235461816 + 0.02220441024325437 + 0.14983446469110305 +
2 · 0.104497 · 0.087653 + 2 · 0.104497² + 2 · 0.194035 · 0.087653 + 2 · 0.104497 · 0.194035 < 0.32112 ,
where we used the results from the lemmata 5, 6, 7, and 9.
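Re-adding the constants of this accumulation is a one-line check (a sketch that only re-multiplies the printed numbers):

```python
# Re-adding the printed constants in the bound on |dS/dmu|.
terms = [0.0031049101995398316, 0.031242911235461816,
         0.02220441024325437, 0.14983446469110305,
         2 * 0.104497 * 0.087653, 2 * 0.104497**2,
         2 * 0.194035 * 0.087653, 2 * 0.104497 * 0.194035]
print(sum(terms), sum(terms) < 0.32112)  # about 0.32111, True
```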
Derivative of the singular value w.r.t. ω:
|∂S/∂ω| ≤ |∂S/∂H11| |∂H11/∂ω| + |∂S/∂H12| |∂H12/∂ω| + |∂S/∂H21| |∂H21/∂ω| + |∂S/∂H22| |∂H22/∂ω|   (85)
≤ |∂H11/∂ω| + |∂H12/∂ω| + |∂H21/∂ω| + |∂H22/∂ω|
= |∂J11/∂ω| + |∂J12/∂ω| + |∂(J21 − 2˜µ J11)/∂ω| + |∂(J22 − 2˜µ J12)/∂ω|
≤ |∂J11/∂ω| + |∂J12/∂ω| + |∂J21/∂ω| + |∂J22/∂ω| + 2|˜µ| |∂J11/∂ω| + 2|J11| |∂˜µ/∂ω| + 2|˜µ| |∂J12/∂ω| + 2|J12| |∂˜µ/∂ω|   (86)
≤ 2.38392 + 2 · 1.055872374194189 · 0.087653 + 2 · 0.104497² + 2 · 0.031242911235461816 · 0.087653 + 2 · 0.194035 · 0.104497 < 2.63690 ,
where we used the results from the lemmata 5, 6, 7, and 9 and that ˜µ is symmetric in µ and ω.
Derivative of the singular value w.r.t. ν:
|∂S/∂ν| ≤ |∂S/∂H11| |∂H11/∂ν| + |∂S/∂H12| |∂H12/∂ν| + |∂S/∂H21| |∂H21/∂ν| + |∂S/∂H22| |∂H22/∂ν|   (87)
≤ |∂H11/∂ν| + |∂H12/∂ν| + |∂H21/∂ν| + |∂H22/∂ν|
= |∂J11/∂ν| + |∂J12/∂ν| + |∂(J21 − 2˜µ J11)/∂ν| + |∂(J22 − 2˜µ J12)/∂ν|
≤ |∂J11/∂ν| + |∂J12/∂ν| + |∂J21/∂ν| + |∂J22/∂ν| + 2|˜µ| |∂J11/∂ν| + 2|J11| |∂˜µ/∂ν| + 2|˜µ| |∂J12/∂ν| + 2|J12| |∂˜µ/∂ν|
≤ 2.19916 + 2 · 0.031242911235461816 · 0.087653 + 2 · 0.104497 · 0.194035 + 2 · 0.21232788238624354 · 0.087653 + 2 · 0.194035² < 2.28242 ,
where we used the results from the lemmata 5, 6, 7, and 9.
Derivative of the singular value w.r.t. τ:
|∂S/∂τ| ≤ |∂S/∂H11| |∂H11/∂τ| + |∂S/∂H12| |∂H12/∂τ| + |∂S/∂H21| |∂H21/∂τ| + |∂S/∂H22| |∂H22/∂τ|   (88)
≤ |∂H11/∂τ| + |∂H12/∂τ| + |∂H21/∂τ| + |∂H22/∂τ|
= |∂J11/∂τ| + |∂J12/∂τ| + |∂(J21 − 2˜µ J11)/∂τ| + |∂(J22 − 2˜µ J12)/∂τ|
≤ |∂J11/∂τ| + |∂J12/∂τ| + |∂J21/∂τ| + |∂J22/∂τ| + 2|˜µ| |∂J11/∂τ| + 2|J11| |∂˜µ/∂τ| + 2|˜µ| |∂J12/∂τ| + 2|J12| |∂˜µ/∂τ|   (89)
≤ 2.82643 + 2 · 0.03749149348255419 · 0.087653 + 2 · 0.104497 · 0.194035 + 2 · 0.2124377655377270 · 0.087653 + 2 · 0.194035² < 2.98610 ,
where we used the results from the lemmata 5, 6, 7, and 9 and that ˜µ is symmetric in ν and τ.
Lemma 11 (Mean Value Theorem Bound on Deviation from Largest Singular Value). We set α = α01 and λ = λ01 and restrict the range of the variables to µ ∈ [µmin, µmax] = [−0.1, 0.1], ω ∈ [ωmin, ωmax] = [−0.1, 0.1], ν ∈ [νmin, νmax] = [0.8, 1.5], and τ ∈ [τmin, τmax] = [0.8, 1.25]. The distance between the singular value S(µ, ω, ν, τ, λ01, α01) and the singular value S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) is bounded as follows:
|S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) − S(µ, ω, ν, τ, λ01, α01)| <
0.32112 |∆µ| + 2.63690 |∆ω| + 2.28242 |∆ν| + 2.98610 |∆τ| .
Proof. By the mean value theorem there is a t ∈ [0, 1] for which the difference of the singular values equals the inner product of the gradient of S at the intermediate point (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ) with (∆µ, ∆ω, ∆ν, ∆τ), from which immediately follows that
|S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) − S(µ, ω, ν, τ, λ01, α01)| ≤   (92)
|∂S/∂µ (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ, λ01, α01)| |∆µ| +
|∂S/∂ω (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ, λ01, α01)| |∆ω| +
|∂S/∂ν (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ, λ01, α01)| |∆ν| +
|∂S/∂τ (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ, λ01, α01)| |∆τ| .
We now apply Lemma 10, which gives bounds on the derivatives, and immediately obtain the statement of the lemma.

Lemma 12 (Largest Singular Value Smaller Than One). We set α = α01 and λ = λ01 and restrict the range of the variables to µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25].
The largest singular value of the Jacobian is smaller than 1:
S(µ, ω, ν, τ, λ01, α01) < 1 .   (93)
Therefore the mapping Eq. (4) and Eq. (5) is a contraction mapping.

Proof. We set ∆µ = 0.0068097371, ∆ω = 0.0008292885, ∆ν = 0.0009580840, and ∆τ = 0.0007323095. According to Lemma 11 we have
|S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) − S(µ, ω, ν, τ, λ01, α01)| <
0.32112 · 0.0068097371 + 2.63690 · 0.0008292885 + 2.28242 · 0.0009580840 + 2.98610 · 0.0007323095 < 0.008747 .   (94)
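The arithmetic of this step can be re-checked directly (a sketch that only re-multiplies the printed constants):

```python
# Re-checking the arithmetic in the proof of Lemma 12.
d_mu, d_om, d_nu, d_tau = 0.0068097371, 0.0008292885, 0.0009580840, 0.0007323095
dev = 0.32112 * d_mu + 2.63690 * d_om + 2.28242 * d_nu + 2.98610 * d_tau
print(dev, dev < 0.008747)                   # about 0.0087470, True
print(0.9912524171058772 + 0.008747 < 1.0)   # True, as used in Eq. (95) below
```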
For a grid with grid length ∆µ = 0.0068097371, ∆ω = 0.0008292885, ∆ν = 0.0009580840, and ∆τ = 0.0007323095, we evaluated the function Eq. (61) for the largest singular value in the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25]. We did this using a computer. According to Subsection A3.4.5, the precision of this evaluation, taking error propagation and the precision of the implemented functions into account, is better than 10⁻¹³. We performed the evaluation on different operating systems and different hardware architectures including CPUs and GPUs. In all cases the function Eq. (61) for the largest singular value of the Jacobian is bounded by 0.9912524171058772. We obtain from Eq. (94):
S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) < 0.9912524171058772 + 0.008747 < 1 .   (95)
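The fine-grid, error-controlled evaluation above cannot be reproduced in a few lines, but a coarse numerical spot check of the same quantity is easy. The sketch below is illustrative only: it assumes the closed forms of Eq. (4) and Eq. (5) with the commonly used values of λ01 and α01, and it approximates the Jacobian of (µ, ν) ↦ (˜µ, ˜ν) by central differences instead of the closed-form entries of Eq. (73). The largest singular value it finds should stay below the reported 0.9912524171058772, up to finite-difference error.

```python
# Coarse-grid spot check that the largest singular value of the Jacobian of
# (mu, nu) -> (mu_tilde, nu_tilde) stays below 1 on the contraction domain.
import itertools
import numpy as np
from scipy.special import erfc

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772  # common values of lambda_01, alpha_01
SQ2 = np.sqrt(2.0)

def mu_tilde(mu, om, nu, tau):
    m, s2 = mu * om, nu * tau
    s = np.sqrt(s2)
    return 0.5 * LAM * (2 * m - (m + ALPHA) * erfc(m / (SQ2 * s))
                        + np.sqrt(2 / np.pi) * s * np.exp(-m**2 / (2 * s2))
                        + ALPHA * np.exp(m + s2 / 2) * erfc((m + s2) / (SQ2 * s)))

def xi_tilde(mu, om, nu, tau):
    m, s2 = mu * om, nu * tau
    s = np.sqrt(s2)
    return 0.5 * LAM**2 * ((m**2 + s2) * (2 - erfc(m / (SQ2 * s)))
                           + ALPHA**2 * (-2 * np.exp(m + s2 / 2) * erfc((m + s2) / (SQ2 * s))
                                         + np.exp(2 * (m + s2)) * erfc((m + 2 * s2) / (SQ2 * s))
                                         + erfc(m / (SQ2 * s)))
                           + np.sqrt(2 / np.pi) * m * s * np.exp(-m**2 / (2 * s2)))

def nu_tilde(mu, om, nu, tau):
    return xi_tilde(mu, om, nu, tau) - mu_tilde(mu, om, nu, tau)**2

eps, worst = 1e-6, 0.0
for mu, om, nu, tau in itertools.product(np.linspace(-0.1, 0.1, 9), np.linspace(-0.1, 0.1, 9),
                                         np.linspace(0.8, 1.5, 9), np.linspace(0.8, 1.25, 9)):
    J = np.array([[(mu_tilde(mu + eps, om, nu, tau) - mu_tilde(mu - eps, om, nu, tau)) / (2 * eps),
                   (mu_tilde(mu, om, nu + eps, tau) - mu_tilde(mu, om, nu - eps, tau)) / (2 * eps)],
                  [(nu_tilde(mu + eps, om, nu, tau) - nu_tilde(mu - eps, om, nu, tau)) / (2 * eps),
                   (nu_tilde(mu, om, nu + eps, tau) - nu_tilde(mu, om, nu - eps, tau)) / (2 * eps)]])
    worst = max(worst, np.linalg.svd(J, compute_uv=False)[0])
print("largest singular value found on the coarse grid:", worst)  # below 1
```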
A3.4.2 Lemmata for proving Theorem 1 (part 2): Mapping within domain

We further have to investigate whether the mapping Eq. (4) and Eq. (5) maps into a predefined domain.

Lemma 13 (Mapping into the domain). The mapping Eq. (4) and Eq. (5) maps for α = α01 and λ = λ01 into the domain µ ∈ [−0.03106, 0.06773] and ν ∈ [0.80009, 1.48617] with ω ∈ [−0.1, 0.1] and τ ∈ [0.95, 1.1].

Proof. We use Lemma 8, which states that, for α = α01 and λ = λ01, the derivatives of the mapping Eq. (4) and Eq. (5) with respect to ν are positive and those with respect to µ have the sign of ω. Therefore, for a given sign of ω, the mappings are strictly monotonic and their maxima and minima are found at the borders. The minimum of ˜µ is obtained at µω = −0.01 and its maximum at µω = 0.01, with ν and τ at their minimal and maximal values, respectively. It follows that:
−0.03106 ≤ ˜µ(−0.1, 0.1, 0.8, 0.95, λ01, α01) ≤ ˜µ ≤ ˜µ(0.1, 0.1, 1.5, 1.1, λ01, α01) ≤ 0.06773 ,   (96)
and that ˜µ ∈ [−0.1, 0.1]. Similarly, the maximum and minimum of ˜ξ are obtained at the values mentioned above:
0.80467 ≤ ˜ξ(−0.1, 0.1, 0.8, 0.95, λ01, α01) ≤ ˜ξ ≤ ˜ξ(0.1, 0.1, 1.5, 1.1, λ01, α01) ≤ 1.48617 .   (97)
Since |˜ξ − ˜ν| = ˜µ² < 0.004597, we can conclude that 0.80009 < ˜ν < 1.48617 and the variance remains in [0.8, 1.5].
Corollary 14. The image g(Ω′) of the mapping g : (µ, ν) ↦ (˜µ, ˜ν) applied to the domain Ω′ = {(µ, ν) | −0.1 ≤ µ ≤ 0.1, 0.8 ≤ ν ≤ 1.5} is a subset of Ω′:
g(Ω′) ⊆ Ω′ ,   (98)
for all ω ∈ [−0.1, 0.1] and τ ∈ [0.95, 1.1].

Proof. Directly follows from Lemma 13.

A3.4.3 Lemmata for proving Theorem 2: The variance is contracting

Main Sub-Function. We consider the main sub-function of the derivative of the second moment, J22 (Eq. (54)):
J22 = ½ λ² τ ( −α² e^(µω+ντ/2) erfc( (µω+ντ) / (√2 √(ντ)) ) + 2 α² e^(2µω+2ντ) erfc( (µω+2ντ) / (√2 √(ντ)) ) − erfc( µω / (√2 √(ντ)) ) + 2 )   (99)
that depends on µω and ντ, therefore we set x = ντ and y = µω. Algebraic reformulations provide the formula in the following form:
∂˜ξ/∂ν = ½ λ² τ ( α² ( −e^(−y²/(2x)) ) ( e^((x+y)²/(2x)) erfc( (x+y) / (√2 √x) ) − 2 e^((2x+y)²/(2x)) erfc( (2x+y) / (√2 √x) ) ) − erfc( y / (√2 √x) ) + 2 )   (100)
For λ = λ01 and α = α01, we consider the domain −1 ≤ µ ≤ 1, −0.1 ≤ ω ≤ 0.1, 1.5 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25. For x and y we obtain: 0.8 · 1.5 = 1.2 ≤ x ≤ 20 = 1.25 · 16 and 0.1 · (−1) = −0.1 ≤ y ≤ 0.1 = 0.1 · 1. In the following we assume that we remain within this domain.
Figure A3: Left panel: Graphs of the main subfunction f(x, y) = e^((x+y)²/(2x)) erfc( (x+y) / (√2 √x) ) − 2 e^((2x+y)²/(2x)) erfc( (2x+y) / (√2 √x) ) treated in Lemma 15. The function is negative and monotonically increasing with x, independent of y. Right panel: Graphs of the main subfunction at minimal x = 1.2. The graph shows that the function f(1.2, y) is strictly monotonically decreasing in y.

Lemma 15 (Main subfunction). For 1.2 ≤ x ≤ 20 and −0.1 ≤ y ≤ 0.1, the function
f(x, y) = e^((x+y)²/(2x)) erfc( (x+y) / (√2 √x) ) − 2 e^((2x+y)²/(2x)) erfc( (2x+y) / (√2 √x) )   (101)
is smaller than zero, is strictly monotonically increasing in x, and strictly monotonically decreasing in y for the minimal x = 12/10 = 1.2.

Proof. See proof 44.
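The three claims of Lemma 15 can be probed numerically. Written with the scaled complementary error function erfcx(z) = e^(z²) erfc(z), the subfunction is simply f(x, y) = erfcx((x+y)/√(2x)) − 2 erfcx((2x+y)/√(2x)), because (x+y)²/(2x) and (2x+y)²/(2x) are exactly the squares of the erfc arguments; this avoids overflow of the explicit exponentials. The sketch below is an illustration only, not a replacement for the analytic proof referenced above.

```python
# Numerical probe of Lemma 15 via erfcx(z) = exp(z**2) * erfc(z).
import numpy as np
from scipy.special import erfcx

def f(x, y):
    s = np.sqrt(2.0 * x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

xs = np.linspace(1.2, 20.0, 400)
ys = np.linspace(-0.1, 0.1, 81)
X, Y = np.meshgrid(xs, ys, indexing="ij")
F = f(X, Y)
print("f < 0 everywhere:         ", bool((F < 0).all()))
print("f increasing in x:        ", bool((np.diff(F, axis=0) > 0).all()))
print("f(1.2, y) decreasing in y:", bool((np.diff(f(1.2, ys)) < 0).all()))
```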
Proof. See proof 44. The graph of the subfunction in the specified domain is displayed in Figure A3.

Theorem 16 (Contraction ν-mapping). The mapping of the variance ˜ν(µ, ω, ν, τ, λ, α) given in Eq. (5) is contracting for λ = λ01, α = α01 and the domain Ω+: −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 1.5 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25, that is,

| ∂/∂ν ˜ν(µ, ω, ν, τ, λ01, α01) | < 1 .   (102)

Proof. In this domain Ω+ we have the following three properties (see further below): ∂˜ξ/∂ν < 1, ˜µ > 0, and ∂˜µ/∂ν > 0, which are used to establish

| ∂/∂ν ˜ν(µ, ω, ν, τ, λ01, α01) | = | ∂˜ξ/∂ν − 2 ˜µ ∂˜µ/∂ν | < 1 .   (103)

• We first prove that ∂˜ξ/∂ν < 1 in an even larger domain that fully contains Ω+. According to Eq. (54), the derivative of the mapping Eq. (5) with respect to the variance ν is
∂/∂ν ˜ξ(µ, ω, ν, τ, λ01, α01) =   (104)
(λ01² τ / 2) ( α01² ( −e^{µω+ντ/2} erfc((µω+ντ)/√(2ντ)) + 2 e^{2(µω+ντ)} erfc((µω+2ντ)/√(2ντ)) ) − erfc(µω/√(2ντ)) + 2 ) .

For λ = λ01, α = α01, −1 ≤ µ ≤ 1, −0.1 ≤ ω ≤ 0.1, 1.5 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25, we first show that the derivative is positive and then upper bound it. According to Lemma 15, the expression

e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/√(2ντ)) − 2 e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/√(2ντ))   (105)

is negative. This expression, multiplied by the positive factors α01² and e^{−µ²ω²/(2ντ)}, is subtracted in the derivative Eq. (104), therefore the whole term is positive. The remaining term

2 − erfc(µω/√(2ντ))   (106)

is positive as well, since erfc is at most 2, so the derivative Eq. (104) is positive.
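Before the formal upper bound below, here is a grid sanity check (not a proof) of the claim that the derivative Eq. (104) is positive and smaller than 1 on the larger domain. The constants λ01 ≈ 1.0507 and α01 ≈ 1.6733 are the approximate fixed-point values of the paper; the grid steps are arbitrary choices.

/* Grid check (not a proof) that Eq. (104) lies in (0,1) on
   -1 <= mu <= 1, -0.1 <= omega <= 0.1, 1.5 <= nu <= 16, 0.8 <= tau <= 1.25. */
#include <stdio.h>
#include <math.h>

static double dxi_dnu(double mu, double w, double nu, double tau,
                      double lam, double alpha) {
    double m = mu * w, q = nu * tau, s = sqrt(2.0 * q);
    double bracket = alpha * alpha *
        (-exp(m + q / 2.0) * erfc((m + q) / s)
         + 2.0 * exp(2.0 * (m + q)) * erfc((m + 2.0 * q) / s))
        - erfc(m / s) + 2.0;
    return 0.5 * lam * lam * tau * bracket;      /* Eq. (104) */
}

int main(void) {
    const double lam = 1.0507, alpha = 1.6733;   /* approximate lambda_01, alpha_01 */
    double lo = INFINITY, hi = -INFINITY;
    for (double mu = -1.0; mu <= 1.0; mu += 0.05)
      for (double w = -0.1; w <= 0.1; w += 0.01)
        for (double nu = 1.5; nu <= 16.0; nu += 0.25)
          for (double tau = 0.8; tau <= 1.25; tau += 0.05) {
              double d = dxi_dnu(mu, w, nu, tau, lam, alpha);
              if (d < lo) lo = d;
              if (d > hi) hi = d;
          }
    printf("min = %g, max = %g over the grid (claim: both in (0,1))\n", lo, hi);
    return 0;
}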
(λ01² τ / 2) ( α01² ( −e^{µω+ντ/2} erfc((µω+ντ)/√(2ντ)) + 2 e^{2(µω+ντ)} erfc((µω+2ντ)/√(2ντ)) ) − erfc(µω/√(2ντ)) + 2 )   (107)
= (λ01² τ / 2) ( −α01² e^{−µ²ω²/(2ντ)} ( e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/√(2ντ)) − 2 e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/√(2ντ)) ) − erfc(µω/√(2ντ)) + 2 )
≤ (1.25 λ01² / 2) ( −α01² e^{−µ²ω²/(2ντ)} ( e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/√(2ντ)) − 2 e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/√(2ντ)) ) − erfc(µω/√(2ντ)) + 2 )
≤ (1.25 λ01² / 2) ( −α01² e^{−µ²ω²/(2ντ)} ( e^{(1.2+0.1)²/(2·1.2)} erfc((1.2+0.1)/(√2 √1.2)) − 2 e^{(2·1.2+0.1)²/(2·1.2)} erfc((2·1.2+0.1)/(√2 √1.2)) ) − erfc(µω/√(2ντ)) + 2 )
≤ (1.25 λ01² / 2) ( −α01² ( e^{1.3²/2.4} erfc(1.3/(√2 √1.2)) − 2 e^{2.5²/2.4} erfc(2.5/(√2 √1.2)) ) − erfc(µω/√(2ντ)) + 2 )
≤ (1.25 λ01² / 2) ( −α01² ( e^{1.3²/2.4} erfc(1.3/(√2 √1.2)) − 2 e^{2.5²/2.4} erfc(2.5/(√2 √1.2)) ) − erfc(0.1/(√2 √1.2)) + 2 )
< 1 .
We explain the chain of inequalities:

– First equality: brings the expression into a shape where we can apply Lemma 15 for the function Eq. (101).

– First inequality: The overall factor τ is bounded by 1.25.

– Second inequality: We apply Lemma 15. According to Lemma 15 the function Eq. (101) is negative. The largest contribution is to subtract the most negative value of the function Eq. (101), that is, the minimum of the function Eq. (101). According to Lemma 15 the function Eq. (101) is strictly monotonically increasing in x and strictly monotonically decreasing in y for x = 1.2. Therefore the function Eq. (101) has its minimum at minimal x = ντ = 1.5 · 0.8 = 1.2 and maximal y = µω = 1.0 · 0.1 = 0.1. We insert these values into the expression.

– Third inequality: We use for the whole expression the maximal factor e^{−µ²ω²/(2ντ)} < 1 by setting this factor to 1.
– Fourth inequality: erfc is strictly monotonically decreasing. Therefore we maximize its argument to obtain the least value which is subtracted. We use the minimal x = ντ = 1.5 · 0.8 = 1.2 and the maximal y = µω = 1.0 · 0.1 = 0.1.

– Last inequality: evaluation of the terms.

• We now show that ˜µ > 0. The expression ˜µ(µ, ω, ν, τ) (Eq. (4)) is strictly monotonically increasing in µω and ντ. Therefore, the minimal value in Ω+ is obtained at ˜µ(−0.1, 0.1, 1.5, 0.8) = 0.008293 > 0.

• Last we show that ∂˜µ/∂ν > 0. The expression ∂˜µ/∂ν (µ, ω, ν, τ) = J12(µ, ω, ν, τ) (Eq. (54)) can be reformulated as follows:
J12(µ, ω, ν, τ, λ, α) = (λτ/4) e^{−µ²ω²/(2ντ)} ( α e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/√(2ντ)) − (α − 1) √(2/(π ντ)) )   (108)

is larger than zero when the term in parentheses is larger than zero. This term obtains its minimal value at µω = 0.01 and ντ = 16 · 1.25, which can easily be shown using the Abramowitz bounds (Lemma 22), and it evaluates to 0.16; therefore J12 > 0 in Ω+.

A3.4.4 Lemmata for proving Theorem 3: The variance is expanding
Main Sub-Function From Below. We consider functions in µω and ντ, therefore we set x = ντ and y = µω. For λ = λ01 and α = α01, we consider the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.00875 ≤ ν ≤ 0.7, and 0.8 ≤ τ ≤ 1.25. For x and y we obtain: 0.8 · 0.00875 = 0.007 ≤ x ≤ 0.875 = 1.25 · 0.7 and 0.1 · (−0.1) = −0.01 ≤ y ≤ 0.01 = 0.1 · 0.1. In the following we assume the variables to be within this domain. In this domain, we consider the main sub-function of the derivative of the second moment in the next layer, J22 (Eq. (54)):

∂/∂ν ˜ξ = (λ² τ / 2) ( −α² e^{µω+ντ/2} erfc((µω+ντ)/√(2ντ)) + 2 α² e^{2(µω+ντ)} erfc((µω+2ντ)/√(2ντ)) − erfc(µω/√(2ντ)) + 2 )   (109)
that depends on µω and ντ, therefore we set x = ντ and y = µω. Algebraic reformulations provide the formula in the following form:

∂/∂ν ˜ξ = (λ² τ / 2) ( −α² e^{−y²/(2x)} ( e^{(x+y)²/(2x)} erfc((x+y)/√(2x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/√(2x)) ) − erfc(y/√(2x)) + 2 )   (110)

Lemma 17 (Main subfunction below). For 0.007 ≤ x ≤ 0.875 and −0.01 ≤ y ≤ 0.01, the function

e^{(x+y)²/(2x)} erfc((x+y)/√(2x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/√(2x))   (111)

is smaller than zero, is strictly monotonically increasing in x and strictly monotonically increasing in y for the minimal x = 0.007 = 0.00875 · 0.8, x = 0.56 = 0.7 · 0.8, x = 0.128 = 0.16 · 0.8, and x = 0.216 = 0.24 · 0.9 (lower bound of 0.9 on τ).
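As with Lemma 15, a brief grid evaluation can illustrate Lemma 17; it reuses the same subfunction as the earlier sketch, now at the four x values named in the lemma, and is again separate from the formal proof. The grid step in y is an arbitrary choice.

/* Grid illustration of Lemma 17 (not a proof): Eq. (111) at the listed x values,
   swept over -0.01 <= y <= 0.01. Values should be negative and increase with y. */
#include <stdio.h>
#include <math.h>

static double f(double x, double y) {                   /* subfunction of Eq. (111) */
    double s = sqrt(2.0) * sqrt(x);
    return exp((x + y) * (x + y) / (2.0 * x)) * erfc((x + y) / s)
         - 2.0 * exp((2.0 * x + y) * (2.0 * x + y) / (2.0 * x)) * erfc((2.0 * x + y) / s);
}

int main(void) {
    const double xs[] = { 0.007, 0.128, 0.216, 0.56 };
    for (int i = 0; i < 4; i++) {
        printf("x=%.3f:", xs[i]);
        for (double y = -0.01; y <= 0.0101; y += 0.005)
            printf("  f=%.5f", f(xs[i], y));
        printf("\n");
    }
    return 0;
}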
Proof. See proof 45.

Lemma 18 (Monotone Derivative). For λ = λ01, α = α01 and the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.00875 ≤ ν ≤ 0.7, and 0.8 ≤ τ ≤ 1.25, we are interested in the derivative of

τ ( e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/√(2ντ)) − 2 e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/√(2ντ)) ) .   (112)

The derivative of the equation above with respect to
• ν is larger than zero;
• τ is smaller than zero for maximal ν = 0.7, ν = 0.16, and ν = 0.24 (with 0.9 ≤ τ);
• y = µω is larger than zero for ντ = 0.00875 · 0.8 = 0.007, ντ = 0.7 · 0.8 = 0.56, ντ = 0.16 · 0.8 = 0.128, and ντ = 0.24 · 0.9 = 0.216.

Proof. See proof 46.

A3.4.5 Computer-assisted proof details for main Lemma 12 in Section A3.4.1.
Error Analysis. We investigate the error propagation for the singular value (Eq. (61)) if the function arguments µ, ω, ν, τ suffer from numerical imprecisions up to ε. To this end, we first derive error propagation rules based on the mean value theorem and then we apply these rules to the formula for the singular value.

Lemma 19 (Mean value theorem). For a real-valued function f which is differentiable in the closed interval [a, b], there exists a t ∈ [0, 1] with

f(a) − f(b) = ∇f(a + t(b − a)) · (a − b) .   (113)

It follows that for a computation with error ∆x, there exists a t ∈ [0, 1] with

|f(x + ∆x) − f(x)| ≤ ‖∇f(x + t∆x)‖ ‖∆x‖ .   (114)

Therefore the increase of the norm of the error after applying function f is bounded by the norm of the gradient ‖∇f(x + t∆x)‖. We now compute, for the functions that we consider, their gradient and its 2-norm:
• addition: f(x) = x1 + x2 and ∇f(x) = (1, 1), which gives ‖∇f(x)‖ = √2. We further know that

|f(x + ∆x) − f(x)| = |x1 + x2 + ∆x1 + ∆x2 − x1 − x2| ≤ |∆x1| + |∆x2| .   (115)

Adding n terms gives:

| Σ_{i=1}^{n} (x_i + ∆x_i) − Σ_{i=1}^{n} x_i | ≤ Σ_{i=1}^{n} |∆x_i| ≤ n |∆x_i|_max .   (116)

• subtraction: f(x) = x1 − x2 and ∇f(x) = (1, −1), which gives ‖∇f(x)‖ = √2. We further know that

|f(x + ∆x) − f(x)| = |x1 − x2 + ∆x1 − ∆x2 − x1 + x2| ≤ |∆x1| + |∆x2| .   (117)

Subtracting n terms gives:

| Σ_{i=1}^{n} ±(x_i + ∆x_i) − Σ_{i=1}^{n} ±x_i | ≤ Σ_{i=1}^{n} |∆x_i| ≤ n |∆x_i|_max .   (118)

• multiplication: f(x) = x1 x2 and ∇f(x) = (x2, x1), which gives ‖∇f(x)‖ = ‖x‖. We further know that

|f(x + ∆x) − f(x)| = |x1 ∆x2 + x2 ∆x1 + ∆x1 ∆x2| ≤ |x1| |∆x2| + |x2| |∆x1| + O(∆²) .
• division: f(x) = x1/x2 and ∇f(x) = (1/x2, −x1/x2²), which gives ‖∇f(x)‖ = ‖x‖/x2². We further know that

|f(x + ∆x) − f(x)| = | (x1 + ∆x1)/(x2 + ∆x2) − x1/x2 | = | ((x1 + ∆x1) x2 − x1 (x2 + ∆x2)) / ((x2 + ∆x2) x2) |   (121)
= | (∆x1 x2 − ∆x2 x1) / (x2² + ∆x2 x2) | = | ∆x1/x2 − ∆x2 x1/x2² | + O(∆²) .

• square root: f(x) = √x and f′(x) = 1/(2√x), which gives |f′(x)| = 1/(2√x).

• exponential function: f(x) = exp(x) and f′(x) = exp(x), which gives |f′(x)| = exp(x).

• error function: f(x) = erf(x) and f′(x) = (2/√π) exp(−x²), which gives |f′(x)| = (2/√π) exp(−x²).

• complementary error function: f(x) = erfc(x) and f′(x) = −(2/√π) exp(−x²), which gives |f′(x)| = (2/√π) exp(−x²).
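The following sketch shows how these first-order propagation rules can be mechanized (dropping the O(∆²) terms, as in the text). The struct and function names are mine, numerical constants such as √2 are treated as exact for simplicity, and the evaluation point is just one corner of the domain, chosen for illustration rather than as the worst case used in Lemma 20.

/* First-order error propagation for value/error pairs, applied to the
   expression (mu*omega + nu*tau)/(sqrt(2)*sqrt(nu*tau)). */
#include <stdio.h>
#include <math.h>

typedef struct { double val, err; } ev;                 /* value with error bound */

static ev ev_const(double v)         { ev r = { v, 0.0 }; return r; }
static ev ev_var(double v, double e) { ev r = { v, e };   return r; }
static ev ev_add(ev a, ev b) { ev r = { a.val + b.val, a.err + b.err }; return r; }
static ev ev_mul(ev a, ev b) { ev r = { a.val * b.val,
                                        fabs(a.val) * b.err + fabs(b.val) * a.err }; return r; }
static ev ev_div(ev a, ev b) { ev r = { a.val / b.val,
                                        a.err / fabs(b.val) + fabs(a.val) * b.err / (b.val * b.val) }; return r; }
static ev ev_sqrt(ev a)      { ev r = { sqrt(a.val), a.err / (2.0 * sqrt(a.val)) }; return r; }

int main(void) {
    const double eps = 1e-16;                 /* assumed precision of mu, omega, nu, tau */
    ev mu = ev_var(0.1, eps), om = ev_var(0.1, eps);
    ev nu = ev_var(1.5, eps), ta = ev_var(0.8, eps);
    ev nutau = ev_mul(nu, ta);
    ev arg = ev_div(ev_add(ev_mul(mu, om), nutau),
                    ev_mul(ev_const(sqrt(2.0)), ev_sqrt(nutau)));
    printf("value %.6f, propagated error bound %.3g (%.1f eps)\n",
           arg.val, arg.err, arg.err / eps);
    return 0;
}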
Lemma 20. If the values µ, ω, ν, τ have a precision of ε, the singular value (Eq. (61)) evaluated with the formulas given in Eq. (4) and Eq. (61) has a precision better than 292ε. This means that for a machine with a typical precision of 2^{−52} = 2.220446 · 10^{−16}, that is, a rounding error ε ≈ 10^{−16}, the evaluation of the singular value (Eq. (61)) with the formulas given in Eq. (4) and Eq. (61) has a precision better than 10^{−13} > 292ε.

Proof. We have the numerical precision ε of the parameters µ, ω, ν, τ, which we denote by ∆µ, ∆ω, ∆ν, ∆τ, together with our domain Ω. With the error propagation rules that we derived in Subsection A3.4.5, we can obtain bounds for the numerical errors on the following simple expressions:
∆(√2) ≤ ε/(2√2) ,   (122)
∆(√2 √(ντ)) ≤ √2 ∆(√(ντ)) + √(ντ) ∆(√2) ≤ √2 · 1.875ε + 1.5 · 1.25 · ε/(2√2) ≤ 3.5ε ,
∆( µω/(√2 √(ντ)) ) ≤ ( ∆(µω) √2 √(ντ) + |µω| ∆(√2 √(ντ)) ) / (2ντ) ≤ ( 0.2ε √2 √0.64 + 0.01 · 3.5ε ) / (2 · 0.64) ≤ 0.25ε ,
∆( (µω + ντ)/(√2 √(ντ)) ) ≤ ( ∆(µω + ντ) √2 √(ντ) + |µω + ντ| ∆(√2 √(ντ)) ) / (2ντ) ≤ ( 3.2ε √2 √0.64 + 1.885 · 3.5ε ) / (2 · 0.64) ≤ 8ε .

Using these bounds on the simple expressions, we can now calculate bounds on the numerical errors of compound expressions:
∆( erfc( µω/(√2 √(ντ)) ) ) ≤ (2/√π) e^{−( µω/(√2 √(ντ)) )²} ∆( µω/(√2 √(ντ)) ) ≤ (2/√π) · 0.25ε ≤ 0.3ε ,   (123)

∆( erfc( (µω + ντ)/(√2 √(ντ)) ) ) ≤ (2/√π) e^{−( (µω + ντ)/(√2 √(ντ)) )²} ∆( (µω + ντ)/(√2 √(ντ)) ) ≤ (2/√π) · 8ε ≤ 10ε ,   (124)

∆( e^{µω + ντ/2} ) ≤ e^{µω + ντ/2} ∆( µω + ντ/2 )   (125)
≤ 5.7ε .   (126)

Subsequently, we can use the above results to get bounds for the numerical errors on the Jacobian entries (Eq. (54)), applying the rules from Subsection A3.4.5 again:

∆(J11) = ∆( (λω/2) ( α e^{µω + ντ/2} erfc( (µω + ντ)/(√2 √(ντ)) ) − erfc( µω/(√2 √(ντ)) ) + 2 ) ) ≤ 6ε ,   (127)
and we obtain ∆(J12) ≤ 78ε, ∆(J21) ≤ 189ε, ∆(J22) ≤ 405ε, and ∆(˜µ) ≤ 52ε. We also have bounds on the absolute values of the Jij and of ˜µ (see Lemma 6, Lemma 7, and Lemma 9), therefore we can propagate the error also through the function that calculates the singular value (Eq. (61)):

∆( S(µ, ω, ν, τ, λ, α) ) =   (128)
∆( (1/2) ( √( (J11 + J22 − 2˜µ J12)² + (J21 − 2˜µ J11 − J12)² ) + √( (J11 − J22 + 2˜µ J12)² + (J12 + J21 − 2˜µ J11)² ) ) ) ≤ 292ε .
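For concreteness, here is a sketch of the singular-value composition used in Eq. (61)/(128): the largest singular value of the 2x2 Jacobian of the mapping (µ, ν) to (˜µ, ˜ν), expressed through J11, J12, J21, J22 and ˜µ. The function name and the numbers in main() are placeholders of mine, not values from the paper.

/* Largest singular value of the Jacobian, as composed in Eq. (128). */
#include <stdio.h>
#include <math.h>

static double singular_value(double j11, double j12, double j21, double j22,
                             double mu_t) {
    /* second row of the Jacobian of (mu~, nu~) uses nu~ = xi~ - mu~^2 */
    double a = j11, b = j12;
    double c = j21 - 2.0 * mu_t * j11, d = j22 - 2.0 * mu_t * j12;
    /* largest singular value of the 2x2 matrix [[a, b], [c, d]] */
    return 0.5 * (sqrt((a + d) * (a + d) + (c - b) * (c - b))
                + sqrt((a - d) * (a - d) + (c + b) * (c + b)));
}

int main(void) {
    /* placeholder entries, for illustration only */
    double s = singular_value(0.05, 0.08, 0.08, 0.78, 0.01);
    printf("largest singular value = %f\n", s);
    return 0;
}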
Precision of Implementations. We will show that our computations are correct up to 3 ulps. For our implementation in the GNU C Library and the hardware architectures that we used, the precision of all mathematical functions that we used is at least one ulp. The term "ulp" (acronym for "unit in the last place") was coined by W. Kahan in 1960. It is the highest precision (up to some factor smaller than 1) that can be achieved for the given hardware and floating point representation. Kahan defined ulp as [21]:

"Ulp(x) is the gap between the two finite floating-point numbers nearest x, even if x is one of them. (But ulp(NaN) is NaN.)"

Harrison defined ulp as [15]:

"an ulp in x is the distance between the two closest straddling floating point numbers a and b, i.e. those with a ≤ x ≤ b and a ≠ b assuming an unbounded exponent range."
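One common way to look at an ulp in code is the gap between x and the next representable double, which can be obtained with nextafter() from the C99 <math.h>; this matches the "gap" flavor of the definitions above, though, as noted next, the conventions in the literature differ slightly.

/* Illustration of "unit in the last place" via the gap to the next larger double. */
#include <stdio.h>
#include <math.h>

static double ulp(double x) {
    return nextafter(x, INFINITY) - x;       /* gap between x and its successor */
}

int main(void) {
    printf("ulp(1.0)    = %.3e\n", ulp(1.0));      /* 2^-52 for IEEE doubles */
    printf("ulp(0.0625) = %.3e\n", ulp(0.0625));
    printf("ulp(1e16)   = %.3e\n", ulp(1e16));     /* gaps grow with magnitude */
    return 0;
}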
In the literature we also find slightly different definitions [29]. According to [29], who refers to [11]:

"IEEE-754 mandates four standard rounding modes:"

"Round-to-nearest: r(x) is the floating-point value closest to x with the usual distance; if two floating-point values are equally close to x, then r(x) is the one whose least significant bit is equal to zero."

"IEEE-754 standardises 5 operations: addition (which we shall note ⊕ in order to distinguish it from the operation over the reals), subtraction (⊖), multiplication (⊗), division (⊘), and also square root."

"IEEE-754 specifies exact rounding [Goldberg, 1991, §1.5]: the result of a floating-point operation is the same as if the operation were performed on the real numbers with the given inputs, then rounded according to the rules in the preceding section. Thus, x ⊕ y is defined as r(x + y), with x and y taken as elements of R ∪ {−∞, +∞}; the same applies for the other operators."

Consequently, the IEEE-754 standard guarantees that addition, subtraction, multiplication, division, and square root are precise up to one ulp.
We have to consider transcendental functions. First there is the exponential function, and then the complementary error function erfc(x), which can be computed via the error function erf(x). Intel states [29]:

"With the Intel486 processor and Intel 387 math coprocessor, the worst-case, transcendental function error is typically 3 or 3.5 ulps, but is sometimes as large as 4.5 ulps."

According to https://www.mirbsd.org/htman/i386/man3/exp.htm and http://man.openbsd.org/OpenBSD-current/man3/exp.3:

"exp(x), log(x), expm1(x) and log1p(x) are accurate to within an ulp"

which is the same for FreeBSD https://www.freebsd.org/cgi/man.cgi?query=exp&sektion=3&apropos=0&manpath=freebsd:
"The values of exp(0), expm1(0), exp2(integer), and pow(integer, integer) are exact provided that they are representable. Otherwise the error in these functions is generally below one ulp."

The same holds for "FDLIBM" http://www.netlib.org/fdlibm/readme:

"FDLIBM is intended to provide a reasonably portable (see assumptions below), reference quality (below one ulp for major functions like sin,cos,exp,log) math library (libm.a)."

In http://www.gnu.org/software/libc/manual/html_node/Errors-in-Math-Functions.html we find that both exp and erf have an error of 1 ulp, while erfc has an error of up to 3 ulps depending on the architecture. For the most common architectures as used by us, however, the error of erfc is 1 ulp. We implemented the function in the programming language C. We rely on the GNU C Library [26]. According to the GNU C Library manual, which can be obtained from http://www.gnu.org/software/libc/manual/pdf/libc.pdf, the errors of the math functions exp, erf, and erfc are not larger than 3 ulps for all architectures [26, pp. 528]. For the architectures ix86, i386/i686/fpu, and m68k/fpmu68k/m680x0/fpu that we used, the precision is at least one ulp [26, pp. 528].
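As an aside, the erfc bounds used repeatedly above, the Abramowitz bounds of Lemma 22 plotted in Figure A4 below, can be checked numerically with a few lines of C. This is only an illustration of the two inequalities, not part of the error analysis; it assumes M_PI from <math.h> as provided by GNU libc, and the grid is arbitrary.

/* Illustration of the erfc bounds of Lemma 22 / Figure A4:
   2*exp(-x^2)/(sqrt(pi)*(sqrt(x^2+2)+x)) <= erfc(x)
                                          <= 2*exp(-x^2)/(sqrt(pi)*(sqrt(x^2+4/pi)+x)). */
#include <stdio.h>
#include <math.h>

int main(void) {
    for (double x = 0.0; x <= 1.5; x += 0.25) {
        double lo = 2.0 * exp(-x * x) / (sqrt(M_PI) * (sqrt(x * x + 2.0) + x));
        double hi = 2.0 * exp(-x * x) / (sqrt(M_PI) * (sqrt(x * x + 4.0 / M_PI) + x));
        printf("x=%.2f  lower=%.6f  erfc=%.6f  upper=%.6f\n", x, lo, erfc(x), hi);
    }
    return 0;
}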
According to the GNU C Library manual, which can be obtained from http://www.gnu.org/software/libc/manual/pdf/libc.pdf, the errors of the math functions exp, erf, and erfc are not larger than 3 ulps for all architectures [26, pp. 528]. For the architectures ix86, i386/i686/fpu, and m68k/fpmu68k/m680x0/fpu that we used, the error is one ulp [26, pp. 528].

Figure A4: Graphs of the upper and lower bounds on erfc. The lower bound 2e^{−x^2}/(√π(√(x^2+2)+x)) (red), the upper bound 2e^{−x^2}/(√π(√(x^2+4/π)+x)) (green), and the function erfc(x) (blue) as treated in Lemma 22.
# Intermediate Lemmata and Proofs

Since we focus on the fixed point (µ, ν) = (0, 1), we assume for our whole analysis that α = α01 and λ = λ01. Furthermore, we restrict the range of the variables µ ∈ [µmin, µmax] = [−0.1, 0.1], ω ∈ [ωmin, ωmax] = [−0.1, 0.1], ν ∈ [νmin, νmax] = [0.8, 1.5], and τ ∈ [τmin, τmax] = [0.8, 1.25].

For bounding different partial derivatives we need properties of different functions. We will bound the absolute value of a function by computing an upper bound on its maximum and a lower bound on its minimum. These bounds are computed by upper or lower bounding terms. The bounds get tighter if we can combine terms into a more complex function and bound this function. The following lemmata give some properties of functions that we will use in bounding complex functions.
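The lemmata below derive such bounds analytically. As a sanity check (not a proof), one can also evaluate a function on a grid over the restricted domain; the following C sketch (ours) does this for the function (µω+ντ)/(√2√(ντ)) treated later in Lemma 27, and its output should approximately reproduce the values t1 = 0.5568 and T1 = 0.9734 reported there. The grid resolution is an arbitrary choice.

/* Sketch: grid evaluation over the restricted domain
 * mu, omega in [-0.1, 0.1], nu in [0.8, 1.5], tau in [0.8, 1.25].
 * Example function: (mu*omega + nu*tau) / (sqrt(2)*sqrt(nu*tau)). */
#include <math.h>
#include <stdio.h>

static double h(double mu, double om, double nu, double tau) {
    return (mu * om + nu * tau) / (sqrt(2.0) * sqrt(nu * tau));
}

int main(void) {
    double vmin = INFINITY, vmax = -INFINITY;
    for (int i = 0; i <= 20; i++)           /* mu    = -0.1 + 0.01*i */
      for (int j = 0; j <= 20; j++)         /* omega = -0.1 + 0.01*j */
        for (int k = 0; k <= 70; k++)       /* nu    =  0.8 + 0.01*k */
          for (int l = 0; l <= 45; l++) {   /* tau   =  0.8 + 0.01*l */
              double v = h(-0.1 + 0.01 * i, -0.1 + 0.01 * j,
                            0.8 + 0.01 * k,  0.8 + 0.01 * l);
              if (v < vmin) vmin = v;
              if (v > vmax) vmax = v;
          }
    printf("grid minimum %.4f, grid maximum %.4f\n", vmin, vmax);
    return 0;
}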
Throughout this work, we use the error function erf(x) := 1/√π ∫_{−x}^{x} e^{−t^2} dt and the complementary error function erfc(x) = 1 − erf(x).

Lemma 21 (Basic functions). exp(x) is strictly monotonically increasing from 0 at −∞ to ∞ at ∞ and has positive curvature. According to its definition, erfc(x) is strictly monotonically decreasing from 2 at −∞ to 0 at ∞.

Next we introduce a bound on erfc:

Lemma 22 (Erfc bound from Abramowitz).

2e^{−x^2}/(√π(√(x^2+2)+x)) < erfc(x) ≤ 2e^{−x^2}/(√π(√(x^2+4/π)+x))   (129)

for x > 0.

Proof. The statement follows immediately from [1] (page 298, formula 7.1.13). These bounds are displayed in Figure A4.
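A quick numerical sanity check of the bounds of Lemma 22 (ours, not part of the proof) can be done in C:

/* Sketch: check that  2e^{-x^2}/(sqrt(pi)(sqrt(x^2+2)+x)) < erfc(x)
 *                     <= 2e^{-x^2}/(sqrt(pi)(sqrt(x^2+4/pi)+x))  for x > 0. */
#include <math.h>
#include <stdio.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double lower(double x) {
    return 2.0 * exp(-x * x) / (sqrt(M_PI) * (sqrt(x * x + 2.0) + x));
}
static double upper(double x) {
    return 2.0 * exp(-x * x) / (sqrt(M_PI) * (sqrt(x * x + 4.0 / M_PI) + x));
}

int main(void) {
    for (double x = 0.1; x <= 3.01; x += 0.1) {
        double f = erfc(x);
        printf("x=%4.1f  %.6e < %.6e <= %.6e  %s\n", x, lower(x), f, upper(x),
               (lower(x) < f && f <= upper(x)) ? "ok" : "violated");
    }
    return 0;
}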
Figure A5: Graphs of the functions e^{x^2} erfc(x) (left) and x e^{x^2} erfc(x) (right) treated in Lemma 23 and Lemma 24, respectively.

Lemma 23 (Function e^{x^2} erfc(x)). The function e^{x^2} erfc(x) is strictly monotonically decreasing for x > 0 and has positive curvature (positive 2nd order derivative), that is, the decrease slows down. A graph of the function is displayed in Figure A5.

Proof. The derivative of e^{x^2} erfc(x) is

∂e^{x^2} erfc(x)/∂x = 2e^{x^2} x erfc(x) − 2/√π .   (130)

Using Lemma 22, we get

∂e^{x^2} erfc(x)/∂x = 2e^{x^2} x erfc(x) − 2/√π ≤ 4x/(√π(√(x^2+4/π)+x)) − 2/√π < 4x/(√π · 2x) − 2/√π = 0 .   (131)

Thus e^{x^2} erfc(x) is strictly monotonically decreasing for x > 0.

The second order derivative of e^{x^2} erfc(x) is

∂^2 e^{x^2} erfc(x)/∂x^2 = 4e^{x^2} x^2 erfc(x) + 2e^{x^2} erfc(x) − 4x/√π .   (132)

Again using Lemma 22 (first inequality), we get
2e^{x^2}(2x^2+1) erfc(x) − 4x/√π ≥   (133)
4(2x^2+1)/(√π(√(x^2+2)+x)) − 4x/√π
= (4(2x^2+1) − 4x(√(x^2+2)+x))/(√π(√(x^2+2)+x))
= 4(x^2 − x√(x^2+2) + 1)/(√π(√(x^2+2)+x))
= 4(x^2 − √(x^4+2x^2) + 1)/(√π(√(x^2+2)+x))
> 4(x^2 − √(x^4+2x^2+1) + 1)/(√π(√(x^2+2)+x))
= 4(x^2 − (x^2+1) + 1)/(√π(√(x^2+2)+x)) = 0 .

For the last inequality we added 1 in the numerator under the square root which is subtracted, that is, making a larger negative term in the numerator.

Lemma 24 (Properties of x e^{x^2} erfc(x)). The function x e^{x^2} erfc(x) has the sign of x and is monotonically increasing to 1/√π.

Proof. The derivative of x e^{x^2} erfc(x) is

2e^{x^2} x^2 erfc(x) + e^{x^2} erfc(x) − 2x/√π .   (134)
This derivative is positive since

2e^{x^2} x^2 erfc(x) + e^{x^2} erfc(x) − 2x/√π =   (135)
e^{x^2}(2x^2+1) erfc(x) − 2x/√π ≥ 2(2x^2+1)/(√π(√(x^2+2)+x)) − 2x/√π
= 2((2x^2+1) − x(√(x^2+2)+x))/(√π(√(x^2+2)+x))
= 2(x^2 − x√(x^2+2) + 1)/(√π(√(x^2+2)+x))
= 2(x^2 − √(x^4+2x^2) + 1)/(√π(√(x^2+2)+x))
> 2(x^2 − √(x^4+2x^2+1) + 1)/(√π(√(x^2+2)+x))
= 2(x^2 − (x^2+1) + 1)/(√π(√(x^2+2)+x)) = 0 .

We apply Lemma 22 to x erfc(x)e^{x^2} and divide the terms of the lemma by x, which gives

2/(√π(√(2/x^2+1)+1)) < x erfc(x)e^{x^2} ≤ 2/(√π(√(4/(πx^2)+1)+1)) .   (136)

For lim_{x→∞} both the upper and the lower bound go to 1/√π.
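The monotonicity statements of Lemma 23 and Lemma 24 can be observed numerically with the following C sketch (ours; the tabulated grid is arbitrary): the first column decreases, the second increases towards 1/√π ≈ 0.5642.

/* Sketch: tabulate e^{x^2} erfc(x) (Lemma 23, decreasing) and
 * x e^{x^2} erfc(x) (Lemma 24, increasing towards 1/sqrt(pi)). */
#include <math.h>
#include <stdio.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void) {
    printf("limit 1/sqrt(pi) = %.6f\n", 1.0 / sqrt(M_PI));
    for (double x = 0.5; x <= 4.01; x += 0.5) {
        double g = exp(x * x) * erfc(x);
        printf("x=%3.1f  e^{x^2}erfc(x)=%.6f  x*e^{x^2}erfc(x)=%.6f\n", x, g, x * g);
    }
    return 0;
}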
Lemma 25 (Function µω). h11(µ, ω) = µω is monotonically increasing in µω. It has minimal value t11 = −0.01 and maximal value T11 = 0.01.

Proof. Obvious.

Lemma 26 (Function ντ). h22(ν, τ) = ντ is monotonically increasing in ντ and is positive. It has minimal value t22 = 0.64 and maximal value T22 = 1.875.

Proof. Obvious.

Lemma 27 (Function (µω+ντ)/(√2√(ντ))). The function (µω+ντ)/(√2√(ντ)) is monotonically increasing in both ντ and µω. It has minimal value t1 = 0.5568 and maximal value T1 = 0.9734.

Proof. The derivative of the function (µω+x)/(√2√x) with respect to x is

1/(√2√x) − (µω+x)/(2√2 x^{3/2}) = (2x − (µω+x))/(2√2 x^{3/2}) = (x − µω)/(2√2 x^{3/2}) > 0 ,   (137)
since x > 0.8 · 0.8 and µω < 0.1 · 0.1.

Lemma 28 (Function (µω+2ντ)/(√2√(ντ))). The function (µω+2ντ)/(√2√(ντ)) is monotonically increasing in both ντ and µω. It has minimal value t2 = 1.1225 and maximal value T2 = 1.9417.

Proof. The derivative of the function (µω+2x)/(√2√x) with respect to x is

2/(√2√x) − (µω+2x)/(2√2 x^{3/2}) = (4x − (µω+2x))/(2√2 x^{3/2}) = (2x − µω)/(2√2 x^{3/2}) > 0 .   (138)

Lemma 29 (Function µω/(√2√(ντ))). h3(µ, ω, ν, τ) = µω/(√2√(ντ)) is monotonically decreasing in ντ and monotonically increasing in µω. It has minimal value t3 = −0.0088388 and maximal value T3 = 0.0088388.

Proof. Obvious.
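Both functions of Lemma 28 and Lemma 29 are monotonic in µω and ντ, so their extreme values are attained at corners of the domain; the following C sketch (ours) evaluates them there.

/* Sketch: corner evaluation for Lemma 28 and Lemma 29. With mu*omega in
 * [-0.01, 0.01] and nu*tau in [0.64, 1.875], the printed values should match
 * t2 = 1.1225, T2 = 1.9417 and t3 = -0.0088388, T3 = 0.0088388. */
#include <math.h>
#include <stdio.h>

static double h2(double mw, double nt) { return (mw + 2.0 * nt) / (sqrt(2.0) * sqrt(nt)); }
static double h3(double mw, double nt) { return mw / (sqrt(2.0) * sqrt(nt)); }

int main(void) {
    printf("t2 = %.4f  T2 = %.4f\n", h2(-0.01, 0.64), h2(0.01, 1.875));
    printf("t3 = %.7f  T3 = %.7f\n", h3(-0.01, 0.64), h3(0.01, 0.64));
    return 0;
}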
Lemma 30 (Function µ^2ω^2/(2ντ)). The function µ^2ω^2/(2ντ) has a minimum at 0 for µ = 0 or ω = 0 and has a maximum for the smallest ντ and largest |µω|, and is larger or equal to zero. It has minimal value t4 = 0 and maximal value T4 = 0.000078126.

Proof. Obvious.

Lemma 31 (Function √(2/π)(α−1)/√(ντ)). The function √(2/π)(α−1)/√(ντ) > 0 and is decreasing in ντ.

Proof. Statements follow directly from the elementary functions square root and division.

Lemma 32 (Function 2 − erfc(µω/(√2√(ντ)))). The function 2 − erfc(µω/(√2√(ντ))) > 0 and is decreasing in ντ and increasing in µω.

Proof. Statements follow directly from Lemma 21 and the properties of erfc.

Lemma 33 (Function √(2/π)((α−1)µω/(ντ)^{3/2} − α/√(ντ))). For λ = λ01 and α = α01, the function √(2/π)((α−1)µω/(ντ)^{3/2} − α/√(ντ)) < 0 and is increasing in both ντ and µω.

Proof. We consider the function √(2/π)((α−1)µω/x^{3/2} − α/√x), which has the derivative with respect to x:

√(2/π)(α/(2x^{3/2}) − 3(α−1)µω/(2x^{5/2})) .   (139)

This derivative is larger than zero, since
√(2/π)(α/(2(ντ)^{3/2}) − 3(α−1)µω/(2(ντ)^{5/2})) ≥ √(2/π)(α − 3 · 0.1 · 0.1(α−1)/(0.8 · 0.8))/(2(ντ)^{3/2}) > 0 .   (140)

The last inequality follows from α − 3 · 0.1 · 0.1(α−1)/(0.8 · 0.8) > 0 for α = α01.

We next consider the function √(2/π)((α−1)x/(ντ)^{3/2} − α/√(ντ)), which has the derivative with respect to x:

√(2/π)(α−1)/(ντ)^{3/2} > 0 .   (141)

Therefore the function is increasing in both ντ and µω; since its value at the maximal ντ and µω is still negative, the function is smaller than zero.

Lemma 34 (Function √(2/π)((−1)(α−1)µ^2ω^2/(ντ)^{3/2} + (−α+αµω+1)/√(ντ) − α√(ντ))). The function √(2/π)((−1)(α−1)µ^2ω^2/(ντ)^{3/2} + (−α+αµω+1)/√(ντ) − α√(ντ)) < 0 and is decreasing in ντ and increasing in µω.

Proof. We define the function

√(2/π)((−1)(α−1)µ^2ω^2/x^{3/2} + (−α+αµω+1)/√x − α√x) .   (142)

Its derivative with respect to x = ντ is
1/(√(2π) x^{5/2}) · (3(α−1)µ^2ω^2 − x(−α + αµω + 1) − αx^2) .   (143)

The derivative of the term 3(α−1)µ^2ω^2 − x(−α + αµω + 1) − αx^2 with respect to x is −1 + α − µωα − 2αx < 0, since 2αx > 1.6. Therefore the term is maximized with the smallest value for x, which is x = ντ = 0.8 · 0.8. For µω we use for each term the value which gives maximal contribution. We obtain an upper bound for the term:

3(−0.1 · 0.1)^2(α01 − 1) − (0.8 · 0.8)^2 α01 − 0.8 · 0.8((−0.1 · 0.1)α01 − α01 + 1) = −0.243569 .   (144)

Therefore the derivative with respect to x = ντ is smaller than zero and the original function is decreasing in ντ.

We now consider the derivative with respect to x = µω. The derivative with respect to x of the function

√(2/π)((−1)(α−1)x^2/(ντ)^{3/2} + (−α + αx + 1)/√(ντ) − α√(ντ))   (145)

is
√(2/π)(αντ − 2(α−1)x)/(ντ)^{3/2} .   (146)

Since −2x(−1+α) + ντα > −2 · 0.01 · (−1+α01) + 0.8 · 0.8 · α01 > 1.0574 > 0, the derivative is larger than zero. Consequently, the original function is increasing in µω. The maximal value is obtained with the minimal ντ = 0.8 · 0.8 and the maximal µω = 0.1 · 0.1. The maximal value is

√(2/π)(0.1^2 · 0.1^2 · (−1)(α01−1)/(0.8 · 0.8)^{3/2} + (0.1 · 0.1 · α01 − α01 + 1)/√(0.8 · 0.8) − √(0.8 · 0.8) α01) = −1.72296 .   (147)

Therefore the original function is smaller than zero.
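The two numerical values used in this proof can be recomputed directly; the following C sketch (ours) does so with the SELU constant α01 = 1.6732632423543773 plugged in.

/* Sketch: recompute the upper bound -0.243569 of Eq. (144) and the maximal
 * value -1.72296 from the proof of Lemma 34, using alpha_01. */
#include <math.h>
#include <stdio.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void) {
    const double a  = 1.6732632423543773;   /* alpha_01           */
    const double mw = 0.1 * 0.1;            /* maximal |mu*omega| */
    const double nt = 0.8 * 0.8;            /* minimal nu*tau     */
    /* term of Eq. (144) at x = nu*tau = 0.64 with worst-case mu*omega */
    double e144 = 3.0 * mw * mw * (a - 1.0) - nt * nt * a
                  - nt * (-mw * a - a + 1.0);
    /* maximal value of the function of Lemma 34 (nu*tau = 0.64, mu*omega = 0.01) */
    double fmax = sqrt(2.0 / M_PI) * (-(a - 1.0) * mw * mw / pow(nt, 1.5)
                  + (a * mw - a + 1.0) / sqrt(nt) - a * sqrt(nt));
    printf("Eq. (144) bound: %.6f   (reported: -0.243569)\n", e144);
    printf("maximal value:   %.5f   (reported: -1.72296)\n", fmax);
    return 0;
}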
Lemma 35 (Function √(2/π)((α^2−1)µω/(ντ)^{3/2} − 3α^2/√(ντ))). For λ = λ01 and α = α01, the function √(2/π)((α^2−1)µω/(ντ)^{3/2} − 3α^2/√(ντ)) < 0 and is increasing in both ντ and µω.

Proof. The derivative of the function

√(2/π)((α^2−1)µω/x^{3/2} − 3α^2/√x)   (148)

with respect to x is

√(2/π)(3α^2/(2x^{3/2}) − 3(α^2−1)µω/(2x^{5/2})) = 3(α^2 x − (α^2−1)µω)/(√(2π) x^{5/2}) > 0 ,   (149)

since α^2 x − µω(−1+α^2) > α01^2 · 0.8 · 0.8 − 0.1 · 0.1 · (−1+α01^2) > 1.77387.

The derivative of the function

√(2/π)((α^2−1)x/(ντ)^{3/2} − 3α^2/√(ντ))   (150)

with respect to x is

√(2/π)(α^2−1)/(ντ)^{3/2} > 0 .   (151)
The maximal function value is obtained at the maximal ντ = 1.5 · 1.25 and the maximal µω = 0.1 · 0.1. The maximal value is √(2/π)(0.1 · 0.1(α01^2−1)/(1.5 · 1.25)^{3/2} − 3α01^2/√(1.5 · 1.25)) = −4.88869. Therefore the function is negative.

Lemma 36 (Function √(2/π)((α^2−1)µω/√(ντ) − 3α^2√(ντ))). The function √(2/π)((α^2−1)µω/√(ντ) − 3α^2√(ντ)) < 0 and is decreasing in ντ and increasing in µω.

Proof. The derivative of the function

√(2/π)((α^2−1)µω/√x − 3α^2√x)   (152)

with respect to x is

√(2/π)(−(α^2−1)µω/(2x^{3/2}) − 3α^2/(2√x)) = (−(α^2−1)µω − 3α^2 x)/(√(2π) x^{3/2}) < 0 ,   (153)

since −3α^2 x − µω(−1+α^2) < −3α01^2 · 0.8 · 0.8 + 0.1 · 0.1(−1+α01^2) < −5.35764.
The derivative of the function

√(2/π)((α^2−1)x/√(ντ) − 3α^2√(ντ))   (154)

with respect to x is

√(2/π)(α^2−1)/√(ντ) > 0 .   (155)

The maximal function value is obtained for the minimal ντ = 0.8 · 0.8 and the maximal µω = 0.1 · 0.1. The value is √(2/π)(0.1 · 0.1(α01^2−1)/√(0.8 · 0.8) − 3√(0.8 · 0.8) α01^2) = −5.34347. Thus, the function is negative.

Lemma 37 (Function ντ e^{(µω+ντ)^2/(2ντ)} erfc((µω+ντ)/(√2√(ντ)))). The function ντ e^{(µω+ντ)^2/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) > 0 and is increasing in ντ and decreasing in µω.

Proof. The derivative of the function

x e^{(µω+x)^2/(2x)} erfc((µω+x)/(√2√x))   (156)

with respect to x is

e^{(µω+x)^2/(2x)} (x(x+2) − µ^2ω^2) erfc((µω+x)/(√2√x))/(2x) + (µω − x)/(√(2π)√x) .   (157)
This derivative is larger than zero, since

e^{(µω+ντ)^2/(2ντ)} (ντ(ντ+2) − µ^2ω^2) erfc((µω+ντ)/(√2√(ντ)))/(2ντ) + (µω − ντ)/(√(2π)√(ντ))   (158)
≥ 0.4349 (ντ(ντ+2) − µ^2ω^2)/(2ντ) + (µω − ντ)/(√(2π)√(ντ))
> 0.5 (ντ(ντ+2) − µ^2ω^2)/(√(2π) ντ) + (µω − ντ)/(√(2π)√(ντ))
= (0.5(ντ(ντ+2) − µ^2ω^2) + √(ντ)(µω − ντ))/(√(2π) ντ)
= (−0.5µ^2ω^2 + µω√(ντ) + 0.5(ντ)^2 − ντ√(ντ) + ντ)/(√(2π) ντ)
= (−0.5µ^2ω^2 + µω√(ντ) + 0.25(ντ)^2 + (0.5ντ − √(ντ))^2)/(√(2π) ντ) > 0 .
We explain this chain of inequalities:

• The first inequality follows by applying Lemma 23, which says that e^{(µω+ντ)^2/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) is strictly monotonically decreasing in its argument (µω+ντ)/(√2√(ντ)). The minimal value, which is larger than 0.4349, is taken on at the maximal values ντ = 1.5 · 1.25 and µω = 0.1 · 0.1.
• The second inequality uses (1/2)√(2π) · 0.4349 = 0.545066 > 0.5.
• The equalities are just algebraic reformulations.
• The last inequality follows from −0.5µ^2ω^2 + µω√(ντ) + 0.25(ντ)^2 > 0.25(0.8 · 0.8)^2 − 0.5 · (0.1)^2(0.1)^2 − 0.1 · 0.1 · √(0.8 · 0.8) = 0.09435 > 0.

Therefore the function is increasing in ντ. Decreasing in µω follows from the decrease of e^{x^2} erfc(x). Positivity follows from the fact that erfc and the exponential function are positive and that ντ > 0.
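The constant 0.4349 used above, and the analogous constant 0.261772 that appears in the proof of the next lemma, are values of e^{y^2} erfc(y) at the largest arguments occurring in the domain (y = T1 = 0.9734 and y = T2 = 1.9417, see Lemma 27 and Lemma 28). The following C sketch (ours, for comparison only) tabulates them.

/* Sketch: evaluate e^{y^2} erfc(y) at the two largest arguments of the domain;
 * compare with the constants 0.4349 and 0.261772 used in the proofs. */
#include <math.h>
#include <stdio.h>

static double w(double y) { return exp(y * y) * erfc(y); }

int main(void) {
    printf("e^{y^2} erfc(y) at y = 0.9734: %.6f   (constant used: 0.4349)\n",   w(0.9734));
    printf("e^{y^2} erfc(y) at y = 1.9417: %.6f   (constant used: 0.261772)\n", w(1.9417));
    return 0;
}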
Lemma 38 (Function ντ e^{(µω+2ντ)^2/(2ντ)} erfc((µω+2ντ)/(√2√(ντ)))). The function ντ e^{(µω+2ντ)^2/(2ντ)} erfc((µω+2ντ)/(√2√(ντ))) > 0 and is increasing in ντ and decreasing in µω.

Proof. The derivative of the function

x e^{(µω+2x)^2/(2x)} erfc((µω+2x)/(√2√x))   (159)

with respect to x is

1/(2√π x) · (√π e^{(µω+2x)^2/(2x)} (2x(2x+1) − µ^2ω^2) erfc((µω+2x)/(√2√x)) + √x(µω − 2x)) .   (160)

We only have to determine the sign of √π e^{(µω+2x)^2/(2x)} (2x(2x+1) − µ^2ω^2) erfc((µω+2x)/(√2√x)) + √x(µω − 2x), since all other factors are obviously larger than zero.
This derivative is larger than zero, since

√π e^{(µω+2ντ)^2/(2ντ)} (2ντ(2ντ+1) − µ^2ω^2) erfc((µω+2ντ)/(√2√(ντ))) + √(ντ)(µω − 2ντ)   (161)
> 0.463979 (2ντ(2ντ+1) − µ^2ω^2) + √(ντ)(µω − 2ντ)
= −0.463979 µ^2ω^2 + µω√(ντ) + 1.85592(ντ)^2 + 0.927958 ντ − 2ντ√(ντ)
= µω(√(ντ) − 0.463979 µω) + 0.85592(ντ)^2 + (ντ − √(ντ))^2 − 0.0720421 ντ > 0 .

We explain this chain of inequalities:

• The first inequality follows by applying Lemma 23, which says that e^{(µω+2ντ)^2/(2ντ)} erfc((µω+2ντ)/(√2√(ντ))) is strictly monotonically decreasing in its argument (µω+2ντ)/(√2√(ντ)). The minimal value, which is larger than 0.261772, is taken on at the maximal values ντ = 1.5 · 1.25 and µω = 0.1 · 0.1, and 0.261772 · √π > 0.463979.
We explain this chain of inequalities:

• The first inequality follows by applying Lemma 23, which says that $e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\!\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)$ is strictly monotonically decreasing. Its minimal value, which is larger than 0.261772, is taken on at the maximal values ντ = 1.5 · 1.25 and µω = 0.1 · 0.1, and 0.261772√π > 0.463979.

• The equalities are just algebraic reformulations.

• The last inequality follows from
$$
\mu\omega\left(\sqrt{\nu\tau}-0.463979\,\mu\omega\right)+0.85592\,(\nu\tau)^2-0.0720421\,\nu\tau \;>\; 0.85592\cdot(0.8\cdot 0.8)^2-0.1\cdot 0.1\left(\sqrt{1.5\cdot 1.25}+0.1\cdot 0.1\cdot 0.463979\right)-0.0720421\cdot 1.5\cdot 1.25 \;>\; 0.201766\,.
$$

Therefore the function is increasing in ντ. Decreasing in µω follows from the decrease of e^{x²} erfc(x). Positivity follows from the fact that erfc and the exponential function are positive and that ντ > 0.
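The worst-case value 0.201766 in the last bullet can be reproduced directly; this small check (my own, not part of the proof) evaluates the three bounding terms at the stated corner values:

```python
import numpy as np

# Each term of mu*omega*(sqrt(nu*tau) - 0.463979*mu*omega) + 0.85592*(nu*tau)**2
# - 0.0720421*nu*tau is bounded separately at its worst corner of the domain.
worst = (0.85592 * (0.8 * 0.8) ** 2
         - 0.1 * 0.1 * (np.sqrt(1.5 * 1.25) + 0.1 * 0.1 * 0.463979)
         - 0.0720421 * 1.5 * 1.25)
print(worst)  # ~0.201766 > 0
```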
Lemma 39 (Bounds on the Derivatives). The following bounds on the absolute values of the derivatives of the Jacobian entries J11(µ, ω, ν, τ, λ, α), J12(µ, ω, ν, τ, λ, α), J21(µ, ω, ν, τ, λ, α), and J22(µ, ω, ν, τ, λ, α) with respect to µ, ω, ν, and τ hold:
$$
\begin{aligned}
\left|\tfrac{\partial J_{11}}{\partial \mu}\right| &< 0.0031049101995398316\,, &
\left|\tfrac{\partial J_{11}}{\partial \omega}\right| &< 1.055872374194189\,, \\
\left|\tfrac{\partial J_{11}}{\partial \nu}\right| &< 0.031242911235461816\,, &
\left|\tfrac{\partial J_{11}}{\partial \tau}\right| &< 0.03749149348255419\,, \\
\left|\tfrac{\partial J_{12}}{\partial \mu}\right| &< 0.031242911235461816\,, &
\left|\tfrac{\partial J_{12}}{\partial \omega}\right| &< 0.031242911235461816\,, \\
\left|\tfrac{\partial J_{12}}{\partial \nu}\right| &< 0.21232788238624354\,, &
\left|\tfrac{\partial J_{12}}{\partial \tau}\right| &< 0.2124377655377270\,, \\
\left|\tfrac{\partial J_{21}}{\partial \mu}\right| &< 0.02220441024325437\,, &
\left|\tfrac{\partial J_{21}}{\partial \omega}\right| &< 1.146955401845684\,, \\
\left|\tfrac{\partial J_{21}}{\partial \nu}\right| &< 0.14983446469110305\,, &
\left|\tfrac{\partial J_{21}}{\partial \tau}\right| &< 0.17980135762932363\,, \\
\left|\tfrac{\partial J_{22}}{\partial \mu}\right| &< 0.14983446469110305\,, &
\left|\tfrac{\partial J_{22}}{\partial \omega}\right| &< 0.14983446469110305\,, \\
\left|\tfrac{\partial J_{22}}{\partial \nu}\right| &< 1.805740052651535\,, &
\left|\tfrac{\partial J_{22}}{\partial \tau}\right| &< 2.396685907216327\,.
\end{aligned} \tag{162}
$$
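For later reference, the sixteen bounds of Eq. (162) can be tabulated programmatically; this snippet (my own bookkeeping, not from the paper) collects them and reports the largest one:

```python
# Bounds of Eq. (162) on |dJ_ij / dx| for x in {mu, omega, nu, tau}.
bounds = {
    ("J11", "mu"): 0.0031049101995398316, ("J11", "omega"): 1.055872374194189,
    ("J11", "nu"): 0.031242911235461816,  ("J11", "tau"): 0.03749149348255419,
    ("J12", "mu"): 0.031242911235461816,  ("J12", "omega"): 0.031242911235461816,
    ("J12", "nu"): 0.21232788238624354,   ("J12", "tau"): 0.2124377655377270,
    ("J21", "mu"): 0.02220441024325437,   ("J21", "omega"): 1.146955401845684,
    ("J21", "nu"): 0.14983446469110305,   ("J21", "tau"): 0.17980135762932363,
    ("J22", "mu"): 0.14983446469110305,   ("J22", "omega"): 0.14983446469110305,
    ("J22", "nu"): 1.805740052651535,     ("J22", "tau"): 2.396685907216327,
}
largest = max(bounds, key=bounds.get)
print(largest, bounds[largest])  # ('J22', 'tau') 2.396685907216327
```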
Proof. For each derivative we compute a lower and an upper bound and take the maximum of the absolute value. A lower bound is determined by minimizing the single terms of the function that represents the derivative. An upper bound is determined by maximizing the single terms of the function that represents the derivative. Terms can be combined into larger terms for which the maximum and the minimum must be known. We apply many previous lemmata which state properties of functions representing single or combined terms. The more terms are combined, the tighter the bounds can be made.

Next we go through all the derivatives, where we use Lemma 25, Lemma 26, Lemma 27, Lemma 28, Lemma 29, Lemma 30, Lemma 21, and Lemma 23 without citing them. Furthermore, we use the bounds on the simple expressions t11, t22, ..., and T4 as defined in the aforementioned lemmata.

∂J11/∂µ: We use Lemma 31 and consider the expression in brackets,
$$
\alpha\, e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)-\sqrt{\frac{2}{\pi}}\,\frac{\alpha-1}{\sqrt{\nu\tau}}\,.
$$
An upper bound on the maximum is
$$
\alpha_{01}e^{t_1^2}\operatorname{erfc}(t_1)-\sqrt{\frac{2}{\pi}}\,\frac{\alpha_{01}-1}{\sqrt{T_{22}}} \;=\; 0.591017\,. \tag{163}
$$
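The term-wise bounding strategy described above can be mechanized; the sketch below (my own, and the mapping of t1, T22 to the extreme values of (µω+ντ)/(√2√ντ) and ντ on the domain µω ∈ [−0.01, 0.01], ντ ∈ [0.64, 1.875] is my reading of the abbreviations) reproduces the upper bound just computed and the lower bound that follows in Eq. (164):

```python
import numpy as np
from scipy.special import erfcx  # erfcx(x) = exp(x**2) * erfc(x)

ALPHA_01 = 1.6732632423543773
MW = (-0.01, 0.01)   # range of mu*omega
NT = (0.64, 1.875)   # range of nu*tau

# Bound each term of the bracket separately over the box (term-wise bounding).
term1 = [ALPHA_01 * erfcx((mw + nt) / np.sqrt(2 * nt)) for mw in MW for nt in NT]
term2 = [-np.sqrt(2 / np.pi) * (ALPHA_01 - 1) / np.sqrt(nt) for nt in NT]

print(max(term1) + max(term2))  # upper bound on the maximum, compare with 0.591017 (Eq. 163)
print(min(term1) + min(term2))  # lower bound on the minimum, compare with 0.056318 (Eq. 164)
```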
A lower bound on the minimum is
$$
\alpha_{01}e^{T_1^2}\operatorname{erfc}(T_1)-\sqrt{\frac{2}{\pi}}\,\frac{\alpha_{01}-1}{\sqrt{t_{22}}} \;=\; 0.056318\,. \tag{164}
$$
Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{2}\lambda_{01}\,\omega_{\max}^2\left(\alpha_{01}e^{t_1^2}\operatorname{erfc}(t_1)-\sqrt{\frac{2}{\pi}}\,\frac{\alpha_{01}-1}{\sqrt{T_{22}}}\right) \;=\; 0.0031049101995398316\,. \tag{165}
$$

∂J11/∂ω: We use Lemma 32 and consider the expression in brackets,
$$
\sqrt{\frac{2}{\pi}}\,\frac{(\alpha-1)\mu\omega}{\sqrt{\nu\tau}}-\alpha(\mu\omega+1)\,e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right).
$$
An upper bound on the maximum is
$$
\sqrt{\frac{2}{\pi}}\,\frac{(\alpha_{01}-1)T_{11}}{\sqrt{t_{22}}}-\alpha_{01}(t_{11}+1)e^{T_1^2}\operatorname{erfc}(T_1) \;=\; -0.713808\,. \tag{166}
$$
A lower bound on the minimum is
$$
\sqrt{\frac{2}{\pi}}\,\frac{(\alpha_{01}-1)t_{11}}{\sqrt{t_{22}}}-\alpha_{01}(T_{11}+1)e^{t_1^2}\operatorname{erfc}(t_1) \;=\; -0.99987\,. \tag{167}
$$
This term is subtracted, and 2 − erfc(x) > 0, therefore we have to use the minimum and the maximum for the argument of erfc.
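The three constants just derived can be checked numerically with the same symbol assumptions as in the previous sketch (in particular, the prefactor λ01 ω²max/2 in Eq. (165) is taken exactly as displayed above):

```python
import numpy as np
from scipy.special import erfcx

A, L = 1.6732632423543773, 1.0507009873554805          # alpha_01, lambda_01
t1, T1 = 0.63 / np.sqrt(2 * 0.64), 1.885 / np.sqrt(2 * 1.875)
t11, T11, t22, T22 = -0.01, 0.01, 0.64, 1.875
c = np.sqrt(2 / np.pi)

print(0.5 * L * 0.1**2 * (A * erfcx(t1) - c * (A - 1) / np.sqrt(T22)))  # cf. Eq. (165): ~0.0031049
print(c * (A - 1) * T11 / np.sqrt(t22) - A * (t11 + 1) * erfcx(T1))     # cf. Eq. (166): ~-0.7138
print(c * (A - 1) * t11 / np.sqrt(t22) - A * (T11 + 1) * erfcx(t1))     # cf. Eq. (167): ~-0.9999
```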
Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{2}\lambda_{01}\left(-\left(\sqrt{\frac{2}{\pi}}\,\frac{(\alpha_{01}-1)t_{11}}{\sqrt{t_{22}}}-\alpha_{01}(T_{11}+1)e^{t_1^2}\operatorname{erfc}(t_1)\right)-\operatorname{erfc}(T_3)+2\right) \;=\; 1.055872374194189\,. \tag{168}
$$

∂J11/∂ν: We consider the term in brackets
$$
\alpha\, e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)+\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha-1)\mu\omega}{(\nu\tau)^{3/2}}-\frac{\alpha}{\sqrt{\nu\tau}}\right). \tag{169}
$$
We apply Lemma 33 for the first sub-term. An upper bound on the maximum is
$$
\alpha_{01}e^{t_1^2}\operatorname{erfc}(t_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha_{01}-1)T_{11}}{T_{22}^{3/2}}-\frac{\alpha_{01}}{\sqrt{T_{22}}}\right) \;=\; 0.0104167\,. \tag{170}
$$
A lower bound on the minimum is
$$
\alpha_{01}e^{T_1^2}\operatorname{erfc}(T_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha_{01}-1)t_{11}}{t_{22}^{3/2}}-\frac{\alpha_{01}}{\sqrt{t_{22}}}\right) \;=\; -0.95153\,. \tag{171}
$$
Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{4}\lambda_{01}\,\tau_{\max}\,\omega_{\max}\left|\,\alpha_{01}e^{T_1^2}\operatorname{erfc}(T_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha_{01}-1)t_{11}}{t_{22}^{3/2}}-\frac{\alpha_{01}}{\sqrt{t_{22}}}\right)\right| \;=\; 0.031242911235461816\,. \tag{172}
$$

∂J11/∂τ: We use the results of item ∂J11/∂ν, where the brackets are only differently scaled. Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{4}\lambda_{01}\,\nu_{\max}\,\omega_{\max}\left|\,\alpha_{01}e^{T_1^2}\operatorname{erfc}(T_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha_{01}-1)t_{11}}{t_{22}^{3/2}}-\frac{\alpha_{01}}{\sqrt{t_{22}}}\right)\right| \;=\; 0.03749149348255419\,. \tag{173}
$$

∂J12/∂µ: Since ∂J12/∂µ = ∂J11/∂ν, an upper bound on the maximal absolute value is
$$
\frac{1}{4}\lambda_{01}\,\tau_{\max}\,\omega_{\max}\left|\,\alpha_{01}e^{T_1^2}\operatorname{erfc}(T_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha_{01}-1)t_{11}}{t_{22}^{3/2}}-\frac{\alpha_{01}}{\sqrt{t_{22}}}\right)\right| \;=\; 0.031242911235461816\,. \tag{174}
$$
∂J12/∂ω: We use the results of item ∂J11/∂ν, where the brackets are only differently scaled. Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{4}\lambda_{01}\,\mu_{\max}\,\tau_{\max}\left|\,\alpha_{01}e^{T_1^2}\operatorname{erfc}(T_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha_{01}-1)t_{11}}{t_{22}^{3/2}}-\frac{\alpha_{01}}{\sqrt{t_{22}}}\right)\right| \;=\; 0.031242911235461816\,. \tag{175}
$$

∂J12/∂ν: For the second term in brackets, we see that
$$
\min\;\alpha_{01}\tau^2 e^{T_1^2}\operatorname{erfc}(T_1)=0.465793 \qquad\text{and}\qquad \max\;\alpha_{01}\tau^2 e^{t_1^2}\operatorname{erfc}(t_1)=1.53644\,.
$$
We now check different values for
$$
\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha-1)\mu^2\omega^2\sqrt{\tau}}{\nu^{5/2}}+\frac{\sqrt{\tau}\,(\alpha+\alpha\mu\omega-1)}{\nu^{3/2}}-\frac{\alpha\tau^{3/2}}{\sqrt{\nu}}\right), \tag{176}
$$
where we maximize or minimize all single terms.
A lower bound on the minimum of this expression is
$$
\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha_{01}-1)\mu_{\max}^2\omega_{\max}^2\sqrt{\tau_{\min}}}{\nu_{\min}^{5/2}}+\frac{\sqrt{\tau_{\min}}\,(\alpha_{01}+\alpha_{01}t_{11}-1)}{\nu_{\max}^{3/2}}-\frac{\alpha_{01}\tau_{\max}^{3/2}}{\sqrt{\nu_{\min}}}\right) \;=\; -1.83112\,. \tag{177}
$$
An upper bound on the maximum of this expression is
$$
\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha_{01}-1)\mu_{\min}^2\omega_{\min}^2\sqrt{\tau_{\max}}}{\nu_{\max}^{5/2}}+\frac{\sqrt{\tau_{\max}}\,(\alpha_{01}+\alpha_{01}T_{11}-1)}{\nu_{\min}^{3/2}}-\frac{\alpha_{01}\tau_{\min}^{3/2}}{\sqrt{\nu_{\max}}}\right) \;=\; 0.0802158\,. \tag{178}
$$
An upper bound on the maximum is
$$
\frac{1}{8}\lambda_{01}\left(\alpha_{01}\tau_{\max}^2 e^{t_1^2}\operatorname{erfc}(t_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha_{01}-1)\mu_{\min}^2\omega_{\min}^2}{\nu_{\max}^{5/2}\sqrt{\tau_{\max}}}+\frac{\sqrt{\tau_{\max}}\,(\alpha_{01}+\alpha_{01}T_{11}-1)}{\nu_{\min}^{3/2}}-\frac{\alpha_{01}\tau_{\min}^{3/2}}{\sqrt{\nu_{\max}}}\right)\right) \;=\; 0.212328\,. \tag{179}
$$
A lower bound on the minimum is
$$
\frac{1}{8}\lambda_{01}\left(\alpha_{01}\tau_{\min}^2 e^{T_1^2}\operatorname{erfc}(T_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha_{01}-1)\mu_{\max}^2\omega_{\max}^2}{\nu_{\min}^{5/2}\sqrt{\tau_{\min}}}+\frac{\sqrt{\tau_{\min}}\,(\alpha_{01}+\alpha_{01}t_{11}-1)}{\nu_{\max}^{3/2}}-\frac{\alpha_{01}\tau_{\max}^{3/2}}{\sqrt{\nu_{\min}}}\right)\right) \;=\; -0.179318\,. \tag{180}
$$
Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{8}\lambda_{01}\left(\alpha_{01}\tau_{\max}^2 e^{t_1^2}\operatorname{erfc}(t_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha_{01}-1)\mu_{\min}^2\omega_{\min}^2}{\nu_{\max}^{5/2}\sqrt{\tau_{\max}}}+\frac{\sqrt{\tau_{\max}}\,(\alpha_{01}+\alpha_{01}T_{11}-1)}{\nu_{\min}^{3/2}}-\frac{\alpha_{01}\tau_{\min}^{3/2}}{\sqrt{\nu_{\max}}}\right)\right) \;=\; 0.21232788238624354\,. \tag{181}
$$
∂J12/∂τ: We use Lemma 34 to obtain an upper bound on the maximum of the expression of the lemma:
$$
\sqrt{\frac{2}{\pi}}\left(\frac{0.1^2\cdot 0.1^2\,(-1)(\alpha_{01}-1)}{(0.8\cdot 0.8)^{3/2}}-\sqrt{0.8\cdot 0.8}\,\alpha_{01}+\frac{(0.1\cdot 0.1)\alpha_{01}-\alpha_{01}+1}{\sqrt{0.8\cdot 0.8}}\right) \;=\; -1.72296\,. \tag{182}
$$
We use Lemma 34 to obtain a lower bound on the minimum of the expression of the lemma:
$$
\sqrt{\frac{2}{\pi}}\left(\frac{0.1^2\cdot 0.1^2\,(-1)(\alpha_{01}-1)}{(1.5\cdot 1.25)^{3/2}}-\sqrt{1.5\cdot 1.25}\,\alpha_{01}+\frac{(-0.1\cdot 0.1)\alpha_{01}-\alpha_{01}+1}{\sqrt{1.5\cdot 1.25}}\right) \;=\; -2.23019\,. \tag{183}
$$
Next we apply Lemma 37 to the expression $\nu\tau\, e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\,\alpha\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)$. We use Lemma 37 to obtain an upper bound on the maximum of this expression:
$$
1.5\cdot 1.25\; e^{\frac{(1.5\cdot 1.25-0.1\cdot 0.1)^2}{2\cdot 1.5\cdot 1.25}}\,\alpha_{01}\operatorname{erfc}\!\left(\frac{1.5\cdot 1.25-0.1\cdot 0.1}{\sqrt{2}\sqrt{1.5\cdot 1.25}}\right) \;=\; 1.37381\,. \tag{184}
$$
We use Lemma 37 to obtain a lower bound on the minimum of this expression:
$$
0.8\cdot 0.8\; e^{\frac{(0.8\cdot 0.8+0.1\cdot 0.1)^2}{2\cdot 0.8\cdot 0.8}}\,\alpha_{01}\operatorname{erfc}\!\left(\frac{0.8\cdot 0.8+0.1\cdot 0.1}{\sqrt{2}\sqrt{0.8\cdot 0.8}}\right) \;=\; 0.620462\,. \tag{185}
$$
Next we apply Lemma 23 for the expression $2\,e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\,\alpha\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)$. An upper bound on this expression is
$$
2\, e^{\frac{(0.8\cdot 0.8-0.1\cdot 0.1)^2}{2\cdot 0.8\cdot 0.8}}\,\alpha_{01}\operatorname{erfc}\!\left(\frac{0.8\cdot 0.8-0.1\cdot 0.1}{\sqrt{2}\sqrt{0.8\cdot 0.8}}\right) \;=\; 1.96664\,. \tag{186}
$$
A lower bound on this expression is
$$
2\, e^{\frac{(1.5\cdot 1.25+0.1\cdot 0.1)^2}{2\cdot 1.5\cdot 1.25}}\,\alpha_{01}\operatorname{erfc}\!\left(\frac{1.5\cdot 1.25+0.1\cdot 0.1}{\sqrt{2}\sqrt{1.5\cdot 1.25}}\right) \;=\; 1.4556\,. \tag{187}
$$
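The six constants of Eqs. (182)-(187), and the sums formed from them in the next step, can be recomputed directly from the stated corner values; the grouping into helper functions below is my own, but the expressions are exactly the ones displayed above:

```python
import numpy as np
from scipy.special import erfc

a = 1.6732632423543773  # alpha_01
s2 = np.sqrt(2.0)

def lemma34_expr(nt, mw):
    # Expression of Eqs. (182)/(183) at nu*tau = nt and mu*omega = mw.
    return np.sqrt(2 / np.pi) * (mw**2 * (-1) * (a - 1) / nt**1.5
                                 - np.sqrt(nt) * a + (mw * a - a + 1) / np.sqrt(nt))

def nt_term(nt, mw):
    # Expression of Eqs. (184)/(185).
    return nt * np.exp((nt + mw)**2 / (2 * nt)) * a * erfc((nt + mw) / (s2 * np.sqrt(nt)))

def two_term(nt, mw):
    # Expression of Eqs. (186)/(187).
    return 2 * np.exp((nt + mw)**2 / (2 * nt)) * a * erfc((nt + mw) / (s2 * np.sqrt(nt)))

maxima = [lemma34_expr(0.64, 0.01), nt_term(1.875, -0.01), two_term(0.64, -0.01)]
minima = [lemma34_expr(1.875, -0.01), nt_term(0.64, 0.01), two_term(1.875, 0.01)]
print(maxima, sum(maxima))  # compare with -1.72296, 1.37381, 1.96664 and their sum 1.61749
print(minima, sum(minima))  # compare with -2.23019, 0.62046, 1.45560 and their sum -0.154133
```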
The sum of the minimal values of the terms is −2.23019 + 0.62046 + 1.45560 = −0.154133. The sum of the maximal values of the terms is −1.72295 + 1.37380 + 1.96664 = 1.61749. Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{8}\lambda_{01}e^{t_4}\left(\alpha_{01}T_{22}\,e^{\frac{(t_{11}+T_{22})^2}{2T_{22}}}\operatorname{erfc}\!\left(\frac{t_{11}+T_{22}}{\sqrt{2}\sqrt{T_{22}}}\right)+2\alpha_{01}e^{t_1^2}\operatorname{erfc}(t_1)+\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha_{01}-1)T_{11}^2}{t_{22}^{3/2}}-\sqrt{t_{22}}\,\alpha_{01}+\frac{\alpha_{01}T_{11}-\alpha_{01}+1}{\sqrt{t_{22}}}\right)\right) \;=\; 0.2124377655377270\,. \tag{188}
$$

∂J21/∂µ: An upper bound on the maximum is
$$
\lambda_{01}^2\,\omega_{\max}^2\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+2\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)-\operatorname{erfc}(T_3)+2\right) \;=\; 0.0222044\,. \tag{189}
$$
An upper bound on the absolute minimum is
$$
\lambda_{01}^2\,\omega_{\max}^2\left(\alpha_{01}^2\left(-e^{t_1^2}\right)\operatorname{erfc}(t_1)+2\alpha_{01}^2 e^{T_2^2}\operatorname{erfc}(T_2)-\operatorname{erfc}(t_3)+2\right) \;=\; 0.00894889\,. \tag{190}
$$
Thus, an upper bound on the maximal absolute value is
$$
\lambda_{01}^2\,\omega_{\max}^2\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+2\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)-\operatorname{erfc}(T_3)+2\right) \;=\; 0.02220441024325437\,. \tag{191}
$$

∂J21/∂ω: An upper bound on the maximum is
$$
\lambda_{01}^2\left(\alpha_{01}^2(2T_{11}+1)e^{t_2^2}\operatorname{erfc}(t_2)+2T_{11}\left(2-\operatorname{erfc}(T_3)\right)+\alpha_{01}^2(t_{11}+1)\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+\sqrt{\frac{2}{\pi}}\sqrt{T_{22}}\right) \;=\; 1.14696\,. \tag{192}
$$
A lower bound on the minimum is
$$
\lambda_{01}^2\left(\alpha_{01}^2(T_{11}+1)\left(-e^{t_1^2}\right)\operatorname{erfc}(t_1)+\alpha_{01}^2(2t_{11}+1)e^{T_2^2}\operatorname{erfc}(T_2)+2t_{11}\left(2-\operatorname{erfc}(T_3)\right)+\sqrt{\frac{2}{\pi}}\sqrt{t_{22}}\right) \;=\; -0.359403\,. \tag{193}
$$
Thus, an upper bound on the maximal absolute value is
$$
\lambda_{01}^2\left(\alpha_{01}^2(2T_{11}+1)e^{t_2^2}\operatorname{erfc}(t_2)+2T_{11}\left(2-\operatorname{erfc}(T_3)\right)+\alpha_{01}^2(t_{11}+1)\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+\sqrt{\frac{2}{\pi}}\sqrt{T_{22}}\right) \;=\; 1.146955401845684\,. \tag{194}
$$

∂J21/∂ν: An upper bound on the maximum is
$$
\frac{1}{2}\lambda_{01}^2\,\tau_{\max}\,\omega_{\max}\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+4\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{T_{22}}}\right) \;=\; 0.149834\,. \tag{195}
$$
A lower bound on the minimum is
$$
\frac{1}{2}\lambda_{01}^2\,\tau_{\max}\,\omega_{\max}\left(\alpha_{01}^2\left(-e^{t_1^2}\right)\operatorname{erfc}(t_1)+4\alpha_{01}^2 e^{T_2^2}\operatorname{erfc}(T_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{t_{22}}}\right) \;=\; -0.0351035\,. \tag{196}
$$
Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{2}\lambda_{01}^2\,\tau_{\max}\,\omega_{\max}\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+4\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{T_{22}}}\right) \;=\; 0.14983446469110305\,. \tag{197}
$$

∂J21/∂τ: An upper bound on the maximum is
$$
\frac{1}{2}\lambda_{01}^2\,\nu_{\max}\,\omega_{\max}\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+4\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{T_{22}}}\right) \;=\; 0.179801\,. \tag{198}
$$
A lower bound on the minimum is
$$
\frac{1}{2}\lambda_{01}^2\,\nu_{\max}\,\omega_{\max}\left(\alpha_{01}^2\left(-e^{t_1^2}\right)\operatorname{erfc}(t_1)+4\alpha_{01}^2 e^{T_2^2}\operatorname{erfc}(T_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{t_{22}}}\right) \;=\; -0.0421242\,. \tag{199}
$$
Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{2}\lambda_{01}^2\,\nu_{\max}\,\omega_{\max}\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+4\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{T_{22}}}\right) \;=\; 0.17980135762932363\,. \tag{200}
$$

∂J22/∂µ: We use the fact that ∂J22/∂µ = ∂J21/∂ν. Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{2}\lambda_{01}^2\,\tau_{\max}\,\omega_{\max}\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+4\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{T_{22}}}\right) \;=\; 0.14983446469110305\,. \tag{201}
$$
∂J22/∂ω: An upper bound on the maximum is
$$
\frac{1}{2}\lambda_{01}^2\,\mu_{\max}\,\tau_{\max}\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+4\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{T_{22}}}\right) \;=\; 0.149834\,. \tag{202}
$$
A lower bound on the minimum is
$$
\frac{1}{2}\lambda_{01}^2\,\mu_{\max}\,\tau_{\max}\left(\alpha_{01}^2\left(-e^{t_1^2}\right)\operatorname{erfc}(t_1)+4\alpha_{01}^2 e^{T_2^2}\operatorname{erfc}(T_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{t_{22}}}\right) \;=\; -0.0351035\,. \tag{203}
$$
Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{2}\lambda_{01}^2\,\mu_{\max}\,\tau_{\max}\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+4\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)-\frac{\sqrt{2}\left(\alpha_{01}^2-1\right)}{\sqrt{\pi}\sqrt{T_{22}}}\right) \;=\; 0.14983446469110305\,. \tag{204}
$$

∂J22/∂ν: We apply Lemma 35 to the expression $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)\mu\omega}{(\nu\tau)^{3/2}}-\frac{3\alpha^2}{\sqrt{\nu\tau}}\right)$.
Using Lemma 35, an upper bound on the maximum is
$$
\frac{1}{4}\lambda_{01}^2\,\tau_{\max}^2\left(\alpha_{01}^2\left(-e^{T_1^2}\right)\operatorname{erfc}(T_1)+8\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)+\sqrt{\frac{2}{\pi}}\left(\frac{\left(\alpha_{01}^2-1\right)T_{11}}{T_{22}^{3/2}}-\frac{3\alpha_{01}^2}{\sqrt{T_{22}}}\right)\right) \;=\; 1.19441\,. \tag{205}
$$
Using Lemma 35, a lower bound on the minimum is
$$
\frac{1}{4}\lambda_{01}^2\,\tau_{\max}^2\left(\alpha_{01}^2\left(-e^{t_1^2}\right)\operatorname{erfc}(t_1)+8\alpha_{01}^2 e^{T_2^2}\operatorname{erfc}(T_2)+\sqrt{\frac{2}{\pi}}\left(\frac{\left(\alpha_{01}^2-1\right)t_{11}}{t_{22}^{3/2}}-\frac{3\alpha_{01}^2}{\sqrt{t_{22}}}\right)\right) \;=\; -1.80574\,. \tag{206}
$$
Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{4}\lambda_{01}^2\,\tau_{\max}^2\left|\,\alpha_{01}^2\left(-e^{t_1^2}\right)\operatorname{erfc}(t_1)+8\alpha_{01}^2 e^{T_2^2}\operatorname{erfc}(T_2)+\sqrt{\frac{2}{\pi}}\left(\frac{\left(\alpha_{01}^2-1\right)t_{11}}{t_{22}^{3/2}}-\frac{3\alpha_{01}^2}{\sqrt{t_{22}}}\right)\right| \;=\; 1.805740052651535\,. \tag{207}
$$
∂J22/∂τ: We apply Lemma 36 to the expression $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)\mu\omega}{\sqrt{\nu\tau}}-3\alpha^2\sqrt{\nu\tau}\right)$. We apply Lemma 37 to the expression $\nu\tau\, e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)$. We apply Lemma 38 to the expression $\nu\tau\, e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\!\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)$. We combine the results of these lemmata to obtain an upper bound on the maximum:
$$
\frac{1}{4}\lambda_{01}^2\left(-\alpha_{01}^2\,t_{22}\,e^{\frac{(T_{11}+t_{22})^2}{2t_{22}}}\operatorname{erfc}\!\left(\frac{T_{11}+t_{22}}{\sqrt{2}\sqrt{t_{22}}}\right)+8\alpha_{01}^2\,T_{22}\,e^{\frac{(t_{11}+2T_{22})^2}{2T_{22}}}\operatorname{erfc}\!\left(\frac{t_{11}+2T_{22}}{\sqrt{2}\sqrt{T_{22}}}\right)-2\alpha_{01}^2 e^{T_1^2}\operatorname{erfc}(T_1)+4\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)+2\left(2-\operatorname{erfc}(T_3)\right)+\sqrt{\frac{2}{\pi}}\left(\frac{\left(\alpha_{01}^2-1\right)T_{11}}{\sqrt{T_{22}}}-3\alpha_{01}^2\sqrt{T_{22}}\right)\right) \;=\; 2.39669\,. \tag{208}
$$
We combine the results of these lemmata to obtain a lower bound on the minimum.
Thus, an upper bound on the maximal absolute value is
$$
\frac{1}{4}\lambda_{01}^2\left(-\alpha_{01}^2\,t_{22}\,e^{\frac{(T_{11}+t_{22})^2}{2t_{22}}}\operatorname{erfc}\!\left(\frac{T_{11}+t_{22}}{\sqrt{2}\sqrt{t_{22}}}\right)+8\alpha_{01}^2\,T_{22}\,e^{\frac{(t_{11}+2T_{22})^2}{2T_{22}}}\operatorname{erfc}\!\left(\frac{t_{11}+2T_{22}}{\sqrt{2}\sqrt{T_{22}}}\right)-2\alpha_{01}^2 e^{T_1^2}\operatorname{erfc}(T_1)+4\alpha_{01}^2 e^{t_2^2}\operatorname{erfc}(t_2)+2\left(2-\operatorname{erfc}(T_3)\right)+\sqrt{\frac{2}{\pi}}\left(\frac{\left(\alpha_{01}^2-1\right)T_{11}}{\sqrt{T_{22}}}-3\alpha_{01}^2\sqrt{T_{22}}\right)\right) \;=\; 2.396685907216327\,. \tag{210}
$$

Lemma 40 (Derivatives of the Mapping). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25]. The derivative ∂/∂µ ˜µ(µ, ω, ν, τ, λ, α) has the sign of ω.
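Lemma 40's sign claim can be probed numerically. The sketch below assumes the definition used throughout this part of the appendix, namely that the mean mapping is ˜µ(µ, ω, ν, τ) = E[selu(z)] for z ∼ N(µω, ντ) with the fixed-point constants λ01 and α01; it estimates ∂˜µ/∂µ by a central finite difference and checks that its sign agrees with the sign of ω. This is an illustration only, not part of the proof.

```python
import numpy as np

LAMBDA_01 = 1.0507009873554805
ALPHA_01 = 1.6732632423543773

def selu(x):
    return LAMBDA_01 * np.where(x > 0, x, ALPHA_01 * (np.exp(x) - 1.0))

def mu_tilde(mu, omega, nu, tau, n=1_000_000, seed=0):
    # Monte Carlo estimate of E[selu(z)] with z ~ N(mu*omega, nu*tau).
    rng = np.random.default_rng(seed)
    z = mu * omega + np.sqrt(nu * tau) * rng.standard_normal(n)
    return selu(z).mean()

rng = np.random.default_rng(1)
for _ in range(5):
    mu = rng.uniform(-0.1, 0.1); omega = rng.uniform(-0.1, 0.1)
    nu = rng.uniform(0.8, 1.5);  tau = rng.uniform(0.8, 1.25)
    eps = 0.01
    # Common random numbers (same seed in both calls) keep the difference noise-free.
    grad = (mu_tilde(mu + eps, omega, nu, tau) - mu_tilde(mu - eps, omega, nu, tau)) / (2 * eps)
    print(np.sign(grad) == np.sign(omega), grad, omega)
```

Because selu is strictly increasing, each per-sample difference has the sign of ε·ω, so the finite-difference estimate reproduces the sign claim exactly even for small |ω|.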