Dataset columns (name: type, min–max of value or string length):
doi: string, length 10–10
chunk-id: int64, 0–936
chunk: string, length 401–2.02k
id: string, length 12–14
title: string, length 8–162
summary: string, length 228–1.92k
source: string, length 31–31
authors: string, length 7–6.97k
categories: string, length 5–107
comment: string, length 4–398
journal_ref: string, length 8–194
primary_category: string, length 5–17
published: string, length 8–8
updated: string, length 8–8
references: list
1706.02515
192
The derivative ∂/∂µ ˜µ(µ, ω, ν, τ, λ, α) has the sign of ω. The derivative ∂/∂ν ˜µ(µ, ω, ν, τ, λ, α) is positive. The derivative ∂/∂µ ˜ξ(µ, ω, ν, τ, λ, α) has the sign of ω. The derivative ∂/∂ν ˜ξ(µ, ω, ν, τ, λ, α) is positive.

Proof.

∂/∂µ ˜µ(µ, ω, ν, τ, λ, α): 2 − erfc(x) > 0 according to Lemma 21, and e^{x²} erfc(x) is also larger than zero according to Lemma 23. Consequently, ∂/∂µ ˜µ(µ, ω, ν, τ, λ, α) has the sign of ω.
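The sign claim above can be spot-checked numerically. The sketch below is not part of the chunk; it assumes the closed form of ˜µ reproduced later in Eq. (227) and the rounded SELU constants λ01 ≈ 1.0507, α01 ≈ 1.67326, and compares a central finite difference in µ with the sign of ω at random points of the domain.

```python
# Spot check (assumption-based, not from the paper): sign of d mu_tilde / d mu equals sign(omega).
import math, random

LAM, ALPHA = 1.0507, 1.67326   # rounded lambda01, alpha01

def mu_tilde(mu, om, nu, tau):
    s = nu * tau
    z0 = mu * om / math.sqrt(2.0 * s)
    z1 = (mu * om + s) / math.sqrt(2.0 * s)
    return 0.5 * LAM * (-(ALPHA + mu * om) * math.erfc(z0)
                        + ALPHA * math.exp(mu * om + s / 2.0) * math.erfc(z1)
                        + math.sqrt(2.0 / math.pi) * math.sqrt(s) * math.exp(-(mu * om) ** 2 / (2.0 * s))
                        + 2.0 * mu * om)

random.seed(0)
for _ in range(1000):
    mu  = random.uniform(-0.1, 0.1)
    om  = random.choice([-1, 1]) * random.uniform(0.01, 0.1)
    nu  = random.uniform(0.8, 1.5)
    tau = random.uniform(0.8, 1.25)
    h = 1e-6
    dmu = (mu_tilde(mu + h, om, nu, tau) - mu_tilde(mu - h, om, nu, tau)) / (2 * h)
    assert dmu * om > 0, (mu, om, nu, tau, dmu)
print("d mu_tilde / d mu has the sign of omega at all sampled points")
```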
1706.02515#192
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
http://arxiv.org/pdf/1706.02515
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
cs.LG, stat.ML
9 pages (+ 93 pages appendix)
Advances in Neural Information Processing Systems 30 (NIPS 2017)
cs.LG
20170608
20170907
[ { "id": "1504.01716" }, { "id": "1511.07289" }, { "id": "1605.00982" }, { "id": "1607.06450" }, { "id": "1507.06947" } ]
1706.02515
193
∂/∂ν ˜µ(µ, ω, ν, τ, λ, α): Lemma 23 says e^{x²} erfc(x) is decreasing in x = (µω + ντ)/(√2√(ντ)). The second (negative) term is increasing in ντ, since it is proportional to minus one over the square root of ντ. We obtain a lower bound by setting x = (1.5·1.25 + 0.1·0.1)/(√2√(1.5·1.25)) for the e^{x²} erfc(x) term. The term in brackets is larger than

e^{(1.5·1.25+0.1·0.1)²/(2·1.5·1.25)} α01 erfc((1.5·1.25 + 0.1·0.1)/(√2√(1.5·1.25))) − √(2/(π·0.8·0.8)) (α01 − 1) = 0.056 .

Consequently, the function is larger than zero.

∂/∂µ ˜ξ(µ, ω, ν, τ, λ, α): We consider the sub-function

√(2/π)√(ντ) − α² ( e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) − e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2√(ντ))) ) .
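The value 0.056 can be recomputed directly. The sketch below is not from the paper; it only assumes the rounded constant α01 ≈ 1.67326 and the standard-library math.erfc.

```python
# Arithmetic check of the lower bound quoted above.
import math

alpha01 = 1.67326
arg = (1.5 * 1.25 + 0.1 * 0.1) / (math.sqrt(2.0) * math.sqrt(1.5 * 1.25))
bound = (alpha01 * math.exp(arg ** 2) * math.erfc(arg)
         - math.sqrt(2.0 / (math.pi * 0.8 * 0.8)) * (alpha01 - 1.0))
print(round(bound, 3))   # ~0.056, hence larger than zero
```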
1706.02515#193
1706.02515
194
We set x = ντ and y = µω and obtain

√(2x/π) − α² ( e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x)) ) .   (212)

The derivative of this sub-function with respect to y is

α² ( (2x+y) e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x)) − (x+y) e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) ) / x > 0 .   (213)

The inequality follows from Lemma 24, which states that z e^{z²} erfc(z) is monotonically increasing in z. Therefore the sub-function is increasing in y.

The derivative of this sub-function with respect to x is
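As a sanity check of this step (not part of the paper), the sketch below verifies Lemma 24's monotonicity of z e^{z²} erfc(z) and the claimed monotonicity in y of the sub-function of Eq. (212) as reconstructed above; α01 is the rounded SELU constant.

```python
# Monotonicity spot check for Lemma 24 and for the sub-function of Eq. (212).
import math

alpha01 = 1.67326

def g(z):                      # z * e^{z^2} * erfc(z), cf. Lemma 24
    return z * math.exp(z * z) * math.erfc(z)

def sub(x, y):                 # sub-function of Eq. (212), as reconstructed above
    z1 = (x + y) / math.sqrt(2 * x)
    z2 = (2 * x + y) / math.sqrt(2 * x)
    return (math.sqrt(2 * x / math.pi)
            - alpha01 ** 2 * (math.exp(z1 * z1) * math.erfc(z1)
                              - math.exp(z2 * z2) * math.erfc(z2)))

zs = [0.01 * k for k in range(1, 300)]
assert all(g(a) < g(b) for a, b in zip(zs, zs[1:]))          # Lemma 24: increasing

ys = [-0.01 + 0.0005 * k for k in range(41)]                 # y = mu*omega in [-0.01, 0.01]
vals = [sub(0.64, y) for y in ys]
assert all(a < b for a, b in zip(vals, vals[1:]))            # increasing in y at x = 0.64
print("monotone as claimed on the sampled points")
```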
1706.02515#194
1706.02515
196
( √π α² ( (4x²−y²) e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x)) − (x−y)(x+y) e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) ) − √2 x^{3/2} (α²−1) ) / (2√π x²)   (215)

> ( α² · 2√2√x ( (2x−y)(2x+y) / (2x+y + √((2x+y)²+4x)) − (x−y)(x+y) / (x+y + √((x+y)²+8x/π)) ) − √2 x^{3/2} (α²−1) ) / (2√π x²)
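The closed-form derivative in Eq. (215), as reconstructed here, can be cross-checked against a finite difference of the sub-function of Eq. (212). The sketch below is a spot check under those reconstruction assumptions, not the paper's computation.

```python
# Cross-check of the reconstructed Eq. (215) against a central finite difference.
import math

alpha01 = 1.67326

def sub(x, y):                 # sub-function of Eq. (212)
    z1 = (x + y) / math.sqrt(2 * x)
    z2 = (2 * x + y) / math.sqrt(2 * x)
    return (math.sqrt(2 * x / math.pi)
            - alpha01 ** 2 * (math.exp(z1 * z1) * math.erfc(z1)
                              - math.exp(z2 * z2) * math.erfc(z2)))

def dsub_dx(x, y, a=alpha01):  # reconstructed Eq. (215)
    z1 = (x + y) / math.sqrt(2 * x)
    z2 = (2 * x + y) / math.sqrt(2 * x)
    num = (math.sqrt(math.pi) * a ** 2 * ((4 * x * x - y * y) * math.exp(z2 * z2) * math.erfc(z2)
                                          - (x - y) * (x + y) * math.exp(z1 * z1) * math.erfc(z1))
           - math.sqrt(2.0) * x ** 1.5 * (a ** 2 - 1.0))
    return num / (2.0 * math.sqrt(math.pi) * x * x)

for x in (0.64, 1.0, 1.5):
    for y in (-0.01, 0.0, 0.01):
        h = 1e-6
        fd = (sub(x + h, y) - sub(x - h, y)) / (2 * h)
        assert abs(fd - dsub_dx(x, y)) < 1e-6 and dsub_dx(x, y) > 0
print("Eq. (215) matches a finite difference and is positive at the sampled points")
```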
1706.02515#196
1706.02515
197
> ( α² · 2√2√x ( (2x−y)(2x+y) / (2x+y + √((2x+y)² + 2(2x+y) + 1)) − (x−y)(x+y) / (x+y + √((x+y)² + 0.782·2(x+y) + 0.782²)) ) − √2 x^{3/2} (α²−1) ) / (2√π x²)

= ( α² · 2√2√x ( (2x−y)(2x+y) / (2(2x+y)+1) − (x−y)(x+y) / (2(x+y)+0.782) ) − √2 x^{3/2} (α²−1) ) / (2√π x²)

= ( 2α² ( (2x−y)(2x+y) / (2(2x+y)+1) − (x−y)(x+y) / (2(x+y)+0.782) ) − x(α²−1) ) / ( √2√π x^{3/2} )
1706.02515#197
1706.02515
198
= ( 8x³ + (12y + 2.68657)x² + (y(4y − 6.41452) − 1.40745)x + 1.22072y² ) / ( (2(2x+y)+1)(2(x+y)+0.782) √2√π x^{3/2} )

≥ ( 8x³ + (2.68657 − 12·0.01)x² + (0.01(−6.41452 − 4·0.01) − 1.40745)x + 1.22072·(0.0)² ) / ( (2(2x+y)+1)(2(x+y)+0.782) √2√π x^{3/2} )

= ( 8x² + 2.56657x − 1.472 ) / ( (2(2x+y)+1)(2(x+y)+0.782) √2√π √x )

= 8(x + 0.618374)(x − 0.297553) / ( (2(2x+y)+1)(2(x+y)+0.782) √2√π √x )
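The last step solves the quadratic 8x² + 2.56657x − 1.472. A minimal arithmetic check (not from the paper) of its roots and of its positivity at the smallest relevant x = ντ = 0.64:

```python
# Roots of 8x^2 + 2.56657x - 1.472 and positivity at x = 0.64.
import math

a, b, c = 8.0, 2.56657, -1.472
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted(((-b - disc) / (2 * a), (-b + disc) / (2 * a)))
print(roots)                                        # ~[-0.618374, 0.297553]
assert 8 * 0.64 ** 2 + 2.56657 * 0.64 - 1.472 > 0   # positive on the domain x >= 0.64
```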
1706.02515#198
1706.02515
200
– First inequality: We applied Lemma 22 two times.
– Equalities factor out √2√x and reformulate.
– Second inequality part 1: we applied

0 < 2y ⟹ (2x + y)² + 4x + 1 < (2x + y)² + 2(2x + y) + 1 = (2x + y + 1)² .   (216)

– Second inequality part 2: we show that for a = (1/20)(√((2048 + 169π)/π) − 13) the following holds: 8x/π − (a² + 2a(x + y)) > 0. We have ∂/∂x (8x/π − (a² + 2a(x + y))) = 8/π − 2a > 0 and ∂/∂y (8x/π − (a² + 2a(x + y))) = −2a < 0. Therefore the minimum is at the border for minimal x and maximal y:

8·0.64/π − ( ((1/20)(√((2048 + 169π)/π) − 13))² + 2·(1/20)(√((2048 + 169π)/π) − 13)(0.64 + 0.01) ) = 0 .   (217)

Thus

8x/π > a² + 2a(x + y)   (218)
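The constant a used above can be evaluated directly. The snippet below (not from the paper) checks a > 0.782 and that the expression in Eq. (217) is numerically zero.

```python
# Evaluation of a = (1/20)*(sqrt((2048 + 169*pi)/pi) - 13) and of Eq. (217).
import math

a = (math.sqrt((2048.0 + 169.0 * math.pi) / math.pi) - 13.0) / 20.0
print(a)                                                        # ~0.7826 > 0.782
print(8 * 0.64 / math.pi - (a * a + 2 * a * (0.64 + 0.01)))     # ~0 up to floating point
```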
1706.02515#200
1706.02515
201
for a = (1/20)(√((2048 + 169π)/π) − 13) > 0.782.
– Equalities only solve the square root and factor out the resulting terms (2(2x + y) + 1) and (2(x + y) + 0.782).
– We set α = α01 and multiplied out. Thereafter we also factored out x in the numerator. Finally a quadratic equation was solved.

The sub-function has its minimal value for minimal x = ντ = 0.8·0.8 = 0.64 and minimal y = µω = −0.1·0.1 = −0.01. We further minimize the function

µω e^{−µ²ω²/(2ντ)} (2 − erfc(µω/(√2√(ντ)))) ≥ −0.01 e^{−0.01²/(2·0.64)} (2 − erfc(0.01/(√2√0.64))) .   (219)

We compute the minimum of the term in brackets of ∂/∂µ ˜ξ(µ, ω, ν, τ, λ, α):
1706.02515#201
1706.02515
202
µω e^{−µ²ω²/(2ντ)} (2 − erfc(µω/(√2√(ντ)))) + α01² ( e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2√(ντ))) − e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) ) + √(2/π)√(ντ)   (220)

≥ −0.01 e^{−0.01²/(2·0.64)} (2 − erfc(0.01/(√2√0.64))) + α01² ( e^{(2·0.64−0.01)²/(2·0.64)} erfc((2·0.64−0.01)/(√2√0.64)) − e^{(0.64−0.01)²/(2·0.64)} erfc((0.64−0.01)/(√2√0.64)) ) + √(2/π)√0.64   (221)

= 0.0923765 .
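For orientation (not from the paper), a grid evaluation of the bracketed expression in the form reconstructed in Eq. (220), over 0.64 ≤ x = ντ ≤ 1.875 and |y| = |µω| ≤ 0.01; the helper `bracket` and the rounded α01 are assumptions of this sketch.

```python
# Grid minimum of the reconstructed bracket of d xi_tilde / d mu.
import math

alpha01 = 1.67326

def bracket(x, y):
    z0 = y / math.sqrt(2 * x)
    z1 = (x + y) / math.sqrt(2 * x)
    z2 = (2 * x + y) / math.sqrt(2 * x)
    return (y * (2 - math.erfc(z0))
            + alpha01 ** 2 * (math.exp(2 * (x + y)) * math.erfc(z2)
                              - math.exp(y + x / 2) * math.erfc(z1))
            + math.sqrt(2 / math.pi) * math.sqrt(x) * math.exp(-y * y / (2 * x)))

lo = min(bracket(0.64 + i * (1.875 - 0.64) / 100, -0.01 + j * 0.02 / 20)
         for i in range(101) for j in range(21))
print(lo)   # just above 0.092, consistent with the 0.0923765 lower bound derived above
```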
1706.02515#202
1706.02515
204
We obtain a chain of inequalities:

2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x)) − e^{(x+y)²/(2x)} erfc((x+y)/(√2√x))   (222)

> 2·2 / ( √π ( (2x+y)/(√2√x) + √( ((2x+y)/(√2√x))² + 2 ) ) ) − 2 / ( √π ( (x+y)/(√2√x) + √( ((x+y)/(√2√x))² + 4/π ) ) )

= (2√2√x / √π) ( 2 / ( 2x+y + √((2x+y)²+4x) ) − 1 / ( x+y + √((x+y)²+8x/π) ) )

> (2√2√x / √π) ( 2 / (2(2x+y)+1) − 1 / (2(x+y)+0.782) )

= (2√2√x) ( 2(2(x+y)+0.782) − (2(2x+y)+1) ) / ( √π (2(x+y)+0.782)(2(2x+y)+1) )

= (2√2√x) ( 2y + 0.782·2 − 1 ) / ( √π (2(x+y)+0.782)(2(2x+y)+1) ) > 0 .

We explain this chain of inequalities:
– First inequality: We applied Lemma 22 two times.
– Equalities factor out √2√x and reformulate.
– Second inequality part 1: we applied
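The first and last members of the chain in Eq. (222), as reconstructed above, can be compared numerically. The sketch below is a spot check under these reconstruction assumptions, not the paper's argument.

```python
# Endpoints of the chain of inequalities in Eq. (222) on the relevant domain.
import math

def chain_start(x, y):
    z1 = (x + y) / math.sqrt(2 * x)
    z2 = (2 * x + y) / math.sqrt(2 * x)
    return 2 * math.exp(z2 * z2) * math.erfc(z2) - math.exp(z1 * z1) * math.erfc(z1)

def chain_end(x, y):
    return (2 * math.sqrt(2) * math.sqrt(x) * (2 * y + 0.782 * 2 - 1)
            / (math.sqrt(math.pi) * (2 * (x + y) + 0.782) * (2 * (2 * x + y) + 1)))

for k in range(50):
    x = 0.64 + k * (1.875 - 0.64) / 49       # x = nu*tau
    for y in (-0.01, 0.0, 0.01):             # y = mu*omega
        assert chain_start(x, y) > chain_end(x, y) > 0
print("chain endpoints consistent and positive on the sampled grid")
```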
1706.02515#204
1706.02515
205
0 < 2y ⟹ (2x + y)² + 4x + 1 < (2x + y)² + 2(2x + y) + 1 = (2x + y + 1)² .   (223)

– Second inequality part 2: we show that for a = (1/20)(√((2048 + 169π)/π) − 13) the following holds: 8x/π − (a² + 2a(x + y)) > 0. We have ∂/∂x (8x/π − (a² + 2a(x + y))) = 8/π − 2a > 0 and ∂/∂y (8x/π − (a² + 2a(x + y))) = −2a < 0. Therefore the minimum is at the border for minimal x and maximal y:

8·0.64/π − ( ((1/20)(√((2048 + 169π)/π) − 13))² + 2·(1/20)(√((2048 + 169π)/π) − 13)(0.64 + 0.01) ) = 0 .   (224)

Thus

8x/π > a² + 2a(x + y) .   (225)
1706.02515#205
1706.02515
206
for a = (1/20)(√((2048 + 169π)/π) − 13) > 0.782.
– Equalities only solve the square root and factor out the resulting terms (2(2x + y) + 1) and (2(x + y) + 0.782).

We know that (2 − erfc(x)) > 0 according to Lemma 21. For the sub-term we derived

2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x)) − e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) > 0 .   (226)

Consequently, both terms in the brackets of ∂/∂ν ˜ξ(µ, ω, ν, τ, λ, α) are larger than zero. Therefore ∂/∂ν ˜ξ(µ, ω, ν, τ, λ, α) is larger than zero.

Lemma 41 (Mean at low variance). The mapping of the mean ˜µ (Eq. (4))

˜µ(µ, ω, ν, τ, λ, α) = (1/2) λ ( −(α + µω) erfc(µω/(√2√(ντ))) + α e^{µω+ντ/2} erfc((µω+ντ)/(√2√(ντ))) + √(2/π)√(ντ) e^{−µ²ω²/(2ντ)} + 2µω )   (227)
1706.02515#206
1706.02515
207
in the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, and 0.02 ≤ ντ ≤ 0.5 is bounded by

|˜µ(µ, ω, ν, τ, λ01, α01)| < 0.289324   (228)

and

lim_{ν→0} |˜µ(µ, ω, ν, τ, λ01, α01)| = λµω .   (229)

We can consider ˜µ with given µω as a function in x = ντ. We show the graph of this function at the maximal µω = 0.01 in the interval x ∈ [0, 1] in Figure A6.

Proof. Since ˜µ is strictly monotonically increasing in µω,

˜µ(µ, ω, ν, τ, λ, α) ≤ ˜µ(0.1, 0.1, ν, τ, λ, α)   (230)
1706.02515#207
1706.02515
208
= (1/2) λ01 ( −(α01 + 0.01) erfc(0.01/(√2√(ντ))) + α01 e^{0.01+ντ/2} erfc((ντ + 0.01)/(√2√(ντ))) + √(2/π)√(ντ) e^{−0.01²/(2ντ)} + 2·0.01 )

≤ (1/2) λ01 ( −(α01 + 0.01) erfc(0.01/(√2√0.02)) + α01 e^{0.01+0.02/2} erfc((0.02 + 0.01)/(√2√0.02)) + √(2/π)√0.5 e^{−0.01²/(2·0.5)} + 2·0.01 )

< 0.21857 ,

where we have used the monotonicity of the terms in ντ.
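For orientation (not from the paper), a brute-force search of |˜µ| over the low-variance domain of Lemma 41, using the form of Eq. (227) with rounded constants; ˜µ depends on µ, ω, ν, τ only through µω and ντ, which the sketch exploits. The actual maximum sits well below the loose bound 0.289324.

```python
# Brute-force |mu_tilde| for |mu*omega| <= 0.01 and 0.02 <= nu*tau <= 0.5.
import math

LAM, ALPHA = 1.0507, 1.67326

def mu_tilde_x(mw, x):         # mu_tilde as a function of mw = mu*omega and x = nu*tau
    z0 = mw / math.sqrt(2 * x)
    z1 = (mw + x) / math.sqrt(2 * x)
    return 0.5 * LAM * (-(ALPHA + mw) * math.erfc(z0)
                        + ALPHA * math.exp(mw + x / 2) * math.erfc(z1)
                        + math.sqrt(2 / math.pi) * math.sqrt(x) * math.exp(-mw * mw / (2 * x))
                        + 2 * mw)

worst = 0.0
for i in range(-10, 11):                     # mu*omega grid
    for j in range(201):                     # nu*tau grid
        worst = max(worst, abs(mu_tilde_x(i / 1000.0, 0.02 + j * 0.48 / 200)))
print(worst)                                 # ~0.067
assert worst < 0.289324                      # the (loose) bound of Lemma 41
```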
1706.02515#208
1706.02515
209
Figure A6: The graph of function ˜µ for low variances x = ντ for µω = 0.01, where x ∈ [0, 3], is displayed in yellow. Lower and upper bounds based on the Abramowitz bounds (Lemma 22) are displayed in green and blue, respectively.

Similarly, we can use the monotonicity of the terms in ντ to show that

˜µ(µ, ω, ν, τ, λ, α) ≥ ˜µ(0.1, −0.1, ν, τ, λ, α) ≥ −0.289324 ,   (231)

such that |˜µ| < 0.289324 at low variances. Furthermore, when (ντ) → 0, the arguments of the complementary error functions erfc and of the exponential function go to infinity, therefore these three terms converge to zero. Hence, the remaining term is only 2µω · (1/2)λ.
1706.02515#209
1706.02515
210
Lemma 42 (Bounds on derivatives of ˜µ in Ω⁻). The derivatives of the function ˜µ(µ, ω, ν, τ, λ01, α01) (Eq. (4)) with respect to µ, ω, ν, τ in the domain Ω⁻ = {µ, ω, ν, τ | −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.05 ≤ ν ≤ 0.24, 0.8 ≤ τ ≤ 1.25} can be bounded as follows:

|∂/∂µ ˜µ| ≤ 0.14,  |∂/∂ω ˜µ| ≤ 0.14,  |∂/∂ν ˜µ| ≤ 0.52,  |∂/∂τ ˜µ| ≤ 0.11 .   (232)

Proof. The expression

∂/∂µ ˜µ = (1/2) λ ω e^{−µ²ω²/(2ντ)} ( 2 e^{µ²ω²/(2ντ)} − e^{µ²ω²/(2ντ)} erfc(µω/(√2√(ντ))) + α e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) )   (233)
1706.02515#210
1706.02515
211
contains the terms e^{x²} erfc(x), which are monotonically decreasing in their arguments (Lemma 23). We can therefore obtain their minima and maxima at the minimal and maximal arguments. Since the first term has a negative sign in the expression, both terms reach their maximal value at µω = −0.01, ν = 0.05, and τ = 0.8.

|∂/∂µ ˜µ| ≤ (1/2) |λ01| |ω| ( 2 − e^{0.0353553²} erfc(0.0353553) + α01 e^{0.106066²} erfc(0.106066) ) < 0.133   (234)

Since ˜µ is symmetric in µ and ω, these bounds also hold for the derivative with respect to ω.

Figure A7: The graph of the function h(x) = ˜µ²(0.1, −0.1, x, 1, λ01, α01) is displayed. It has a local maximum at x = ντ ≈ 0.187342 and h(x) ≈ 0.00451457 in the domain x ∈ [0, 1].
1706.02515#211
1706.02515
212
We use the argumentation that the term with the error function is monotonically decreasing (Lemma 23) again for the expression

∂/∂ν ˜µ = (1/4) λ τ e^{−µ²ω²/(2ντ)} ( α e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) − (α − 1) √(2/π) (1/√(ντ)) )   (235)

|∂/∂ν ˜µ| ≤ (1/4) λ01 · 1.25 · |1.1072 − 2.68593| < 0.52 .

We have used that the term 1.1072 < α01 e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) < 1.49042 and the term 0.942286 < (α01 − 1)√(2/π)(1/√(ντ)) < 2.68593. Since ˜µ is symmetric in ν and τ, we only have to change the outermost factor τ to ν to obtain the estimate |∂/∂τ ˜µ| < 0.11.

Lemma 43 (Tight bound on ˜µ² in Ω⁻). The function ˜µ²(µ, ω, ν, τ, λ01, α01) (Eq. (4)) is bounded by

|˜µ²| < 0.005   (236)
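The derivative bounds of Lemma 42 can be probed by central finite differences on a coarse grid of Ω⁻. The sketch below is not the paper's proof; it assumes the closed form of ˜µ from Eq. (227) and the rounded constants λ01 ≈ 1.0507, α01 ≈ 1.67326.

```python
# Finite-difference probe of the four bounds of Lemma 42 on a coarse grid of Omega^-.
import math, itertools

LAM, ALPHA = 1.0507, 1.67326

def mu_tilde(mu, om, nu, tau):
    s = nu * tau
    z0 = mu * om / math.sqrt(2 * s)
    z1 = (mu * om + s) / math.sqrt(2 * s)
    return 0.5 * LAM * (-(ALPHA + mu * om) * math.erfc(z0)
                        + ALPHA * math.exp(mu * om + s / 2) * math.erfc(z1)
                        + math.sqrt(2 / math.pi) * math.sqrt(s) * math.exp(-(mu * om) ** 2 / (2 * s))
                        + 2 * mu * om)

def grad(mu, om, nu, tau, h=1e-6):
    p = (mu, om, nu, tau)
    out = []
    for i in range(4):
        hi = list(p); lo = list(p)
        hi[i] += h; lo[i] -= h
        out.append((mu_tilde(*hi) - mu_tilde(*lo)) / (2 * h))
    return out

bounds = (0.14, 0.14, 0.52, 0.11)
grid = itertools.product([-0.1, 0.0, 0.1], [-0.1, -0.01, 0.01, 0.1],
                         [0.05, 0.12, 0.24], [0.8, 1.0, 1.25])
for mu, om, nu, tau in grid:
    for d, b in zip(grad(mu, om, nu, tau), bounds):
        assert abs(d) < b
print("all finite-difference derivatives within the Lemma 42 bounds on the sampled grid")
```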
1706.02515#212
1706.02515
213
in the domain Ω⁻ = {µ, ω, ν, τ | −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.05 ≤ ν ≤ 0.24, 0.8 ≤ τ ≤ 1.25}. We visualize the function ˜µ² at its maximal µω = −0.01 and for x = ντ in the form h(x) = ˜µ²(0.1, −0.1, x, 1, λ01, α01) in Figure A7.

Proof. We use a similar strategy to the one we have used to show the bound on the singular value (Lemmata 10, 11, and 12), where we evaluated the function on a grid and used bounds on the derivatives together with the mean value theorem. Here we have

|˜µ²(µ, ω, ν, τ, λ01, α01) − ˜µ²(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01)| ≤   (238)
|∂/∂µ ˜µ²| |∆µ| + |∂/∂ω ˜µ²| |∆ω| + |∂/∂ν ˜µ²| |∆ν| + |∂/∂τ ˜µ²| |∆τ| .
1706.02515#213
1706.02515
214
We use Lemma 42 and Lemma 41 to obtain

|∂/∂µ ˜µ²| = 2 |˜µ| |∂/∂µ ˜µ| ≤ 2 · 0.289324 · 0.14 = 0.08101072   (239)
|∂/∂ω ˜µ²| = 2 |˜µ| |∂/∂ω ˜µ| ≤ 2 · 0.289324 · 0.14 = 0.08101072
|∂/∂ν ˜µ²| = 2 |˜µ| |∂/∂ν ˜µ| ≤ 2 · 0.289324 · 0.52 = 0.30089696
|∂/∂τ ˜µ²| = 2 |˜µ| |∂/∂τ ˜µ| ≤ 2 · 0.289324 · 0.11 = 0.06365128

We evaluated the function ˜µ² in a grid G of Ω⁻ with ∆µ = 0.001498041, ∆ω = 0.001498041, ∆ν = 0.0004033190, and ∆τ = 0.0019065994 using a computer and obtained the maximal value max_G (˜µ)² = 0.00451457, therefore the maximal value of ˜µ² is bounded by
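The grid evaluation can be reproduced coarsely (not with the fine grid spacings used in the paper) from the reconstructed ˜µ of Eq. (227) with rounded constants. The maximum comes out near the reported 0.00451457, and adding the mean-value-theorem slack built from Eq. (239) lands at about 0.005.

```python
# Coarse reproduction of the grid maximum of mu_tilde^2 on Omega^-.
import math

LAM, ALPHA = 1.0507, 1.67326

def mu_tilde(mu, om, nu, tau):
    s = nu * tau
    z0 = mu * om / math.sqrt(2 * s)
    z1 = (mu * om + s) / math.sqrt(2 * s)
    return 0.5 * LAM * (-(ALPHA + mu * om) * math.erfc(z0)
                        + ALPHA * math.exp(mu * om + s / 2) * math.erfc(z1)
                        + math.sqrt(2 / math.pi) * math.sqrt(s) * math.exp(-(mu * om) ** 2 / (2 * s))
                        + 2 * mu * om)

n = 20                                   # far coarser than the grid used in the paper
best = 0.0
for i in range(n + 1):
    mu = -0.1 + 0.2 * i / n
    for j in range(n + 1):
        om = -0.1 + 0.2 * j / n
        for k in range(n + 1):
            nu = 0.05 + (0.24 - 0.05) * k / n
            for m in range(n + 1):
                tau = 0.8 + (1.25 - 0.8) * m / n
                best = max(best, mu_tilde(mu, om, nu, tau) ** 2)
print(best)                              # ~0.0045, close to the reported 0.00451457
slack = (2 * 0.001498041 * 0.08101072
         + 0.0004033190 * 0.30089696 + 0.0019065994 * 0.06365128)
print(best + slack)                      # ~0.005, the bound derived in the next chunk
```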
1706.02515#214
1706.02515
215
$$\max_{(\mu,\omega,\nu,\tau) \in \Omega^-} \tilde{\mu}^2 \;\leqslant\; 0.00451457 + 0.001498041 \cdot 0.08101072 + 0.001498041 \cdot 0.08101072 + 0.0004033190 \cdot 0.30089696 + 0.0019065994 \cdot 0.06365128 \;<\; 0.005 .$$
Furthermore, we used error propagation to estimate the numerical error of the function evaluation. Using the error propagation rules derived in Subsection A3.4.5, we found that the numerical error is smaller than $10^{-13}$ in the worst case.

Lemma 44 (Main subfunction). For $1.2 \leqslant x \leqslant 20$ and $-0.1 \leqslant y \leqslant 0.1$, the function
$$e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \qquad (242)$$
is smaller than zero, is strictly monotonically increasing in $x$, and is strictly monotonically decreasing in $y$ for the minimal $x = 12/10 = 1.2$.
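The final bound can be re-checked with a few lines of arithmetic. The sketch below (Python, not part of the original proof) only redoes the last step; the grid maximum 0.00451457 and the derivative bounds are taken as given in the text, since recomputing them would require the definition of $\tilde{\mu}$ from earlier in the appendix.

```python
# Sketch: re-check the bound arithmetic above. The grid maximum and the derivative
# bounds on mu_tilde^2 are taken as given in the text.
grid_max = 0.00451457                                              # max of mu_tilde^2 on G
deltas   = [0.001498041, 0.001498041, 0.0004033190, 0.0019065994]  # grid spacings
d_bounds = [2 * 0.289324 * b for b in (0.14, 0.14, 0.52, 0.11)]    # derivative bounds

bound = grid_max + sum(d * b for d, b in zip(deltas, d_bounds))
print(bound)          # ~0.0049999997, i.e. just below 0.005
assert bound < 0.005
```
The margin is razor thin, which is consistent with how carefully the grid spacings were chosen in the text.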
Proof. We first consider the derivative of sub-function Eq. (101) with respect to $x$. The derivative of the function
$$e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \qquad (243)$$
with respect to $x$ is
$$\frac{\sqrt{\pi}\left( e^{\frac{(x+y)^2}{2x}} (x-y)(x+y) \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2\, e^{\frac{(2x+y)^2}{2x}} (2x-y)(2x+y) \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \right) + \sqrt{2}\sqrt{x}\,(3x-y)}{2 \sqrt{\pi}\, x^2} . \qquad (244)$$
We consider the numerator (divided by the positive factor $\sqrt{2}\sqrt{x}$, which does not change its sign):
$$\frac{\sqrt{\pi}\, e^{\frac{(x+y)^2}{2x}} (x-y)(x+y) \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)}{\sqrt{2}\sqrt{x}} \;-\; \frac{2\sqrt{\pi}\, e^{\frac{(2x+y)^2}{2x}} (2x-y)(2x+y) \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)}{\sqrt{2}\sqrt{x}} \;+\; (3x-y) . \qquad (245)$$
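As a cross-check of the reconstructed derivative (244), the short sketch below (Python with NumPy and SciPy, not part of the original proof) compares the closed form against a central finite difference of (243) at a few points of the domain and confirms its positive sign there.

```python
import numpy as np
from scipy.special import erfc

# Sketch: compare the closed-form derivative (244) of the function (243) with a
# central finite difference at a few points of 1.2 <= x <= 20, -0.1 <= y <= 0.1.
def f(x, y):
    return (np.exp((x + y)**2 / (2*x)) * erfc((x + y) / np.sqrt(2*x))
            - 2*np.exp((2*x + y)**2 / (2*x)) * erfc((2*x + y) / np.sqrt(2*x)))

def df_dx(x, y):                                   # closed form, Eq. (244)
    s = np.sqrt(2*x)
    num = (np.sqrt(np.pi) * (np.exp((x + y)**2/(2*x)) * (x - y)*(x + y) * erfc((x + y)/s)
           - 2*np.exp((2*x + y)**2/(2*x)) * (2*x - y)*(2*x + y) * erfc((2*x + y)/s))
           + np.sqrt(2)*np.sqrt(x)*(3*x - y))
    return num / (2*np.sqrt(np.pi)*x**2)

h = 1e-6
for x in (1.2, 2.0, 5.0, 20.0):
    for y in (-0.1, 0.0, 0.1):
        fd = (f(x + h, y) - f(x - h, y)) / (2*h)
        assert abs(fd - df_dx(x, y)) < 1e-5        # closed form matches the difference quotient
        assert df_dx(x, y) > 0                     # positive at the sampled points
```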
For bounding this value, we use the approximation
$$e^{z^2} \operatorname{erfc}(z) \;\approx\; \frac{2.911}{\sqrt{\pi}(2.911-1)z + \sqrt{\pi z^2 + 2.911^2}} \qquad (246)$$
from Ren and MacKenzie [30]. We start with an error analysis of this approximation. According to Ren and MacKenzie [30] (Figure 1), the approximation error is positive in the range $[0.7, 3.2]$. This range contains all possible arguments of erfc that we consider. Numerically we maximized and minimized the approximation error of the whole expression
$$E(x,y) \;=\; \frac{\sqrt{\pi}\, e^{\frac{(x+y)^2}{2x}} (x-y)(x+y) \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)}{\sqrt{2}\sqrt{x}} - \frac{2\sqrt{\pi}\, e^{\frac{(2x+y)^2}{2x}} (2x-y)(2x+y) \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)}{\sqrt{2}\sqrt{x}} - \left( \frac{\sqrt{\pi}\, 2.911\,(x-y)(x+y)}{\sqrt{2}\sqrt{x}\left( \sqrt{\pi}(2.911-1)\frac{x+y}{\sqrt{2}\sqrt{x}} + \sqrt{\pi\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)^2 + 2.911^2} \right)} - \frac{2\sqrt{\pi}\, 2.911\,(2x-y)(2x+y)}{\sqrt{2}\sqrt{x}\left( \sqrt{\pi}(2.911-1)\frac{2x+y}{\sqrt{2}\sqrt{x}} + \sqrt{\pi\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)^2 + 2.911^2} \right)} \right) . \qquad (247)$$
We numerically determined $0.0113556 < E(x,y) < 0.0169551$ for $1.2 \leqslant x \leqslant 20$ and $-0.1 \leqslant y \leqslant 0.1$. We used different numerical optimization techniques, like gradient-based constrained BFGS algorithms and non-gradient-based Nelder-Mead methods, with different starting points. Therefore our approximation is smaller than the function that we approximate. We subtract an additional safety gap of 0.0131259 from our approximation to ensure that the inequality via the approximation holds true. With this safety gap the inequality would hold true even for negative $x$, where the approximation error becomes negative and the safety gap would compensate. Of course, the safety gap of 0.0131259 is not necessary for our analysis, but it may help future investigations.
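The accuracy of the approximation (246) itself is easy to probe numerically. The sketch below (Python with SciPy's erfcx, where erfcx(z) = e^{z^2} erfc(z); not part of the original analysis) evaluates the pointwise error over the range of erfc arguments that occur here. The upper end 6.5 of the scan is our assumption, chosen large enough to cover $(2x+y)/(\sqrt{2}\sqrt{x})$ for $x \leqslant 20$.

```python
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z)

# Sketch: pointwise error of the Ren-MacKenzie approximation (246) over the
# arguments of erfc that occur in this proof.
def approx(z):
    return 2.911 / (np.sqrt(np.pi)*(2.911 - 1)*z + np.sqrt(np.pi*z**2 + 2.911**2))

z = np.linspace(0.7, 6.5, 10000)
err = erfcx(z) - approx(z)
print(err.min(), err.max())            # pointwise error stays around 1e-3 or smaller
assert np.abs(err).max() < 1e-2
```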
We have the following sequence of inequalities, using the approximation of Ren and MacKenzie [30]:
$$\frac{\sqrt{\pi}\, e^{\frac{(x+y)^2}{2x}} (x-y)(x+y) \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)}{\sqrt{2}\sqrt{x}} - \frac{2\sqrt{\pi}\, e^{\frac{(2x+y)^2}{2x}} (2x-y)(2x+y) \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)}{\sqrt{2}\sqrt{x}} + (3x-y) \;\geqslant \qquad (248)$$
$$(3x-y) + \frac{\sqrt{\pi}\, 2.911\,(x-y)(x+y)}{\sqrt{2}\sqrt{x}\left( \sqrt{\pi}(2.911-1)\frac{x+y}{\sqrt{2}\sqrt{x}} + \sqrt{\pi\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)^2 + 2.911^2} \right)} - \frac{2\sqrt{\pi}\, 2.911\,(2x-y)(2x+y)}{\sqrt{2}\sqrt{x}\left( \sqrt{\pi}(2.911-1)\frac{2x+y}{\sqrt{2}\sqrt{x}} + \sqrt{\pi\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)^2 + 2.911^2} \right)} - 0.0131259$$
$$=\; (3x-y) + \frac{2.911\,(x-y)(x+y)}{(2.911-1)(x+y) + \sqrt{(x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}} - \frac{2\cdot 2.911\,(2x-y)(2x+y)}{(2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}} - 0.0131259$$
$$\geqslant\; (3x-y) + \frac{2.911\,(x-y)(x+y)}{(2.911-1)(x+y) + \sqrt{\left( (x+y) + \frac{2.911^2}{\pi} \right)^2}} - \frac{2\cdot 2.911\,(2x-y)(2x+y)}{(2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}} - 0.0131259$$
$$=\; (3x-y) + \frac{2.911\,(x-y)(x+y)}{2.911\left( (x+y) + \frac{2.911}{\pi} \right)} - \frac{2\cdot 2.911\,(2x-y)(2x+y)}{(2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}} - 0.0131259$$
$$=\; \frac{\Big( (x-y)(x+y) + (3x-y-0.0131259)(x+y+0.9266) \Big)\left( (2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}} \right) - 5.822\,(2x-y)(x+y+0.9266)(2x+y)}{\left( (x+y) + \frac{2.911}{\pi} \right)\left( (2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}} \right)} \;>\; 0 . \qquad (249)$$
We explain this sequence of inequalities:

• First inequality: the approximation of Ren and MacKenzie [30] is applied, and then a safety gap is subtracted (which would not be necessary for the current analysis).

• Equalities: the factor $\sqrt{2}\sqrt{x}$ is factored out and canceled.

• Second inequality: a positive term is added in the first root to obtain a binomial form. The term containing the root is positive and the root is in the denominator, therefore the whole term becomes smaller.
• Equalities: solve for the term and factor out.

• Equalities: bringing all terms onto the common denominator $\left( (x+y) + \frac{2.911}{\pi} \right)\left( (2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}} \right)$.

• Equalities: multiplying out and expanding terms.

• Last inequality: $>0$ is proved in the following sequence of inequalities.

We look at the numerator of the last expression of Eq. (248), which we show to be positive in order to show $>0$ in Eq. (248). The numerator is
$$\Big( (x-y)(x+y) + (3x-y-0.0131259)(x+y+0.9266) \Big)\left( \sqrt{(2x+y)^2 + 5.39467x} + 3.822x + 1.911y \right) - 5.822\,(2x-y)(x+y+0.9266)(2x+y)$$
$$=\; -5.822\,(2x-y)(x+y+0.9266)(2x+y) + (3.822x + 1.911y)\Big( (x-y)(x+y) + (3x-y-0.0131259)(x+y+0.9266) \Big) + \Big( (x-y)(x+y) + (3x-y-0.0131259)(x+y+0.9266) \Big)\sqrt{(2x+y)^2 + 5.39467x}$$
$$=\; -8.0x^3 - 8.0x^2 y - 11.0044x^2 + 2.0xy^2 + 1.69548xy - 0.0464849x + 2.0y^3 + 3.59885y^2 - 0.0232425y \;+\; \left( 4x^2 + 2xy + 2.76667x - 2y^2 - 0.939726y - 0.0121625 \right)\sqrt{(2x+y)^2 + 5.39467x} . \qquad (250)$$
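A quick numerical spot check of the claim that this numerator is positive on the whole domain can be done directly from the expanded form (250). The sketch below (Python with NumPy, not part of the original proof) evaluates it on a grid; the coefficients are copied from the expression above.

```python
import numpy as np

# Sketch: evaluate the numerator Eq. (250) on a grid of 1.2 <= x <= 20, -0.1 <= y <= 0.1.
x = np.linspace(1.2, 20.0, 400)[:, None]
y = np.linspace(-0.1, 0.1, 81)[None, :]

root  = np.sqrt((2*x + y)**2 + 5.39467*x)
coeff = 4*x**2 + 2*x*y + 2.76667*x - 2*y**2 - 0.939726*y - 0.0121625
rest  = (-8.0*x**3 - 8.0*x**2*y - 11.0044*x**2 + 2.0*x*y**2 + 1.69548*x*y
         - 0.0464849*x + 2.0*y**3 + 3.59885*y**2 - 0.0232425*y)
numerator = coeff*root + rest
print(numerator.min())        # strictly positive, though only by a small margin
assert (numerator > 0).all()
```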
The factor in front of the root is positive. If the term that does not contain the root were positive, then the whole expression would be positive and we would have proved that the numerator is positive. Therefore we consider the case that the term that does not contain the root is negative. The term that contains the root must then be larger than the other term in absolute value:
$$-\left( -8.0x^3 - 8.0x^2 y - 11.0044x^2 + 2.0xy^2 + 1.69548xy - 0.0464849x + 2.0y^3 + 3.59885y^2 - 0.0232425y \right) \;< \qquad (251)$$
$$\left( 4x^2 + 2xy + 2.76667x - 2y^2 - 0.939726y - 0.0121625 \right)\sqrt{(2x+y)^2 + 5.39467x} .$$
Therefore the square of the root term has to be larger than the square of the other term to show $>0$ in Eq. (248). Thus, we have the inequality:
$$\left( -8.0x^3 - 8.0x^2 y - 11.0044x^2 + 2.0xy^2 + 1.69548xy - 0.0464849x + 2.0y^3 + 3.59885y^2 - 0.0232425y \right)^2 \;<\; \left( 4x^2 + 2xy + 2.76667x - 2y^2 - 0.939726y - 0.0121625 \right)^2 \left( (2x+y)^2 + 5.39467x \right) . \qquad (252)$$
This is equivalent to
$$0 \;<\; \left( 4x^2 + 2xy + 2.76667x - 2y^2 - 0.939726y - 0.0121625 \right)^2 \left( (2x+y)^2 + 5.39467x \right) \;- \qquad (253)$$
$$\left( -8.0x^3 - 8.0x^2 y - 11.0044x^2 + 2.0xy^2 + 1.69548xy - 0.0464849x + 2.0y^3 + 3.59885y^2 - 0.0232425y \right)^2 .$$
The right-hand side of Eq. (253) expands to
$$-1.2227x^5 + 40.1006x^4 y + 27.7897x^4 + 41.0176x^3 y^2 + 64.5799x^3 y + 39.4762x^3 + 10.9422x^2 y^3 - 13.543x^2 y^2 - 28.8455x^2 y - 0.364625x^2 + 0.611352xy^4 + 6.83183xy^3 + 5.46393xy^2 + 0.121746xy + 0.000798008x - 10.6365y^5 - 11.927y^4 + 0.190151y^3 - 0.000392287y^2 . \qquad (254)$$
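The expansion can be reproduced symbolically. The sketch below (Python with SymPy, not part of the original proof) expands the squared inequality using the rounded coefficients printed above, so its output matches Eq. (254) only up to that rounding.

```python
import sympy as sp

# Sketch: symbolic expansion behind Eq. (253)/(254), using the rounded coefficients
# printed in the text (so the result agrees with Eq. (254) up to rounding).
x, y = sp.symbols('x y')
coeff = 4*x**2 + 2*x*y + 2.76667*x - 2*y**2 - 0.939726*y - 0.0121625
rest  = (-8.0*x**3 - 8.0*x**2*y - 11.0044*x**2 + 2.0*x*y**2 + 1.69548*x*y
         - 0.0464849*x + 2.0*y**3 + 3.59885*y**2 - 0.0232425*y)
poly = sp.expand(coeff**2*((2*x + y)**2 + 5.39467*x) - rest**2)
print(poly)   # leading terms come out near -1.222*x**5 + 40.1*x**4*y + 27.79*x**4 + ...
```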
We obtain the inequalities:
$$-1.2227x^5 + 27.7897x^4 + 41.0176x^3 y^2 + 39.4762x^3 - 13.543x^2 y^2 - 0.364625x^2 \;+\; y\left( 40.1006x^4 + 64.5799x^3 + 10.9422x^2 y^2 - 28.8455x^2 + 6.83183xy^2 + 0.121746x - 10.6365y^4 + 0.190151y^2 \right) \;+\; 0.611352xy^4 + 5.46393xy^2 + 0.000798008x - 11.927y^4 - 0.000392287y^2$$
$$\geqslant\; -1.2227x^5 + 27.7897x^4 + 41.0176\cdot(0.0)^2 x^3 + 39.4762x^3 - 13.543\cdot(0.1)^2 x^2 - 0.364625x^2 \;-\; 0.1\cdot\left( 40.1006x^4 + 64.5799x^3 + 10.9422\cdot(0.1)^2 x^2 - 28.8455x^2 + 6.83183\cdot(0.1)^2 x + 0.121746x + 10.6365\cdot(0.1)^4 + 0.190151\cdot(0.1)^2 \right) \;+\; 0.611352\cdot(0.0)^4 x + 5.46393\cdot(0.0)^2 x + 0.000798008x - 11.927\cdot(0.1)^4 - 0.000392287\cdot(0.1)^2 .$$
We used $24.7796\cdot(20)^4 - 1.2227\cdot(20)^5 = 52090.9 > 0$ and $x \leqslant 20$. We have proved the last inequality $>0$ of Eq. (248). Consequently, the derivative is always positive independent of $y$, thus
$$e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \qquad (255)$$
is strictly monotonically increasing in $x$.

The main subfunction is smaller than zero. Next we show that the sub-function Eq. (101) is smaller than zero. We consider the limit:
$$\lim_{x\to\infty}\left( e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \right) \;=\; 0 . \qquad (256)$$
The limit follows from Lemma 22. Since the function is monotonically increasing in $x$, it has to approach 0 from below. Thus,
$$e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \qquad (257)$$
is smaller than zero.
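The qualitative picture, negative, increasing in $x$, and approaching 0 from below, is easy to confirm numerically. The sketch below (Python with SciPy, not part of the original proof) uses erfcx(z) = e^{z^2} erfc(z) to avoid overflow for large arguments; this reformulation is ours, not the paper's.

```python
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z)

# Sketch: the main subfunction is negative, increasing in x, and tends to 0 from below.
def main_sub(x, y):
    return erfcx((x + y)/np.sqrt(2*x)) - 2*erfcx((2*x + y)/np.sqrt(2*x))

xs = np.linspace(1.2, 20.0, 500)
for y in (-0.1, 0.0, 0.1):
    vals = main_sub(xs, y)
    assert (vals < 0).all()                       # smaller than zero on the domain
    assert (np.diff(vals) > 0).all()              # increasing in x along the grid
print(main_sub(np.array([1e2, 1e4, 1e6]), 0.0))   # small negative values, -> 0 from below
```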
Behavior of the main subfunction with respect to $y$ at minimal $x$. We now consider the derivative of sub-function Eq. (101) with respect to $y$. We proved that sub-function Eq. (101) is strictly monotonically increasing in $x$ independent of $y$. In the proof of Theorem 16, we need the minimum of sub-function Eq. (101). Therefore we are only interested in the derivative of sub-function Eq. (101) with respect to $y$ for the minimal $x = 12/10 = 1.2$. Consequently, we insert the minimum $x = 12/10 = 1.2$ into the sub-function Eq. (101). The main terms become
$$\frac{x+y}{\sqrt{2}\sqrt{x}} \;=\; \frac{y + 1.2}{\sqrt{2}\sqrt{1.2}} \;=\; \frac{y}{\sqrt{2}\sqrt{1.2}} + \frac{\sqrt{1.2}}{\sqrt{2}} \;=\; \frac{5y+6}{2\sqrt{15}} \qquad (258)$$
and
$$\frac{2x+y}{\sqrt{2}\sqrt{x}} \;=\; \frac{y + 2\cdot 1.2}{\sqrt{2}\sqrt{1.2}} \;=\; \frac{y}{\sqrt{2}\sqrt{1.2}} + \sqrt{2}\sqrt{1.2} \;=\; \frac{5y+12}{2\sqrt{15}} . \qquad (259)$$
Sub-function Eq. (101) becomes:
$$e^{\left(\frac{5y+6}{2\sqrt{15}}\right)^2} \operatorname{erfc}\left(\frac{5y+6}{2\sqrt{15}}\right) \;-\; 2\, e^{\left(\frac{5y+12}{2\sqrt{15}}\right)^2} \operatorname{erfc}\left(\frac{5y+12}{2\sqrt{15}}\right) . \qquad (260)$$
The derivative of this function with respect to $y$ is
$$\frac{\sqrt{15}\sqrt{\pi}\left( e^{\frac{(5y+6)^2}{60}} (5y+6) \operatorname{erfc}\left(\frac{5y+6}{2\sqrt{15}}\right) - 2\, e^{\frac{(5y+12)^2}{60}} (5y+12) \operatorname{erfc}\left(\frac{5y+12}{2\sqrt{15}}\right) \right) + 30}{6\sqrt{15}\sqrt{\pi}} . \qquad (261)$$
We will again use the approximation of Ren and MacKenzie [30]:
$$e^{z^2} \operatorname{erfc}(z) \;\approx\; \frac{2.911}{\sqrt{\pi}(2.911-1)z + \sqrt{\pi z^2 + 2.911^2}} . \qquad (262)$$
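As a sanity check of the reconstructed derivative (261), the sketch below (Python with SciPy, not part of the original proof) compares it with a finite difference of (260) and confirms that it is negative on $-0.1 \leqslant y \leqslant 0.1$, i.e. that the sub-function is decreasing in $y$ at $x = 1.2$.

```python
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z)

# Sketch: check the derivative (261) of the sub-function (260) at x = 1.2.
def g(y):                                          # sub-function (260)
    return erfcx((5*y + 6)/(2*np.sqrt(15))) - 2*erfcx((5*y + 12)/(2*np.sqrt(15)))

def dg_dy(y):                                      # closed form, Eq. (261)
    a, b = 5*y + 6, 5*y + 12
    num = (np.sqrt(15)*np.sqrt(np.pi)
           * (a*erfcx(a/(2*np.sqrt(15))) - 2*b*erfcx(b/(2*np.sqrt(15)))) + 30)
    return num / (6*np.sqrt(15)*np.sqrt(np.pi))

ys = np.linspace(-0.1, 0.1, 201)
fd = (g(ys + 1e-6) - g(ys - 1e-6)) / 2e-6
assert np.allclose(fd, dg_dy(ys), atol=1e-6)       # closed form matches finite difference
assert (dg_dy(ys) < 0).all()                       # strictly decreasing in y at x = 1.2
```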
Therefore we first perform an error analysis. We estimated the maximum and minimum of
$$\sqrt{15}\sqrt{\pi}\left( \frac{2.911\,(5y+6)}{\sqrt{\pi}(2.911-1)\frac{5y+6}{2\sqrt{15}} + \sqrt{\pi\left(\frac{5y+6}{2\sqrt{15}}\right)^2 + 2.911^2}} - \frac{2\cdot 2.911\,(5y+12)}{\sqrt{\pi}(2.911-1)\frac{5y+12}{2\sqrt{15}} + \sqrt{\pi\left(\frac{5y+12}{2\sqrt{15}}\right)^2 + 2.911^2}} \right) + 30 \;-\; \left( \sqrt{15}\sqrt{\pi}\left( e^{\frac{(5y+6)^2}{60}} (5y+6) \operatorname{erfc}\left(\frac{5y+6}{2\sqrt{15}}\right) - 2\, e^{\frac{(5y+12)^2}{60}} (5y+12) \operatorname{erfc}\left(\frac{5y+12}{2\sqrt{15}}\right) \right) + 30 \right) . \qquad (263)$$
We obtained for the maximal absolute error the value 0.163052. We added an approximation error of 0.2 to the approximation of the derivative. Since we want to show that the approximation upper bounds the true expression, the addition of the approximation error is required here.
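The error estimate can be reproduced numerically. The sketch below (Python with SciPy, not part of the original analysis) evaluates the reconstructed expression (263) on a fine grid of $y$ and confirms that its magnitude stays below the slack of 0.2 used in the next chain of inequalities.

```python
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z)

# Sketch: approximation error (263) of the numerator of the derivative (261).
def rm(z):                                        # Ren-MacKenzie approximation (262)
    return 2.911 / (np.sqrt(np.pi)*(2.911 - 1)*z + np.sqrt(np.pi*z**2 + 2.911**2))

ys = np.linspace(-0.1, 0.1, 2001)
a, b = 5*ys + 6, 5*ys + 12
s = 2*np.sqrt(15)
approx_part = np.sqrt(15*np.pi)*(a*rm(a/s) - 2*b*rm(b/s))
true_part   = np.sqrt(15*np.pi)*(a*erfcx(a/s) - 2*b*erfcx(b/s))
err = approx_part - true_part                      # the "+30" terms cancel
print(np.abs(err).max())                           # around 0.16, compare with 0.163052
assert np.abs(err).max() < 0.2
```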
We get a sequence of inequalities:
$$\sqrt{15}\sqrt{\pi}\left( e^{\frac{(5y+6)^2}{60}} (5y+6) \operatorname{erfc}\left(\frac{5y+6}{2\sqrt{15}}\right) - 2\, e^{\frac{(5y+12)^2}{60}} (5y+12) \operatorname{erfc}\left(\frac{5y+12}{2\sqrt{15}}\right) \right) + 30 \;\leqslant \qquad (264)$$
$$\sqrt{15}\sqrt{\pi}\left( \frac{2.911\,(5y+6)}{\sqrt{\pi}(2.911-1)\frac{5y+6}{2\sqrt{15}} + \sqrt{\pi\left(\frac{5y+6}{2\sqrt{15}}\right)^2 + 2.911^2}} - \frac{2\cdot 2.911\,(5y+12)}{\sqrt{\pi}(2.911-1)\frac{5y+12}{2\sqrt{15}} + \sqrt{\pi\left(\frac{5y+12}{2\sqrt{15}}\right)^2 + 2.911^2}} \right) + 30 + 0.2$$
$$=\; \frac{(30\cdot 2.911)(5y+6)}{(2.911-1)(5y+6) + \sqrt{(5y+6)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2}} - \frac{2\,(30\cdot 2.911)(5y+12)}{(2.911-1)(5y+12) + \sqrt{(5y+12)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2}} + 30 + 0.2$$
$$=\; \frac{ (0.2+30)\left( (2.911-1)(5y+12) + \sqrt{(5y+12)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2} \right)\left( (2.911-1)(5y+6) + \sqrt{(5y+6)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2} \right) - 2\cdot 30\cdot 2.911\,(5y+12)\left( (2.911-1)(5y+6) + \sqrt{(5y+6)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2} \right) + 2.911\cdot 30\,(5y+6)\left( (2.911-1)(5y+12) + \sqrt{(5y+12)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2} \right) }{ \left( (2.911-1)(5y+6) + \sqrt{(5y+6)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2} \right)\left( (2.911-1)(5y+12) + \sqrt{(5y+12)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2} \right) } \;<\; 0 .$$
We explain this sequence of inequalities.

• First inequality: the approximation of Ren and MacKenzie [30] is applied, and the error bound is added to ensure that the approximation is larger than the true value.

• First equality: the factors $2\sqrt{15}$ and $\sqrt{\pi}$ are factored out and canceled.

• Second equality: bringing all terms onto the denominator
$$\left( (2.911-1)(5y+6) + \sqrt{(5y+6)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2} \right)\left( (2.911-1)(5y+12) + \sqrt{(5y+12)^2 + \left(\frac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2} \right) . \qquad (265)$$

• Last inequality: $<0$ is proved in the following sequence of inequalities.

We look at the numerator of the last term in Eq. (264). We have to prove that this numerator is smaller than zero in order to prove the last inequality of Eq. (264). The numerator is
$$
(0.2+30)\left((2.911-1)(5y+12)+\sqrt{(5y+12)^2+\left(\tfrac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2}\right)\left((2.911-1)(5y+6)+\sqrt{(5y+6)^2+\left(\tfrac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2}\right) - 2\cdot 30\cdot 2.911\,(5y+12)\left((2.911-1)(5y+6)+\sqrt{(5y+6)^2+\left(\tfrac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2}\right) + 2.911\cdot 30\,(5y+6)\left((2.911-1)(5y+12)+\sqrt{(5y+12)^2+\left(\tfrac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}}\right)^2}\right) . \tag{266}
$$

We now compute upper bounds for this numerator:
$$
-1414.99y^2 - 584.739\sqrt{(5y+6)^2+161.84}\;y + 725.211\sqrt{(5y+12)^2+161.84}\;y - 5093.97y - 1403.37\sqrt{(5y+6)^2+161.84} + 30.2\sqrt{(5y+6)^2+161.84}\sqrt{(5y+12)^2+161.84} + 870.253\sqrt{(5y+12)^2+161.84} - 4075.17 \ < \tag{267}
$$
$$
-1414.99y^2 - 584.739\sqrt{(5y+6)^2+161.84}\;y + 725.211\sqrt{(5y+12)^2+161.84}\;y - 5093.97y - 1403.37\sqrt{(6+5\cdot(-0.1))^2+161.84} + 30.2\sqrt{(6+5\cdot 0.1)^2+161.84}\sqrt{(12+5\cdot 0.1)^2+161.84} + 870.253\sqrt{(12+5\cdot 0.1)^2+161.84} - 4075.17 .
$$
For the first inequality we choose $y$ in the roots so that positive terms maximally increase and negative terms maximally decrease. The second inequality just removed the $y^2$ term, which is always negative, and therefore increased the expression. For the last inequality, the term in brackets is negative for all settings of $y$. Therefore we make the brackets as negative as possible and make the whole term positive by multiplying with $y=-0.1$. Consequently
$$
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{268}
$$
is strictly monotonically decreasing in $y$ for the minimal $x = 1.2$.
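This conclusion is easy to probe numerically. The following minimal sketch (illustrative only, not part of the proof; it assumes NumPy/SciPy, uses erfcx(z) = e^{z^2} erfc(z), and the helper name sub_fn is ours) checks that the sub-function is negative and strictly decreasing in y at x = 1.2:

```python
# Illustrative check of the claim at x = 1.2 (assumes NumPy/SciPy).
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z^2) * erfc(z)

def sub_fn(x, y):
    """Main sub-function: erfcx((x+y)/sqrt(2x)) - 2*erfcx((2x+y)/sqrt(2x))."""
    s = np.sqrt(2.0 * x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

ys = np.linspace(-0.1, 0.1, 2001)
vals = sub_fn(1.2, ys)
assert np.all(vals < 0) and np.all(np.diff(vals) < 0)  # negative and decreasing in y
print(vals[0], vals[-1])
```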
Lemma 45 (Main subfunction below). For $0.007 \leqslant x \leqslant 0.875$ and $-0.01 \leqslant y \leqslant 0.01$, the function
$$
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{269}
$$
is smaller than zero, is strictly monotonically increasing in $x$, and is strictly monotonically increasing in $y$ for the minimal $x = 0.007 = 0.00875 \cdot 0.8$, $x = 0.56 = 0.7 \cdot 0.8$, $x = 0.128 = 0.16 \cdot 0.8$, and $x = 0.216 = 0.24 \cdot 0.9$ (lower bound of $0.9$ on $\tau$).

Proof. We first consider the derivative of sub-function Eq. (111) with respect to $x$. The derivative of the function
$$
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{270}
$$
with respect to $x$ is
$$
\frac{\sqrt{\pi}\left(e^{\frac{(x+y)^2}{2x}}(x-y)(x+y)\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}\left(4x^2-y^2\right)\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)\right) + \sqrt{2}\sqrt{x}\,(3x-y)}{2\sqrt{\pi}\,x^2} . \tag{271}
$$
We consider the numerator of Eq. (271), divided by the positive factor $\sqrt{2}\sqrt{x}$:
$$
\frac{\sqrt{\pi}\left(e^{\frac{(x+y)^2}{2x}}(x-y)(x+y)\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}(2x-y)(2x+y)\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)\right)}{\sqrt{2}\sqrt{x}} + (3x-y) . \tag{272}
$$
For bounding this value, we use the approximation
$$
e^{z^2}\operatorname{erfc}(z) \ \approx \ \frac{2.911}{\sqrt{\pi}(2.911-1)z + \sqrt{\pi z^2 + 2.911^2}} \tag{273}
$$
from Ren and MacKenzie [30]. We start with an error analysis of this approximation. According to Ren and MacKenzie (Figure 1), the approximation error is both positive and negative in the range $[0.175, 1.33]$. This range contains all possible arguments of erfc that we consider in this subsection.
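As a quick illustration of this error analysis (not part of the proof; it assumes SciPy, whose erfcx computes e^{z^2} erfc(z) exactly, and the helper name approx_erfcx is ours), one can compare the approximation (273) with the exact value on the relevant range:

```python
# Illustrative comparison of approximation (273) with the exact scaled erfc.
import numpy as np
from scipy.special import erfcx

def approx_erfcx(z, beta=2.911):
    # Eq. (273): exp(z^2)*erfc(z) ~ beta / (sqrt(pi)*(beta-1)*z + sqrt(pi*z^2 + beta^2))
    return beta / (np.sqrt(np.pi) * (beta - 1.0) * z + np.sqrt(np.pi * z**2 + beta**2))

z = np.linspace(0.175, 1.33, 1000)
err = approx_erfcx(z) - erfcx(z)
print(err.min(), err.max())  # small error of both signs on [0.175, 1.33]
```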
Numerically we maximized and minimized the approximation error of the whole expression
$$
E(x,y) = \frac{e^{\frac{(x+y)^2}{2x}}(x-y)(x+y)\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)}{\sqrt{2}\sqrt{x}} - \frac{2e^{\frac{(2x+y)^2}{2x}}(2x-y)(2x+y)\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)}{\sqrt{2}\sqrt{x}} - \left(\frac{2.911\,(x-y)(x+y)}{\sqrt{2}\sqrt{x}\left(\frac{\sqrt{\pi}(2.911-1)(x+y)}{\sqrt{2}\sqrt{x}} + \sqrt{\pi\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)^2 + 2.911^2}\right)} - \frac{2\cdot 2.911\,(2x-y)(2x+y)}{\sqrt{2}\sqrt{x}\left(\frac{\sqrt{\pi}(2.911-1)(2x+y)}{\sqrt{2}\sqrt{x}} + \sqrt{\pi\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)^2 + 2.911^2}\right)}\right) . \tag{274}
$$
We numerically determined $-0.000228141 < E(x,y) < 0.00495688$ for $0.08 \leqslant x \leqslant 0.875$ and $-0.01 \leqslant y \leqslant 0.01$. We used different numerical optimization techniques, such as gradient-based constrained BFGS algorithms and non-gradient-based Nelder-Mead methods, with different starting points. Hence the approximation can exceed the expression that it approximates by at most $0.000228141$. We use an error gap of $-0.0003$ to countermand this error due to the approximation.
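A multi-start search in the spirit of the optimization described above could look as follows (an illustrative sketch, not the procedure actually used; it assumes NumPy/SciPy, and E and approx_erfcx are our own helper names):

```python
# Illustrative multi-start Nelder-Mead search for the extrema of E(x, y) on the box.
import numpy as np
from scipy.special import erfcx
from scipy.optimize import minimize

def approx_erfcx(z, beta=2.911):
    return beta / (np.sqrt(np.pi) * (beta - 1.0) * z + np.sqrt(np.pi * z**2 + beta**2))

def E(p):
    x, y = p
    s = np.sqrt(2.0 * x)
    exact = ((x - y)*(x + y)*erfcx((x + y)/s) - 2*(2*x - y)*(2*x + y)*erfcx((2*x + y)/s)) / s
    appr  = ((x - y)*(x + y)*approx_erfcx((x + y)/s) - 2*(2*x - y)*(2*x + y)*approx_erfcx((2*x + y)/s)) / s
    return exact - appr

lo, hi = np.inf, -np.inf
for x0 in np.linspace(0.08, 0.875, 9):
    for y0 in (-0.01, 0.0, 0.01):
        for sign in (1.0, -1.0):          # sign=+1 searches minima, sign=-1 maxima
            res = minimize(lambda p: sign * E(p), x0=[x0, y0], method="Nelder-Mead")
            x, y = res.x
            if 0.08 <= x <= 0.875 and -0.01 <= y <= 0.01:  # keep feasible optima only
                lo, hi = min(lo, E(res.x)), max(hi, E(res.x))
print(lo, hi)  # should be consistent with the stated bounds on E(x, y)
```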
We have the following sequence of inequalities using the approximation of Ren and MacKenzie [30]:
$$
\frac{\sqrt{\pi}\left(e^{\frac{(x+y)^2}{2x}}(x-y)(x+y)\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}(2x-y)(2x+y)\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)\right)}{\sqrt{2}\sqrt{x}} + (3x-y) \ \geqslant
$$
$$
(3x-y) + \frac{\sqrt{\pi}\,2.911\,(x-y)(x+y)}{\sqrt{2}\sqrt{x}\left(\frac{\sqrt{\pi}(2.911-1)(x+y)}{\sqrt{2}\sqrt{x}} + \sqrt{\pi\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)^2 + 2.911^2}\right)} - \frac{2\sqrt{\pi}\,2.911\,(2x-y)(2x+y)}{\sqrt{2}\sqrt{x}\left(\frac{\sqrt{\pi}(2.911-1)(2x+y)}{\sqrt{2}\sqrt{x}} + \sqrt{\pi\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)^2 + 2.911^2}\right)} - 0.0003 \ =
$$
$$
(3x-y) + \frac{2.911\,(x-y)(x+y)}{(2.911-1)(x+y) + \sqrt{(x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}} - \frac{2\cdot 2.911\,(2x-y)(2x+y)}{(2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}} - 0.0003 \ \geqslant
$$
$$
(3x-y) + \frac{2.911\,(x-y)(x+y)}{(2.911-1)(x+y) + \sqrt{(x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi} + \frac{2\cdot 2.911^2 y}{\pi} + \left(\frac{2.911^2}{\pi}\right)^2}} - \frac{2\cdot 2.911\,(2x-y)(2x+y)}{(2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}} - 0.0003 \ =
$$
$$
(3x-y) + \frac{2.911\,(x-y)(x+y)}{(2.911-1)(x+y) + \sqrt{\left(x+y+\frac{2.911^2}{\pi}\right)^2}} - \frac{2\cdot 2.911\,(2x-y)(2x+y)}{(2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}} - 0.0003 \ =
$$
$$
(3x-y) + \frac{(x-y)(x+y)}{(x+y) + \frac{2.911}{\pi}} - \frac{2\cdot 2.911\,(2x-y)(2x+y)}{(2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}} - 0.0003 \ =
$$
$$
\Bigl(-8x^3 + \left(4x^2 + 2xy + 2.7795x - 2y^2 - 0.9269y - 0.00027798\right)\sqrt{(2x+y)^2 + 5.39467x} - 8x^2y - 10.9554x^2 + 2xy^2 + 1.76901xy - 0.00106244x + 2y^3 + 3.62336y^2 - 0.00053122y\Bigr) \cdot \left(\left((x+y) + \frac{2.911}{\pi}\right)\left((2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}\right)\right)^{-1} . \tag{275}
$$
- First inequality: the approximation of Ren and MacKenzie [30], and then subtracting an error gap of $0.0003$.
- Equalities: the factor $\sqrt{2}\sqrt{x}$ is factored out and canceled.
- Second inequality: we add a positive term in the first root to obtain a binomial form. The term containing the root is positive and the root is in the denominator, therefore the whole term becomes smaller.
- Equalities: solve for the term and factor out.
- Bringing all terms to the denominator $\left((x+y) + \frac{2.911}{\pi}\right)\left((2.911-1)(2x+y) + \sqrt{(2x+y)^2 + \frac{2\cdot 2.911^2 x}{\pi}}\right)$.
- Equalities: multiplying out and expanding terms.
- The last inequality $>0$ is proved in the following sequence of inequalities.

We look at the numerator of the last expression of Eq. (275), which we show to be positive in order to show $>0$ in Eq. (275). The numerator is
$$
-8x^3 + \left(4x^2 + 2xy + 2.7795x - 2y^2 - 0.9269y - 0.00027798\right)\sqrt{(2x+y)^2 + 5.39467x} - 8x^2y - 10.9554x^2 + 2xy^2 + 1.76901xy - 0.00106244x + 2y^3 + 3.62336y^2 - 0.00053122y . \tag{276}
$$
The factor $4x^2 + 2xy + 2.7795x - 2y^2 - 0.9269y - 0.00027798$ in front of the root is positive:
$$
4x^2 + 2xy + 2.7795x - 2y^2 - 0.9269y - 0.00027798 \ > \tag{277}
$$
$$
-2y^2 + 0.007\cdot 2y - 0.9269y + 4\cdot 0.007^2 + 2.7795\cdot 0.007 - 0.00027798 \ = \ -2y^2 - 0.9129y + 2.77942 \ = \ -2(y + 1.42897)(y - 0.972523) \ > \ 0 .
$$
If the term that does not contain the root were positive, then everything would be positive and we would have proved that the numerator is positive. Therefore we consider the case that the term that does not contain the root is negative. The term that contains the root must then be larger than the other term in absolute value.
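This positivity is also easy to confirm on a grid (illustrative only, not part of the proof; assumes NumPy):

```python
# Illustrative grid check: the factor in front of the root is positive on the box.
import numpy as np

x, y = np.meshgrid(np.linspace(0.007, 0.875, 300), np.linspace(-0.01, 0.01, 300))
factor = 4*x**2 + 2*x*y + 2.7795*x - 2*y**2 - 0.9269*y - 0.00027798
print(factor.min())  # strictly positive over the whole (x, y) box
```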
$$
-\left(-8x^3 - 8x^2y - 10.9554x^2 + 2xy^2 + 1.76901xy - 0.00106244x + 2y^3 + 3.62336y^2 - 0.00053122y\right) \ < \tag{278}
$$
$$
\left(4x^2 + 2xy + 2.7795x - 2y^2 - 0.9269y - 0.00027798\right)\sqrt{(2x+y)^2 + 5.39467x} .
$$
Therefore the square of the root term has to be larger than the square of the other term to show $>0$ in Eq. (275). Thus, we have the inequality:
$$
\left(-8x^3 - 8x^2y - 10.9554x^2 + 2xy^2 + 1.76901xy - 0.00106244x + 2y^3 + 3.62336y^2 - 0.00053122y\right)^2 \ < \tag{279}
$$
$$
\left(4x^2 + 2xy + 2.7795x - 2y^2 - 0.9269y - 0.00027798\right)^2\left((2x+y)^2 + 5.39467x\right) .
$$
This is equivalent to
$$
0 \ < \ \left(4x^2 + 2xy + 2.7795x - 2y^2 - 0.9269y - 0.00027798\right)^2\left((2x+y)^2 + 5.39467x\right) \ - \tag{280}
$$
$$
\left(-8x^3 - 8x^2y - 10.9554x^2 + 2xy^2 + 1.76901xy - 0.00106244x + 2y^3 + 3.62336y^2 - 0.00053122y\right)^2 \ =
$$
$$
x\cdot 4.168614250\cdot 10^{-7} - y^2\, 2.049216091\cdot 10^{-7} - 0.0279456x^5 + 43.0875x^4y + 30.8113x^4 + 43.1084x^3y^2 + 68.989x^3y + 41.6357x^3 + 10.7928x^2y^3 - 13.1726x^2y^2 - 27.8148x^2y - 0.00833715x^2 + 0.0139728xy^4 + 5.47537xy^3 + 4.65089xy^2 + 0.00277916xy - 10.7858y^5 - 12.2664y^4 + 0.00436492y^3 .
$$
We obtain the inequalities:
$$
x\cdot 4.168614250\cdot 10^{-7} - y^2\, 2.049216091\cdot 10^{-7} - 0.0279456x^5 + 43.0875x^4y + 30.8113x^4 + 43.1084x^3y^2 + 68.989x^3y + 41.6357x^3 + 10.7928x^2y^3 - 13.1726x^2y^2 - 27.8148x^2y - 0.00833715x^2 + 0.0139728xy^4 + 5.47537xy^3 + 4.65089xy^2 + 0.00277916xy - 10.7858y^5 - 12.2664y^4 + 0.00436492y^3 \ >
$$
$$
x\cdot 4.168614250\cdot 10^{-7} - (0.01)^2\, 2.049216091\cdot 10^{-7} - 0.0279456x^5 + 0.0\cdot 43.0875x^4 + 30.8113x^4 + 43.1084\,(0.0)^2 x^3 + 0.0\cdot 68.989x^3 + 41.6357x^3 + 10.7928\,(0.0)^3 x^2 - 13.1726\,(0.01)^2 x^2 - 27.8148\,(0.01)x^2 - 0.00833715x^2 + \ldots
$$
We used $x \geqslant 0.007$ and $x \leqslant 0.875$ (reducing the negative $x^5$-term to an $x^4$-term). We have proved the last inequality $>0$ of Eq. (275). Consequently, the derivative is always positive independent of $y$, thus
$$
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{282}
$$
is strictly monotonically increasing in $x$.

Next we show that the sub-function Eq. (111) is smaller than zero. We consider the limit:
$$
\lim_{x\to\infty}\left( e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)\right) = 0 . \tag{283}
$$
The limit follows from Lemma 22. Since the function is monotonically increasing in $x$, it has to approach $0$ from below. Thus,
$$
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{284}
$$
is smaller than zero.
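Both conclusions can be probed numerically (illustrative sketch, not part of the proof; assumes NumPy/SciPy, and sub_fn is our helper name):

```python
# Illustrative check: negative, increasing in x on [0.007, 0.875], and -> 0 from below.
import numpy as np
from scipy.special import erfcx

def sub_fn(x, y):
    s = np.sqrt(2.0 * x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

xs = np.linspace(0.007, 0.875, 400)
for y in (-0.01, 0.0, 0.01):
    v = sub_fn(xs, y)
    assert np.all(v < 0) and np.all(np.diff(v) > 0)  # negative and increasing in x
print(sub_fn(1e6, 0.01))  # close to zero and still negative for very large x
```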
We now consider the derivative of sub-function Eq. (111) with respect to $y$. We have proved that sub-function Eq. (111) is strictly monotonically increasing in $x$ independent of $y$. In the proof of Theorem 3, we need the minimum of sub-function Eq. (111). First, we are interested in the derivative of sub-function Eq. (111) with respect to $y$ for the minimum $x = 0.007 = 7/1000$.

Consequently, we insert the minimum $x = 0.007 = 7/1000$ into the sub-function Eq. (111):
$$
e^{\frac{\left(y+\frac{7}{1000}\right)^2}{\frac{14}{1000}}}\operatorname{erfc}\!\left(\frac{y+\frac{7}{1000}}{\sqrt{2}\sqrt{\frac{7}{1000}}}\right) - 2e^{\frac{\left(y+\frac{14}{1000}\right)^2}{\frac{14}{1000}}}\operatorname{erfc}\!\left(\frac{y+\frac{14}{1000}}{\sqrt{2}\sqrt{\frac{7}{1000}}}\right) \ = \ e^{\frac{(1000y+7)^2}{14000}}\operatorname{erfc}\!\left(\frac{1000y+7}{20\sqrt{35}}\right) - 2e^{\frac{(500y+7)^2}{3500}}\operatorname{erfc}\!\left(\frac{500y+7}{10\sqrt{35}}\right) . \tag{285}
$$
The derivative of this function with respect to $y$ is
$$
\frac{1}{7}(1000y+7)\,e^{\frac{(1000y+7)^2}{14000}}\operatorname{erfc}\!\left(\frac{1000y+7}{20\sqrt{35}}\right) - \frac{4}{7}(500y+7)\,e^{\frac{(500y+7)^2}{3500}}\operatorname{erfc}\!\left(\frac{500y+7}{10\sqrt{35}}\right) + 20\sqrt{\frac{5}{7\pi}} \ \geqslant \tag{286}
$$
$$
\frac{1}{7}\bigl(7+1000\cdot(-0.01)\bigr)\,e^{\frac{(7+1000\cdot(-0.01))^2}{14000}}\operatorname{erfc}\!\left(\frac{7+1000\cdot(-0.01)}{20\sqrt{35}}\right) - \frac{4}{7}\bigl(7+500\cdot 0.01\bigr)\,e^{\frac{(7+500\cdot 0.01)^2}{3500}}\operatorname{erfc}\!\left(\frac{7+500\cdot 0.01}{10\sqrt{35}}\right) + 20\sqrt{\frac{5}{7\pi}} \ > \ 3.56 .
$$
For the first inequality, we use Lemma 24. Lemma 24 says that the function $x e^{x^2}\operatorname{erfc}(x)$ has the sign of $x$ and is monotonically increasing to $\frac{1}{\sqrt{\pi}}$. Consequently, we inserted the maximal $y = 0.01$ to make the negative term more negative and the minimal $y = -0.01$ to make the positive term less positive. Consequently
$$
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{287}
$$
is strictly monotonically increasing in $y$ for the minimal $x = 0.007$.
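The same bound can be probed directly (illustrative sketch, not part of the proof; assumes NumPy/SciPy, and d_sub_fn_dy is our helper name for the y-derivative written via erfcx):

```python
# Illustrative check of the y-derivative of the sub-function at the minimal x = 0.007.
import numpy as np
from scipy.special import erfcx

def d_sub_fn_dy(x, y):
    # d/dy [erfcx(z1) - 2*erfcx(z2)] with z1=(x+y)/sqrt(2x), z2=(2x+y)/sqrt(2x),
    # using erfcx'(z) = 2*z*erfcx(z) - 2/sqrt(pi).
    s = np.sqrt(2.0 * x)
    z1, z2 = (x + y) / s, (2.0 * x + y) / s
    return (2.0 * z1 * erfcx(z1) - 4.0 * z2 * erfcx(z2) + 2.0 / np.sqrt(np.pi)) / s

ys = np.linspace(-0.01, 0.01, 401)
print(d_sub_fn_dy(0.007, ys).min())  # stays above the stated lower bound of about 3.56
```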
Next, we consider $x = 0.7 \cdot 0.8 = 0.56$, which corresponds to the maximal $\nu = 0.7$ and minimal $\tau = 0.8$. We insert $x = 0.56 = 56/100$ into the sub-function Eq. (111):
$$
e^{\frac{(y+0.56)^2}{1.12}}\operatorname{erfc}\!\left(\frac{y+0.56}{\sqrt{2}\sqrt{0.56}}\right) - 2e^{\frac{(y+1.12)^2}{1.12}}\operatorname{erfc}\!\left(\frac{y+1.12}{\sqrt{2}\sqrt{0.56}}\right) . \tag{288}
$$
The derivative with respect to $y$ is:
$$
\frac{1}{\sqrt{2}\sqrt{0.56}}\left(2\,\frac{y+0.56}{\sqrt{2}\sqrt{0.56}}\,e^{\frac{(y+0.56)^2}{1.12}}\operatorname{erfc}\!\left(\frac{y+0.56}{\sqrt{2}\sqrt{0.56}}\right) - 4\,\frac{y+1.12}{\sqrt{2}\sqrt{0.56}}\,e^{\frac{(y+1.12)^2}{1.12}}\operatorname{erfc}\!\left(\frac{y+1.12}{\sqrt{2}\sqrt{0.56}}\right) + \frac{2}{\sqrt{\pi}}\right) \ \geqslant \tag{289}
$$
$$
\frac{1}{\sqrt{2}\sqrt{0.56}}\left(2\,\frac{0.56-0.01}{\sqrt{2}\sqrt{0.56}}\,e^{\frac{(0.56-0.01)^2}{1.12}}\operatorname{erfc}\!\left(\frac{0.56-0.01}{\sqrt{2}\sqrt{0.56}}\right) - 4\,\frac{1.12+0.01}{\sqrt{2}\sqrt{0.56}}\,e^{\frac{(1.12+0.01)^2}{1.12}}\operatorname{erfc}\!\left(\frac{1.12+0.01}{\sqrt{2}\sqrt{0.56}}\right) + \frac{2}{\sqrt{\pi}}\right) \ > \ 0 .
$$
For the first inequality we applied Lemma 24, which states that the function $x e^{x^2}\operatorname{erfc}(x)$ is monotonically increasing. Consequently, we inserted the maximal $y = 0.01$ to make the negative term more negative and the minimal $y = -0.01$ to make the positive term less positive. Consequently
$$
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{290}
$$
is strictly monotonically increasing in $y$ for $x = 0.56$.

Next, we consider $x = 0.16 \cdot 0.8 = 0.128$, which corresponds to the minimal $\tau = 0.8$. We insert the minimum $x = 0.128 = 128/1000$ into the sub-function Eq. (111):
$$
e^{\frac{\left(y+\frac{128}{1000}\right)^2}{\frac{256}{1000}}}\operatorname{erfc}\!\left(\frac{y+\frac{128}{1000}}{\sqrt{2}\sqrt{\frac{128}{1000}}}\right) - 2e^{\frac{\left(y+\frac{256}{1000}\right)^2}{\frac{256}{1000}}}\operatorname{erfc}\!\left(\frac{y+\frac{256}{1000}}{\sqrt{2}\sqrt{\frac{128}{1000}}}\right) \ = \ e^{\frac{(125y+16)^2}{4000}}\operatorname{erfc}\!\left(\frac{125y+16}{20\sqrt{10}}\right) - 2e^{\frac{(125y+32)^2}{4000}}\operatorname{erfc}\!\left(\frac{125y+32}{20\sqrt{10}}\right) . \tag{291}
$$
The derivative with respect to $y$ is:
Next, we consider x = 0.16 · 0.8 = 0.128, which is the minimal τ = 0.8. We insert the minimum x = 0.128 = 128/1000 into the sub-function Eq. (111):

e^{\frac{(y+\frac{128}{1000})^2}{2\cdot\frac{128}{1000}}} erfc\left(\frac{y+\frac{128}{1000}}{\sqrt{2}\sqrt{\frac{128}{1000}}}\right) - 2 e^{\frac{(y+\frac{256}{1000})^2}{2\cdot\frac{128}{1000}}} erfc\left(\frac{y+\frac{256}{1000}}{\sqrt{2}\sqrt{\frac{128}{1000}}}\right) .   (291)

The derivative with respect to y is:

\frac{1}{16}\left( e^{\frac{(125y+16)^2}{4000}} (125y+16)\, erfc\left(\frac{125y+16}{20\sqrt{10}}\right) - 2 e^{\frac{(125y+32)^2}{4000}} (125y+32)\, erfc\left(\frac{125y+32}{20\sqrt{10}}\right) + 20\sqrt{\frac{10}{\pi}} \right)   (292)

\geq \frac{1}{16}\left( (16+125(-0.01))\, e^{\frac{(16+125(-0.01))^2}{4000}} erfc\left(\frac{16+125(-0.01)}{20\sqrt{10}}\right) - 2 e^{\frac{(32+125\cdot 0.01)^2}{4000}} (32+125\cdot 0.01)\, erfc\left(\frac{32+125\cdot 0.01}{20\sqrt{10}}\right) + 20\sqrt{\frac{10}{\pi}} \right) > 0.4468 .

For the first inequality we applied Lemma 24, which states that the function x e^{x^2} erfc(x) is monotonically increasing. Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = −0.01 to make the positive term less positive. Consequently

e^{\frac{(x+y)^2}{2x}} erfc\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2 e^{\frac{(2x+y)^2}{2x}} erfc\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)   (293)

is strictly monotonically increasing in y for x = 0.128.
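As a numeric cross-check of the stated bound, the following sketch (not part of the proof; it assumes the reconstruction of Eq. (292) above) evaluates the right-hand side with SciPy's erfcx = e^{z^2} erfc(z):

```python
import numpy as np
from scipy.special import erfcx

s = 20.0 * np.sqrt(10.0)  # = 125 * sqrt(2 * 0.128)
lower_bound = (
    (16.0 - 1.25) * erfcx((16.0 - 1.25) / s)            # positive term at y = -0.01
    - 2.0 * (32.0 + 1.25) * erfcx((32.0 + 1.25) / s)     # negative term at y = +0.01
    + 20.0 * np.sqrt(10.0 / np.pi)
) / 16.0
print(round(lower_bound, 4))  # ~ 0.4469, consistent with the stated > 0.4468
assert lower_bound > 0.4468
```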
Next, we consider x = 0.24 · 0.9 = 0.216, which is the minimal τ = 0.9 (here we consider 0.9 as lower bound for τ). We insert the minimum x = 0.216 = 216/1000 into the sub-function Eq. (111):

e^{\frac{(y+\frac{216}{1000})^2}{2\cdot\frac{216}{1000}}} erfc\left(\frac{y+\frac{216}{1000}}{\sqrt{2}\sqrt{\frac{216}{1000}}}\right) - 2 e^{\frac{(y+\frac{432}{1000})^2}{2\cdot\frac{216}{1000}}} erfc\left(\frac{y+\frac{432}{1000}}{\sqrt{2}\sqrt{\frac{216}{1000}}}\right)   (294)

= e^{\frac{(125y+27)^2}{6750}} erfc\left(\frac{125y+27}{15\sqrt{30}}\right) - 2 e^{\frac{(125y+54)^2}{6750}} erfc\left(\frac{125y+54}{15\sqrt{30}}\right) .
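A quick numeric check (a sketch, not part of the proof; the helper names are mine) that the simplified form above agrees with the direct substitution x = 0.216, again writing each product e^{z^2} erfc(z) as SciPy's erfcx(z):

```python
import numpy as np
from scipy.special import erfcx

def direct(y, x=0.216):
    s = np.sqrt(2.0 * x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

def simplified(y):
    s = 15.0 * np.sqrt(30.0)  # = 125 * sqrt(2 * 0.216)
    return erfcx((125.0 * y + 27.0) / s) - 2.0 * erfcx((125.0 * y + 54.0) / s)

y = np.linspace(-0.01, 0.01, 11)
assert np.allclose(direct(y), simplified(y))
```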
The derivative with respect to y is:

\frac{1}{27}\left( e^{\frac{(125y+27)^2}{6750}} (125y+27)\, erfc\left(\frac{125y+27}{15\sqrt{30}}\right) - 2 e^{\frac{(125y+54)^2}{6750}} (125y+54)\, erfc\left(\frac{125y+54}{15\sqrt{30}}\right) + 15\sqrt{\frac{30}{\pi}} \right)   (295)

\geq \frac{1}{27}\left( (27+125(-0.01))\, e^{\frac{(27+125(-0.01))^2}{6750}} erfc\left(\frac{27+125(-0.01)}{15\sqrt{30}}\right) - 2 e^{\frac{(54+125\cdot 0.01)^2}{6750}} (54+125\cdot 0.01)\, erfc\left(\frac{54+125\cdot 0.01}{15\sqrt{30}}\right) + 15\sqrt{\frac{30}{\pi}} \right) > 0.211288 .

For the first inequality we applied Lemma 24, which states that the function x e^{x^2} erfc(x) is monotonically increasing. Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = −0.01 to make the positive term less positive.
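The analogous cross-check for x = 0.216 (again a sketch, assuming the reconstruction of Eq. (295) above) reproduces the stated bound:

```python
import numpy as np
from scipy.special import erfcx

s = 15.0 * np.sqrt(30.0)  # = 125 * sqrt(2 * 0.216)
lower_bound = (
    (27.0 - 1.25) * erfcx((27.0 - 1.25) / s)            # positive term at y = -0.01
    - 2.0 * (54.0 + 1.25) * erfcx((54.0 + 1.25) / s)     # negative term at y = +0.01
    + 15.0 * np.sqrt(30.0 / np.pi)
) / 27.0
print(round(lower_bound, 4))  # ~ 0.2113, consistent with the stated > 0.211288
assert lower_bound > 0.21
```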
Consequently

e^{\frac{(x+y)^2}{2x}} erfc\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2 e^{\frac{(2x+y)^2}{2x}} erfc\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)   (296)

is strictly monotonically increasing in y for x = 0.216.

Lemma 46 (Monotone Derivative). For λ = λ01, α = α01 and the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.00875 ≤ ν ≤ 0.7, and 0.8 ≤ τ ≤ 1.25, we are interested in the derivative of

τ \left( e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}} erfc\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2 e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}} erfc\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right) .   (297)

The derivative of the equation above with respect to

• ν is larger than zero;

• τ is smaller than zero for maximal ν = 0.7, ν = 0.16, and ν = 0.24 (with 0.9 ≤ τ);

• y = µω is larger than zero for ντ = 0.00875 · 0.8 = 0.007, ντ = 0.7 · 0.8 = 0.56, ντ = 0.16 · 0.8 = 0.128, and ντ = 0.24 · 0.9 = 0.216.
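Before the proof, all three sign claims can be spot-checked numerically. The sketch below is not part of the proof: it evaluates Eq. (297) with SciPy's erfcx (erfcx(z) = e^{z^2} erfc(z) absorbs the exponential factors) and tests the signs of central finite differences on a coarse grid over the stated domain; the helper name expr and the grid sizes are my own choices.

```python
import itertools
import numpy as np
from scipy.special import erfcx

def expr(mu, omega, nu, tau):
    # tau times the sub-function of Eq. (111), i.e. the expression of Eq. (297),
    # written with erfcx(z) = exp(z**2) * erfc(z).
    x, y = nu * tau, mu * omega
    s = np.sqrt(2.0 * x)
    return tau * (erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s))

h = 1e-6
mus = omegas = np.linspace(-0.1, 0.1, 5)

# Claim 1: the derivative with respect to nu is positive on the whole domain.
for mu, om, nu, tau in itertools.product(
        mus, omegas, np.linspace(0.00875, 0.7, 7), np.linspace(0.8, 1.25, 7)):
    assert (expr(mu, om, nu + h, tau) - expr(mu, om, nu - h, tau)) / (2 * h) > 0

# Claim 2: the derivative with respect to tau is negative for nu = 0.7, nu = 0.16,
# and nu = 0.24 (the latter restricted to tau >= 0.9).
for nu, tau_lo in [(0.7, 0.8), (0.16, 0.8), (0.24, 0.9)]:
    for mu, om, tau in itertools.product(mus, omegas, np.linspace(tau_lo, 1.25, 7)):
        assert (expr(mu, om, nu, tau + h) - expr(mu, om, nu, tau - h)) / (2 * h) < 0

# Claim 3: the derivative with respect to y = mu*omega is positive at the listed
# values of nu*tau (the tau factor only scales, so the sign is unaffected).
for x in [0.007, 0.56, 0.128, 0.216]:
    s = np.sqrt(2.0 * x)
    f = lambda yy: erfcx((x + yy) / s) - 2.0 * erfcx((2.0 * x + yy) / s)
    for y in np.linspace(-0.01, 0.01, 21):
        assert (f(y + h) - f(y - h)) / (2 * h) > 0

print("all finite-difference sign checks passed")
```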
Proof. We consider the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.00875 ≤ ν ≤ 0.7, and 0.8 ≤ τ ≤ 1.25. We use Lemma 17 to determine the derivatives. Consequently, the derivative of

τ \left( e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}} erfc\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2 e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}} erfc\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right)   (298)

with respect to ν is larger than zero, which follows directly from Lemma 17 using the chain rule. Consequently, the derivative of

τ \left( e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}} erfc\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2 e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}} erfc\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right)   (299)

with respect to y = µω is larger than zero for ντ = 0.00875 · 0.8 = 0.007, ντ = 0.7 · 0.8 = 0.56, ντ = 0.16 · 0.8 = 0.128, and ντ = 0.24 · 0.9 = 0.216, which also follows directly from Lemma 17. We now consider the derivative with respect to τ, which is not trivial since τ is a factor of the whole expression. The sub-expression should be maximized as it appears with negative sign in the mapping for ν.
First, we consider the function for the largest ν = 0.7 and the largest y = µω = 0.01 for determining the derivative with respect to τ. The expression becomes

τ \left( e^{\frac{\left(\frac{7\tau}{10}+\frac{1}{100}\right)^2}{2\cdot\frac{7\tau}{10}}} erfc\left(\frac{\frac{7\tau}{10}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{7\tau}{10}}}\right) - 2 e^{\frac{\left(\frac{14\tau}{10}+\frac{1}{100}\right)^2}{2\cdot\frac{7\tau}{10}}} erfc\left(\frac{\frac{14\tau}{10}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{7\tau}{10}}}\right) \right) .   (300)

The derivative with respect to τ is

\left( \sqrt{\pi} \left( e^{\frac{(70\tau+1)^2}{14000\tau}} \left(700\tau(7\tau+20)-1\right) erfc\left(\frac{70\tau+1}{20\sqrt{35}\sqrt{\tau}}\right) - 2 e^{\frac{(140\tau+1)^2}{14000\tau}} \left(2800\tau(7\tau+5)-1\right) erfc\left(\frac{140\tau+1}{20\sqrt{35}\sqrt{\tau}}\right) \right) + 20\sqrt{35}(210\tau-1)\sqrt{\tau} \right) \left(14000\sqrt{\pi}\,\tau\right)^{-1} .   (301)

We are considering only the numerator and use again the approximation of Ren and MacKenzie [30]. The error analysis on the whole numerator gives an approximation error 97 < E < 186. Therefore we add 200 to the numerator when we use the approximation of Ren and MacKenzie [30]. We obtain the inequalities:
Or (7or+1 : 707 +1 a ( @Tra807 7007 (77 + 20) — 1) erfe va *(700r( )-0) (was) 1407 +1 20V35,/T 20 Et (28007(Tr +5) —lerfe ( )) + 20V35(2107 — 1) V7 < Vi 2.911(7007 (77 + 20) — y _ Va(2.911—-1)(707+1) 7Or+1 ' 2 20V35/T ryt (x sh) + 2.911 2. 2.911(28007(77 +5) — 1) 2 Vi(2.911-1)(1407+1) | 140741 1 2 20735 /7 ryt (4) + 2.911 + 20V35(2107 — 1),/7 + 200 = Vi (7007(77 + 20) — 1) (20- V35 - 2.911/7) _ V/n(2.911 — 1)(707 +1) + V0 . 2.911V35V7)" +7(707 + 1)? 2(2800r(7r +5) — 1) (20- V35- 2.911,/7) Vr(2.911 — 1)(1407 + 1) + (20: V35 - 2.911
(302) + π(70τ + 1)2 ) √ √ V72- 20 - 35 - 2.911(28007 (77 +5) — 1) vr (vaeon —1)(70r +1) + y (20 . 35 - 20llyF) + (707 + 1) ((vaeon —1)(70r +1) + (cove 2.911- vi). + -n(707 + ») -1 (vaeon —1)(1407 +1) + y (cove -2.911- vi) + m(1407 + )) . After applying the approximation of Ren and MacKenzie [30] and adding 200, we first factored out 20 We now consider the numerator: (20v35(2 Or — 1) Vr+ 200) (vem —1)(70r +1) + (20: V35 + 2. ouiy7) + n(707 +1) (303)
(303) (vaem —1)(1407 +1) + y (2 -V35 2.9 IVF). m (1407 ») + 2.911 - 20V35Vx(7007 (77 + 20) — 1) V7 (vaem ~1)(1407 +1) + (20 . 35 - 2.9 17). (1407 v) - Vr2- 20 - 35 - 2.911(28007 (77 +5) — 1) V7 (vaem —1)(707 + 1) + y (eo . 352.91 vi). + (707 + 0) = — 1.70658 x 10° (707 + 1)? + 1186357 79/2 + 200V35\/m(70r + 1)? + 1186357 V/(1407 + 1)? + 118635773/? + 8.60302 x 10° \/7(1407 + 1)? + 118635r77/? — 2.89498 x 10779/? — .21486 x 107 \/x(707 + 1)? + 11863577°/? + 8.8828 x 10° \/n (1407 + 1)? + 11863577°/? — 2.43651 x 10775/? — 1.46191 x 10°77/? + 2.24868 x 1077? + 94840.5./2(707 +
— 2.43651 x 10775/? — 1.46191 x 10°77/? + 2.24868 x 1077? + 94840.5./2(707 + 1)? + 11863577 + 47420.2/ (1407 + 1)? + 11863577 + 4818607 + 710.354V7 + 820.213,/7 /0(707 + 1)? + 1186357 + 677.432 \/n(707 + 1)? + 1186357 — 011.27 V7 /n(1407 + 1)? + 1186357 — 20V35/7 (707 + 1)? + 1186357 \/7 (1407 + 1)? + 1186357 + 200/71 (707 + 1)? + 1186357 (1407 + 1)? + 1186357 + 677.432,/7 (1407 + 1)? + 1186357 + 2294.57 = — 2.89498 x 107r9/? — 2.43651 x 10779/? — 1.46191 x 10°77/? + s (-1.70658 x 107r9/? — 1.21486 x 1077°/? + 94840.57 + 820.213/7 + 677.432) m(707 + 1)? + 1186357 + (8.60302 x 10°79/? + 8.8828 x 10°r5/? +
— 2.89498 x 10773/? — 2.43651 x 1077°/? — 1.46191 x 1097 7/24 (—1.70658 x 10773/? — 1.21486 x 10775/? + 820.213V1.25 + 1.25 - 94840.5 + 677.432) m(707 + 1)? + 1186357+ (8.60302 x 10°79/? + 8.8828 x 10°r5/? — 1011.27V0.8 + 1.25 - 47420.2 + 677.432) s/m(1407 + 1)? + 1186357+ (4200 3573/2 — 20V35 V7 + 200) /m(70r + 1)? + 1186357 (1407 + 1)? + 1186357+ 2.24868 x 10"r? + 710.354V1.25 + 1.25 - 481860 + 2294.57 = — 2.89498 x 10779/? — 2.43651 x 10779/? — 1.46191 x 1097 7/24 —1.70658 x 10°r3/? — 1.21486 x 1077>/? + 120145.) m(707 + 1)? + 1186357+ 8.60302 x 10°79/? + 8.8828 x 10°7°/? +
+ 120145.) m(707 + 1)? + 1186357+ 8.60302 x 10°79/? + 8.8828 x 10°7°/? + 59048.2) m(1407 + 1)? + 1186357+ 4200V357°/? — 20V35/7 + 200) Va(70r + 1)? + 1186357 (1407 + 1)? + 11863574 2.24868 x 10°r? + 605413 = — 2.89498 x 10773/? — 2.43651 x 107r°/? — 1.46191 x 1097 7/24 8.60302 x 10°7/? + 8.8828 x 10°r°/? + 59048.2) s/196007(r + 1.94093)(7 + 0.0000262866)+ —1.70658 x 10°r3/? — 1.21486 x 1077>/? + 120145.) 9/4900 (7 + 7.73521) (7 + 0.0000263835)-+ 4200V3573/2 — 20/357 + 200) s/196007(r + 1.94093) (7 + 0.0000262866) \/49007(7 + 7.73521) (7 + 0.0000263835)+ 2.24868 x
(7 + 0.0000262866) \/49007(7 + 7.73521) (7 + 0.0000263835)+ 2.24868 x 10'r? + 605413 < — 2.89498 x 10773/? — 2.43651 x 107r°/? — 1.46191 x 1097 7/24 (8.60302 x 10°79/? + 8.8828 x 1087°/? + 59048.2) 196007 (7 + 1.94093)7+ (-1.70658 x 10%r9/? — 1.21486 x 10779/? + 120145.) 949007 1.00003(7 + 7.73521)7+ (4200 3573/2 — 20V35V7 + 200) 4/1960071.00003(7 + 1.94093)r s/490071.00003(r + 7.73521)T+ 2.24868 x 10°r? + 605413 = — 2.89498 x 107r3/? — 2.43651 x 1077>/? — 1.46191 x 1097/24
2.89498 x 10°7r3/? — 2.43651 x 1077°/? — 1.46191 x 109r7/? + 2.24868 x 1077? + 605413 = — 4.84561 x 1077/2 + 4.07198 x 10°7°/? — 1.46191 x 10977/2— 4.66103 x 10°? — 2.34999 x 10°7?+ 3.29718 x 10°r + 6.97241 x 10’ \/7 + 605413 < 60541373/? 0.83/2 4.07198 x 109r°/? — 1.46191 x 10°77/?— 3.29718 x LO" /7r 6.97241 x 10% r/r V0.8 0.8 73/2 (—4.66103 x 1083/2 — 1.46191 x 1097? — 2.34999 x 10°V/7+ — 4.84561 x 1073/24 4.66103 x 10°? — 2.34999 x 10°7? 4 4.07198 x 10°r + 7.64087 x 107) < 7 7 ee (~s.00103 x 10%r4/2 — 1.46191 x 10%7? 4 TOAST x10" V7 v0.8
− 4.14199 × 10^7 τ^2 < 0 .

First we expanded the term (multiplied it out). Then we put the terms multiplied by the same square root into brackets. The next inequality sign stems from inserting the maximal value of 1.25 for τ for some positive terms and the value of 0.8 for negative terms. These terms are then expanded at the =-sign. The next equality factors the terms under the square root. We decreased the negative term by setting τ + 0.0000263835 = τ under the root. We increased positive terms by setting τ + 0.000026286 = 1.00003τ and τ + 0.000026383 = 1.00003τ under the root for positive terms. The positive terms are increased, since (0.8 + 0.000026383)/0.8 ≈ 1.00003, thus τ + 0.000026286 < τ + 0.000026383 < 1.00003τ. For the next inequality we decreased negative terms by inserting τ = 0.8 and increased positive terms by inserting τ = 1.25. The next equality expands the terms. We use the upper bound of 1.25 and the lower bound of 0.8 to obtain terms with corresponding exponents of τ. For the last <-sign we used the function
−1.46191 × 10^9 τ^{3/2} + 4.07198 × 10^9 \sqrt{\tau} − 4.66103 × 10^8 τ − 2.26457 × 10^9 .   (304)

The derivative of this function is

−2.19286 × 10^9 \sqrt{\tau} + \frac{2.03599 × 10^9}{\sqrt{\tau}} − 4.66103 × 10^8   (305)

and the second order derivative is

− \frac{1.01799 × 10^9}{\tau^{3/2}} − \frac{1.09643 × 10^9}{\sqrt{\tau}} < 0 .   (306)

The derivative at 0.8 is smaller than zero:

−2.19286 × 10^9 \sqrt{0.8} + \frac{2.03599 × 10^9}{\sqrt{0.8}} − 4.66103 × 10^8 = −1.51154 × 10^8 < 0 .   (307)

Since the second order derivative is negative, the derivative decreases with increasing τ. Therefore the derivative is negative for all values of τ that we consider, that is, the function Eq. (304) is strictly monotonically decreasing. The maximum of the function Eq. (304) is therefore at 0.8. We inserted 0.8 to obtain the maximum.

Consequently, the derivative of

τ \left( e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}} erfc\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2 e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}} erfc\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right)   (308)

with respect to τ is smaller than zero for maximal ν = 0.7.
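Both steps of this argument can be spot-checked numerically: that the derivative in Eq. (305) stays negative on [0.8, 1.25], and that the τ-derivative for ν = 0.7, y = 0.01 (Eq. (301)) is itself negative there. The sketch below is not part of the proof; it assumes the reconstructions of Eqs. (300), (301), (304) and (305) given above, uses SciPy's erfcx = e^{z^2} erfc(z), and the helper names g and dg are mine.

```python
import numpy as np
from scipy.special import erfcx

tau = np.linspace(0.8, 1.25, 451)

# Eq. (305): derivative of the auxiliary function Eq. (304). It is negative on the
# whole range, so Eq. (304) is strictly decreasing with its maximum at tau = 0.8.
d304 = -2.19286e9 * np.sqrt(tau) + 2.03599e9 / np.sqrt(tau) - 4.66103e8
assert np.all(d304 < 0)
print("Eq. (305) at tau = 0.8: %.4e" % d304[0])  # ~ -1.51e8, cf. Eq. (307)

def g(t):
    # Eq. (300): the expression for nu = 0.7, y = 0.01, written with erfcx.
    x = 0.7 * t
    s = np.sqrt(2.0 * x)
    return t * (erfcx((x + 0.01) / s) - 2.0 * erfcx((2.0 * x + 0.01) / s))

def dg(t):
    # Eq. (301): the closed-form tau-derivative of g, as reconstructed above.
    a = 20.0 * np.sqrt(35.0) * np.sqrt(t)
    num = np.sqrt(np.pi) * (
        (700.0 * t * (7.0 * t + 20.0) - 1.0) * erfcx((70.0 * t + 1.0) / a)
        - 2.0 * (2800.0 * t * (7.0 * t + 5.0) - 1.0) * erfcx((140.0 * t + 1.0) / a)
    ) + 20.0 * np.sqrt(35.0) * (210.0 * t - 1.0) * np.sqrt(t)
    return num / (14000.0 * np.sqrt(np.pi) * t)

h = 1e-6
fd = (g(tau + h) - g(tau - h)) / (2.0 * h)        # central finite difference
assert np.allclose(dg(tau), fd, rtol=1e-4, atol=1e-8)
assert np.all(dg(tau) < 0)                        # negative for all tau in [0.8, 1.25]
```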
Next, we consider the function for the largest ν = 0.16 and the largest y = µω = 0.01 for determining the derivative with respect to τ. The expression becomes

τ \left( e^{\frac{\left(\frac{16\tau}{100}+\frac{1}{100}\right)^2}{2\cdot\frac{16\tau}{100}}} erfc\left(\frac{\frac{16\tau}{100}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{16\tau}{100}}}\right) - 2 e^{\frac{\left(\frac{32\tau}{100}+\frac{1}{100}\right)^2}{2\cdot\frac{16\tau}{100}}} erfc\left(\frac{\frac{32\tau}{100}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{16\tau}{100}}}\right) \right) .   (309)

The derivative with respect to τ is

\left( \sqrt{\pi} \left( e^{\frac{(16\tau+1)^2}{3200\tau}} \left(128\tau(2\tau+25)-1\right) erfc\left(\frac{16\tau+1}{40\sqrt{2}\sqrt{\tau}}\right) - 2 e^{\frac{(32\tau+1)^2}{3200\tau}} \left(128\tau(8\tau+25)-1\right) erfc\left(\frac{32\tau+1}{40\sqrt{2}\sqrt{\tau}}\right) \right) + 40\sqrt{2}(48\tau-1)\sqrt{\tau} \right) \left(3200\sqrt{\pi}\,\tau\right)^{-1} .   (310)

We are considering only the numerator and use again the approximation of Ren and MacKenzie [30]. The error analysis on the whole numerator gives an approximation error 1.1 < E < 12. Therefore we add 20 to the numerator when we use the approximation of Ren and MacKenzie [30]. We obtain the inequalities:
vi(¢ OSs (1287 (2 27 + 25) 1) ert ( 16741 ) 40V 2/7 327 +1 mas) + 40V/2(487r —1)/7 < (32741)? . 2e~ 32007 (1287 (87 + 25) — 1) erfe ( 2.911(1287(27 + 25) — 1) Vr 2 Vm(2.911-1)(167+1) | 16741 I 2 40V2/7 ryt (a2¢4) + 2.911 2+ 2.911(1287(87 + 25) — 1) 2 Va(2.911—1)(327+1) , 32741 j 2 40V2V7 ryt (224) + 2.911 + 40V/2(487 — 1) V7 +20 = (1287 (27 + 25) — 1) (40V22.911,/r) Jn (2.911 — 1)(167 + 1) + \ (4ov2.911V7)" + (167 + 1)? 2(1287(87 + 25) — 1) (40,/22.911/7) Vn(2.911 — 1)(327 +1) + \(4ov2.911V7)" + (327 + 1)? 40V/2(487 — 1) /7 +20
(311) 0) + π(16τ + 1)2 77 √ √ 2 √ 2.911 - 40V2V/7 (1287 (27 + 25) — 1) V7 (vaeon ~1)(32r +1) + y (sov olivz) + 1(327 +1) ‘)- 2V/740/22.911 (1287 (87 + 25) — 1) Vr (vieon — 1)(16r +1) +y/ (ove ovr) + (167 + )) (Cae — 1)(327 +1) + y (aova2.011vF)’ + (327 + 0) -1 (vizon —1)(32r +1) + (sov% ouiyF) + (327 +1)? )) . After applying the approximation of Ren and MacKenzie [30] and adding 20, we first factored out 40 We now consider the numerator: 2 (40v2(48r -~vr+ 20) (vaeon — 1)(167 +1) + y (sova2.001y) + m(16r + 0) (312)
(312) A (vaeon —1)(327 +1) + V 2.911 - 40V2V/7 (1287 (27 + 25) — 1) V7 (vaeon —1)(327 +1) + V 4ov22.911V7). + 1(327 + 0) - 2/740 22.911 (1287(87 + 25) — 1)/7 (vacon —1)(167 +1) + / — 1.86491 x 10° (167 + 1)? + 27116.5779/24 1920V2./m(16r + 1)? + 27116.57 V/7(327 + 1)? + 27116.57 79/24 940121 /7(327 + 1)? + 27116.577°/? — 3.16357 x 10°79/?— 303446 7 (167 + 1)? + 27116.577°/? + 221873 ,/7(327 + 1)? + 27116.577°/? — 6085887°/? — 8.34635 x 10°r7/? + 117482.77 + 2167.78\/n(167 + 1)? + 27116.577+ 1083.89 \/7(32r + 1)? + 27116.577+ 11013.97 + 339.614\/F + 392.137, /7\/n(167 + 1)? +
\/7(32r + 1)? + 27116.577+ 11013.97 + 339.614\/F + 392.137, /7\/n(167 + 1)? + 27116.57-+ 67.7432,/m (167 + 1)? + 27116.57 — 483.4787 (327 + 1)? + 27116.57— 40V 2/7 /(167 + 1)? + 27116.57 \/7(327 + 1)? + 27116.57+ 20./ (167 + 1)? + 27116.57 \/1(327 + 1)? + 27116.57+ 67.7432 \/7(327 + 1)? + 27116.57 + 229.457 = — 3.16357 x 10°7°/? — 60858875/? — 8.34635 x 1077/24 (-1.86491 x 1053/2 — 30344675/2 4 2167.787 + 392.137/7 + 67.7432) ov2.911y7). + (327 + 0) + fs oy22.911VF) + n(16r + 0) = m(167 + 1)? + 27116.57+ (94012179? + 2218737°/? + 1083.897 — 483.478,/7 + 67.7432)
m(327 + 1)? + 27116.57 + 1920V2r3/? — 40V 2/7 + 20) s/n (167 + 1)? + 27116.57 V/n(327 + 1)? + 27116.57 + 117482.7? + 11013.97 + 339.6147 + 229.457 < — 3.16357 x 10°r3/? — 6085887°>/? — 8.34635 x 10°r7/24 ~1.86491 x 10°r3/? — 30344675/? + 392.187V/1.25 + 1.252167.78 + 67.7432) s/n (167 + 1)? + 27116.57+ 94012179/? + 2218737°/? — 483.478V0.8 + 1.251083.89 + 67.7432) s/7(827 + 1)? + 27116.57+ 1920V2r9/? — 40V2V7 + 20) (167 + 1)? + 27116.57 (327 + 1)? + 27116.57+ 117482.r? + 339.614V1.25 + 1.2511013.9 + 229.457 = — 3.16357 x 10°r3/? — 6085887°>/? — 8.34635 x 10°r7/24
+ 229.457 = — 3.16357 x 10°r3/? — 6085887°>/? — 8.34635 x 10°r7/24 —1.86491 x 10°r9/? — 30344675/? + 3215.89) s/n(16r + 1)? + 27116.57+ 94012179/? + 2218737°/? + 990.171) m(327 + 1)? + 27116.57+ 1920V2r3/? — 40V 2/7 + 20) s/n (167 + 1)? + 27116.57 V/n(327 + 1)? + 27116.57 + 1174827? + 14376.6 = — 3.16357 x 10°r3/? — 6085887°>/? — 8.34635 x 10°r7/24 94012179/? + 2218737°/? + 990.171) s/10247 (7 + 8.49155)(7 + 0.000115004)+ —1.86491 x 10°79/? — 30344675/? + 3215.89) \/256n(7 + 33.8415)(7 + 0.000115428)+ 1920V2r3/? — 40/2\/7 + 20) s/10247(r + 8.49155)(7 + 0.000115004) »/256n(7 +
— 40/2\/7 + 20) s/10247(r + 8.49155)(7 + 0.000115004) »/256n(7 + 33.8415) (7 + 0.000115428)+ 117482.7? + 14376.6 < — 3.16357 x 10°r3/? — 6085887°>/? — 8.34635 x 10°r7/24 94012179/? + 2218737°/? + 990.171) s/102471.00014(7 + 8.49155)7+ 1920V2r3/? — 40V2/F + 20) 9/25671.00014(7 + 33.8415)7 10247 1.00014 (7 + 8.49155)7-+ ~1.86491 x 10°r3/? — 3034467°/? + 3215.89) \/2560(7 + 33.8415)T+ 117482.7? + 14376.6 = — 3.16357 x 10°7°/? — 60858875/? — 8.34635 x 1077/24
117482.7? + 14376.6 = — 3.16357 x 10°7°/? — 60858875/? — 8.34635 x 1077/24 √ —9100379/? + 4.36814 x 10°79/? + 32174.4r) 1.25852 x 10°73 + 5.33261 x 10’7? + 56165.1/7) —8.60549 x 10°7* — 5.28876 x 10'r? + 91200.4V/r) τ + 33.8415 + 117482.τ 2+ V7 + 8.49155/7 V7 + 8.49155+ Vr + 33.8415 √ τ + 8.49155+ + 33.8415 + 14376.6 < √ −91003τ 3/2 + 4.36814 × 106τ 5/2 + 32174.4τ + 1.25 + 33.8415
79 — 4.84613 x 10%r3/? + 8.01543 x 1077>/? — 8.34635 x 106r7/?— 1.13691 x 107? — 1.44725 x 108774 594875.r + 712078.\/7 + 14376.6 < 14376.673/2 0.8/2 8.01543 x 1077°/? — 8.34635 x 10°r7/2— 594875./Tr | 712078.7./7 vos 0.8 — 3.1311 - 10°r?/? — 1.44725 - 1087? + 8.01543 - 1077°/? — 1.13691 - 10773 8.34635 - 10°77/? < 3.1311 x 10%78/2 4 8.01543 x < 1.2575/? 8.34635 x 10°r7/? — 1.13691 x 1077? — 1.44725 x 108+? = — 3.1311 x 10°r9/? — 8.34635 x 10°r7/? — 1.13691 x 10773 — 5.51094 x 10’772 < 0. — 4.84613 x 10%r3/24 1.13691 x 1077? — 1.44725 x 10°7? 4
First we expanded the term (multiplied it out). Then we put the terms that are multiplied by the same square root into brackets. The next inequality sign stems from inserting the maximal value of 1.25 for τ into some positive terms and the value of 0.8 into negative terms. These terms are then expanded at the =-sign. The next equality factors the terms under the square root. We decreased the negative terms by replacing τ + 0.000115428 with τ under the root. We increased the positive terms by replacing τ + 0.000115428 and τ + 0.000115004 with 1.00015τ under the root. The positive terms are increased, since (0.8 + 0.000115428)/0.8 < 1.00015, thus τ + 0.000115004 < τ + 0.000115428 < 1.00015τ. For the next inequality we decreased the negative terms by inserting τ = 0.8 and increased the positive terms by inserting τ = 1.25. The next equality expands the terms. We use the upper bound of 1.25 and the lower bound of 0.8 to obtain terms with the corresponding exponents of τ. Consequently, the derivative of

τ ( e^{(µω+ντ)²/(2ντ)} erfc( (µω+ντ)/(√2 √(ντ)) ) − 2 e^{(µω+2ντ)²/(2ντ)} erfc( (µω+2ντ)/(√2 √(ντ)) ) )   (313)

with respect to τ is smaller than zero for maximal ν = 0.16.
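This claim can be spot-checked numerically. The following is a minimal sketch (not part of the proof; function names are illustrative) that evaluates expression (313) for ν = 0.16 and µω = 0.01 and confirms a negative slope on 0.8 ≤ τ ≤ 1.25:

```python
# Sanity check: expression (313) should be strictly decreasing in tau on [0.8, 1.25]
# for nu = 0.16 and mu*omega = 0.01 (values taken from the text above).
from math import erfc, exp, sqrt

def expr_313(tau, nu=0.16, mw=0.01):
    """tau * (e^{z1^2} erfc(z1) - 2 e^{z2^2} erfc(z2)) with z1, z2 as in (313)."""
    z1 = (mw + nu * tau) / (sqrt(2.0) * sqrt(nu * tau))
    z2 = (mw + 2.0 * nu * tau) / (sqrt(2.0) * sqrt(nu * tau))
    return tau * (exp(z1 ** 2) * erfc(z1) - 2.0 * exp(z2 ** 2) * erfc(z2))

h = 1e-6
taus = [0.8 + i * (1.25 - 0.8) / 400 for i in range(401)]
slopes = [(expr_313(t + h) - expr_313(t - h)) / (2.0 * h) for t in taus]
assert max(slopes) < 0.0  # the central-difference slope stays negative on the grid
print(min(slopes), max(slopes))
```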
Next, we consider the function for the largest ν = 0.24 and the largest y = µω = 0.01 for determining the derivative with respect to τ. However, we assume 0.9 ≤ τ in order to restrict the domain of τ.

The expression becomes

τ ( e^{(24τ+1)²/(4800τ)} erfc( (24τ+1)/(40√3 √τ) ) − 2 e^{(48τ+1)²/(4800τ)} erfc( (48τ+1)/(40√3 √τ) ) ) .   (314)

The derivative with respect to τ is

( √π e^{(24τ+1)²/(4800τ)} (192τ(3τ+25) − 1) erfc( (24τ+1)/(40√3 √τ) ) − 2 √π e^{(48τ+1)²/(4800τ)} (192τ(12τ+25) − 1) erfc( (48τ+1)/(40√3 √τ) ) + 40√3 (72τ−1) √τ ) (4800 √π τ)^{−1} .   (315)
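As a cross-check of the reconstructed formulas (314) and (315) above, the sketch below compares the closed form (315) with a central-difference derivative of (314); the values ν = 0.24 and µω = 0.01 are taken from the text, and all names are illustrative:

```python
# Cross-check: (315) should agree with a numerical derivative of (314) and be
# negative on the stated domain 0.9 <= tau <= 1.25.
from math import erfc, exp, pi, sqrt

def expr_314(tau):
    z1 = (24.0 * tau + 1.0) / (40.0 * sqrt(3.0) * sqrt(tau))
    z2 = (48.0 * tau + 1.0) / (40.0 * sqrt(3.0) * sqrt(tau))
    return tau * (exp(z1 ** 2) * erfc(z1) - 2.0 * exp(z2 ** 2) * erfc(z2))

def deriv_315(tau):
    z1 = (24.0 * tau + 1.0) / (40.0 * sqrt(3.0) * sqrt(tau))
    z2 = (48.0 * tau + 1.0) / (40.0 * sqrt(3.0) * sqrt(tau))
    num = (sqrt(pi) * exp(z1 ** 2) * (192.0 * tau * (3.0 * tau + 25.0) - 1.0) * erfc(z1)
           - 2.0 * sqrt(pi) * exp(z2 ** 2) * (192.0 * tau * (12.0 * tau + 25.0) - 1.0) * erfc(z2)
           + 40.0 * sqrt(3.0) * (72.0 * tau - 1.0) * sqrt(tau))
    return num / (4800.0 * sqrt(pi) * tau)

h = 1e-6
for tau in (0.9, 1.0, 1.1, 1.25):
    fd = (expr_314(tau + h) - expr_314(tau - h)) / (2.0 * h)
    assert abs(fd - deriv_315(tau)) < 1e-6 and deriv_315(tau) < 0.0
```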
(48741)? 487 +1 Qe~ a0" (1927 (127 + 25) — 1) erfc + 40V3(727 — 1 < e (1927 (127 5) ) erfc (e5)) (727 VT < 2.911(1927(37 + 25) — 1) Va 2 Ve(2.911—-1)(247+1) , 24741 f 2 10V3V7 ' n (204) + 2.911 2 2.911(1927 (127 + 25) — 1) 2 Vi(2.911-1)(487+1) | m( 487+1 ) + 2.9112 40V3/7 ' 40V3/T 40V3(727 — 1) V7 +32 = Vi ( (1927(37 + 25) — 1) (40V32.911/7) Va(2.911 — 1)(247 +1) + \ (4ov32.911v7)" + (247 +1)? 2(1927 (127 + 25) — 1) (40V32.911\/7) ; Va(2.911 = 1)(487 +1) + Vdove2.911 7)" + (487 +1)? 40V3(727 — 1) V7 +32 = ((avace: —Dyrt 32) (vacon — 1)(247 +1) +
+ (487 +1)? 40V3(727 — 1) V7 +32 = ((avace: —Dyrt 32) (vacon — 1)(247 +1) + y (sovazonye)’ 4 (247 + 0) 0) + 2.911 - 40V3.V7(1927 (37 + 25) — 1)/7 (vavon —1)(487 +1) + (sovia olivz) + 1(487 + 0) - 2/740/32.911(1927(127 + 25) — 1) Vr (vaeon —1)(247 +1) + | (ova. ouivz)” + (247 + 1)? )) (vavon ~ 1)(487 + 1) + (sova2 giyr) + n(48r 4 ((vaemn —1)(247 +1) + y (aovaz.onye)’ + (247 + 0) -1 (vaeon ~ 1)(487 + 1) + | (sovan olivz) + (487 +1) ))
(veem ~1)(48r +1) + y (aovaz.on1yr)’ + (487 + ») - 2/740V/32.911(1927 (127 + 25) — 1) /7 (veem ~1)(24r +1) + y (ovaz.snve)’ + (247 + 0) = — 3.42607 x 10° \/m(247 + 1)? + 40674.8773/7+ 2880V3\/ (247 + 1)? + 40674.87 \/7(487 + 1)? + 40674.8779/2 4 1.72711 x 10° \/n(48r + 1)? + 40674.8779/? — 5.81185 x 10°r3/? — 836198,/7(247 + 1)? + 40674.877°/? + 6114107 (48r + 1)? + 40674.877°/?— 1.67707 x 10°7°/? — 3.44998 x 10777/? + 422935.7? + 5202.68 /7 (247 + 1)? + 40674.877-+ 2601.34/7 (487 + 1)? + 40674.877 + 26433.47 + 415.94\/7 + 480.268,/7 \/m(247 + 1)? + 40674.87 +
+ 1)? + 40674.877 + 26433.47 + 415.94\/7 + 480.268,/7 \/m(247 + 1)? + 40674.87 + 108.389 /7(247 + 1)? + 40674.87 — 592.138 V7 /2(487 + 1)? + 40674.87— 40V3/7 (247 + 1)? + 40674.87 V7 (487 + 1)? + 40674.87 + 32/7 (247 + 1)? + 40674.87 V/7(487 + 1)? + 40674.87 + 108.389 \/7(48r + 1)? + 40674.87 + 367.131 = — 5.81185 x 10°r3/? — 1.67707 x 10°r°/? — 3.44998 x 1077/24 —3.42607 x 10°7*/? — 8361987°/? + 5202.687 + 480.268/7 + 108.389) m (247 + 1)? + 40674.87+ 1.72711 x 10°r/? + 6114107°/? + 2601.347 — 592.138/7 + 108.389) (487 + 1)? + 40674.87-+ 2880V3r3/? — 40V3.V7 + 32) V/n(247 + 1)? + 40674.87
+ 1)? + 40674.87-+ 2880V3r3/? — 40V3.V7 + 32) V/n(247 + 1)? + 40674.87 \/7(487 + 1)? + 40674.87+ 422935.7? + 26433.47 + 415.94\/7 + 367.131 < — 5.81185 x 1073/2 — 1.67707 x 10°7°/? — 3.44998 x 1077/24 —3.42607 x 10°r/? — 8361987°/? + 480.268V1.25 + 1.255202.68 + 108.389) V1(247 + 1)? + 40674.87+ 1.72711 x 10°r3/? 4+ 6114107°/? — 592.138V0.9 + 1.252601.34 + 108.389) (487 + 1)? + 40674.87-+ 2880V37°/? — 40V3V7 + 32) Va(24r + 1)? + 40674.87 \/7 (487 + 1)? + 40674.87+ 229357? 415.94V1.25 1.2526433.4 367.131 ~
229357? + 415.94V1.25 + 1.2526433.4 + 367.131 = — 5.81185 x 10°79/? — 1.67707 x 10°r5/? — 3.44998 x 10777/24 ~ 1.25 + 1.2526433.4 + 367.131 = + 7148.69) m(247 + 1)? + 40674.87-+ + 2798.31) s/7(487 + 1)? + 40674.87+ V/n(247 + 1)? + 40674.87 \/7(487 + 1)? + 40674.87+ −3.42607 × 106τ 3/2 − 836198τ 5/2 + 7148.69 1.72711 × 106τ 3/2 + 611410τ 5/2 + 2798.31 √ 3 √ √ 3τ 3/2 − 40 2880 τ + 32 422935τ 2 + 33874 = 82
— 5.81185 x 1073/2 — 1.67707 x 10°7°/? — 3.44998 x 1077/24 1.72711 x 10°73/? + 6114107°/? + 2798.31) 4/2304x(7 + 5.66103) (7 + 0.0000766694)+ ~3.42607 x 10°r3/? — 8361987°/? + 7148.69) V/576n(7 + 22.561)(7 + 0.0000769518)+ 2880V3r3/? — 40V3.V7 + 32) 23041 (r + 5.66103)(7 + 0.0000766694) /576n(7 + 22.561)(7 + 0.0000769518)+ 229357? + 33874 < — 5.8118510°r?/? — 1.67707 x 10°r°/? — 3.44998 x 1077/24 1.72711 x 10°73/? + 6114107°/? + 2798.31) 923041 1.0001 (7 + 5.66103)7+ ~ 2880V37°/? — 40V3V7 + 32) ¥/230411.0001(7 + 5.66103)7 /57671.0001(7 + 22.561)r+ ~3.42607 x
+ 32) ¥/230411.0001(7 + 5.66103)7 /57671.0001(7 + 22.561)r+ ~3.42607 x 10°r3/? — 8361987°/? + 7148.69) 576m(7 + 22.561)r+ 4229357? + 33874. = — 5.8118510°r?/? — 1.67707 x 10°r°/? — 3.44998 x 1077/24 2 a 0764.79/2 + 1.8055 x 1079/2 4 115823.7) V7 + 5.661037 + 22.561 + 422935.774+ 5.20199 x 10’? + 1.46946 x 10°r? + 238086./7) V7 + 5.66103-+ —3.55709 x 10’ — 1.45741 x 1087? + 304097../r) Vr + 22.561 + 33874. < V1.25 + 5.06103 1.25 + 22.561 (—250764.7° + 1.8055 x 1075/2 4 115823.) + V1.25 + 5.66103 (5.20199 x 1077? + 1.46946 x 10°77 + 238086../7) +
115823.) + V1.25 + 5.66103 (5.20199 x 1077? + 1.46946 x 10°77 + 238086../7) + V0.9 + 22.561 (—3.55709 x 10°r? — 1.45741 x 10°r? + 304097./7) — 5.8118510°r?/? — 1.67707 x 10°r°/? — 3.44998 x 10777/? + 422935.7? + 33874. < 33874.73/? 0.93/2 3.5539 x 10773 — 3.19193 x — 9.02866 x 10°7/? + 2.29933 x 10°r°/? — 3.44998 x 10777/2— 082 4 1.48578 x 10°./r7 ; 2.09884 x L08rV/7 V0.9 0.9 — 5.09079 x 10°r3/? + 2.29933 x 10°79/?— 3.44998 x 1077/2 — 3.5539 x 1077? — 3.19193 x 1087? < 2.29933 x 108./1.2575/? JT 3.5539 x 1077? — 3.19193 x 1087? = — 5.09079 x
First we expanded the term (multiplied it out). Then we put the terms that are multiplied by the same square root into brackets. The next inequality sign stems from inserting the maximal value of 1.25 for τ into some positive terms and the value of 0.9 into negative terms. These terms are then expanded at the =-sign. The next equality factors the terms under the square root. We decreased the negative terms by replacing τ + 0.0000769518 with τ under the root. We increased the positive terms by replacing τ + 0.0000769518 and τ + 0.0000766694 with 1.0000962τ under the root. The positive terms are increased, since (0.8 + 0.0000769518)/0.8 < 1.0000962, thus τ + 0.0000766694 < τ + 0.0000769518 < 1.0000962τ. For the next inequality we decreased the negative terms by inserting τ = 0.9 and increased the positive terms by inserting τ = 1.25. The next equality expands the terms. We use the upper bound of 1.25 and the lower bound of 0.9 to obtain terms with the corresponding exponents of τ.
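A minimal arithmetic check of the replacement constants used above (constants taken from the text; using 0.8 in the denominator mirrors the analogous step for ν = 0.16):

```python
# Check that tau + eps can indeed be replaced by 1.0000962 * tau for tau >= 0.8.
eps_neg, eps_pos, c = 0.0000769518, 0.0000766694, 1.0000962
assert (0.8 + eps_neg) / 0.8 < c
for tau in (0.9, 1.0, 1.25):
    assert tau <= tau + eps_pos <= tau + eps_neg <= c * tau
```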
Consequently, the derivative of

τ ( e^{(µω+ντ)²/(2ντ)} erfc( (µω+ντ)/(√2 √(ντ)) ) − 2 e^{(µω+2ντ)²/(2ντ)} erfc( (µω+2ντ)/(√2 √(ντ)) ) )   (318)

with respect to τ is smaller than zero for maximal ν = 0.24 and the domain 0.9 ≤ τ ≤ 1.25.

Lemma 47. In the domain −0.01 ≤ y ≤ 0.01 and 0.64 ≤ x ≤ 1.875, the function f(x, y) = e^{x/2+y} erfc( (x+y)/(√2 √x) ) has a global maximum at x = 0.64, y = −0.01 and a global minimum at x = 1.875, y = 0.01.

Proof. f(x, y) is strictly monotonically decreasing in x, since its derivative with respect to x is negative:
∂f(x, y)/∂x = e^{−y²/(2x)} ( √π x^{3/2} e^{(x+y)²/(2x)} erfc( (x+y)/(√2 √x) ) + √2 (y − x) ) / ( 2 √π x^{3/2} ) < 0
⟺ √π x^{3/2} e^{(x+y)²/(2x)} erfc( (x+y)/(√2 √x) ) + √2 (y − x) < 0 .

Applying the Abramowitz bound e^{z²} erfc(z) ≤ 2 / ( √π ( z + √(z² + 4/π) ) ) with z = (x+y)/(√2 √x) gives

√π x^{3/2} e^{(x+y)²/(2x)} erfc( (x+y)/(√2 √x) ) + √2 (y − x)
≤ 2 x^{3/2} / ( (x+y)/(√2 √x) + √( (x+y)²/(2x) + 4/π ) ) + y √2 − x √2
≤ 2 · 0.64^{3/2} / ( (0.01+0.64)/(√2 √0.64) + √( (0.01+0.64)²/(2·0.64) + 4/π ) ) + 0.01 √2 − 0.64 √2 = −0.334658 < 0 .   (319)

The two last inequalities come from applying the Abramowitz bounds [22] and from the fact that the expression 2 x^{3/2} / ( (x+y)/(√2 √x) + √( (x+y)²/(2x) + 4/π ) ) + y √2 − x √2 does not change monotonicity in the domain and hence the maximum must be found at the border. For x = 0.64, which maximizes the function, f(x, y) is monotonic in y, because its derivative w.r.t. y at x = 0.64 is
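The arithmetic in (319) and the location of the extrema can be reproduced numerically. The sketch below assumes the reconstructed form f(x, y) = e^{x/2+y} erfc( (x+y)/(√2 √x) ) used above; it is a sanity check, not part of the proof, and all names are illustrative:

```python
# Reproduce the boundary evaluation in (319) and grid-scan the domain of Lemma 47.
from math import erfc, exp, pi, sqrt

def f(x, y):
    return exp(x / 2.0 + y) * erfc((x + y) / (sqrt(2.0) * sqrt(x)))

# Boundary value from (319): Abramowitz upper bound evaluated at x = 0.64, y = 0.01.
x, y = 0.64, 0.01
z = (x + y) / (sqrt(2.0) * sqrt(x))
bound = 2.0 * x ** 1.5 / (z + sqrt(z * z + 4.0 / pi)) + sqrt(2.0) * (y - x)
assert abs(bound - (-0.334658)) < 1e-5 and bound < 0.0

# Grid scan over 0.64 <= x <= 1.875 and -0.01 <= y <= 0.01: the maximum should sit
# at the corner (0.64, -0.01) and the minimum at (1.875, 0.01).
xs = [0.64 + i * (1.875 - 0.64) / 200 for i in range(201)]
ys = [-0.01 + j * 0.02 / 200 for j in range(201)]
vals = [(f(xs[i], ys[j]), i, j) for i in range(201) for j in range(201)]
_, i_max, j_max = max(vals)
_, i_min, j_min = min(vals)
assert (i_max, j_max) == (0, 0) and (i_min, j_min) == (200, 200)
```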