Self-Normalizing Neural Networks
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
Advances in Neural Information Processing Systems 30 (NIPS 2017). arXiv:1706.02515 (cs.LG, stat.ML). Implementations are available at: github.com/bioinf-jku/SNNs.

Abstract: Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore, cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation functions of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows us to (1) train deep networks with many layers, (2) employ strong regularization, and (3) make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance; thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, (b) drug discovery benchmarks, and (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods on the 121 UCI tasks, outperformed all competing methods on the Tox21 dataset, and set a new record on an astronomy data set. The winning SNN architectures are often very deep.
The derivative $\frac{\partial}{\partial \mu} \tilde{\mu}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ has the sign of $\omega$.
The derivative $\frac{\partial}{\partial \nu} \tilde{\mu}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ is positive.
The derivative $\frac{\partial}{\partial \mu} \tilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ has the sign of $\omega$.
The derivative $\frac{\partial}{\partial \nu} \tilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ is positive.

Proof.
$\frac{\partial}{\partial \mu} \tilde{\mu}(\mu, \omega, \nu, \tau, \lambda, \alpha)$: The derivative is
$$\frac{\partial}{\partial \mu} \tilde{\mu}(\mu, \omega, \nu, \tau, \lambda, \alpha) \ = \ \frac{1}{2} \lambda \omega \left( 2 - \operatorname{erfc}\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right) + \alpha e^{\mu\omega + \frac{\nu\tau}{2}} \operatorname{erfc}\left(\frac{\mu\omega + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right) .$$
We have $(2 - \operatorname{erfc}(x)) > 0$ according to Lemma 21 and $e^{x^2} \operatorname{erfc}(x) > 0$ according to Lemma 23. Consequently, $\frac{\partial}{\partial \mu} \tilde{\mu}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ has the sign of $\omega$.
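As a side check (not part of the original proof), the short sketch below compares a central finite difference of $\tilde{\mu}$ with the closed-form derivative reconstructed above and confirms that its sign equals the sign of $\omega$. The constants `lam` and `alpha` are the fixed-point values $\lambda_{01}, \alpha_{01}$ used throughout the paper; everything else is illustrative helper code.

```python
# Numerical sanity check of d mu_tilde / d mu (illustration only).
import numpy as np
from scipy.special import erfc

lam, alpha = 1.0507009873554805, 1.6732632423543772  # lambda_01, alpha_01

def mu_tilde(mu, omega, nu, tau):
    """Mean of SELU(z) for z ~ N(mu*omega, nu*tau) (Eq. (4))."""
    mw, nt = mu * omega, nu * tau
    u = mw / np.sqrt(2.0 * nt)
    v = (mw + nt) / np.sqrt(2.0 * nt)
    return 0.5 * lam * (-(mw + alpha) * erfc(u)
                        + alpha * np.exp(mw + nt / 2.0) * erfc(v)
                        + np.sqrt(2.0 / np.pi * nt) * np.exp(-u ** 2)
                        + 2.0 * mw)

def dmu_dmu_closed(mu, omega, nu, tau):
    mw, nt = mu * omega, nu * tau
    u = mw / np.sqrt(2.0 * nt)
    v = (mw + nt) / np.sqrt(2.0 * nt)
    return 0.5 * lam * omega * (2.0 - erfc(u)
                                + alpha * np.exp(mw + nt / 2.0) * erfc(v))

rng = np.random.default_rng(0)
for _ in range(1000):
    mu, omega = rng.uniform(-0.1, 0.1, 2)
    nu, tau = rng.uniform(0.5, 1.5), rng.uniform(0.8, 1.25)
    h = 1e-6
    fd = (mu_tilde(mu + h, omega, nu, tau) - mu_tilde(mu - h, omega, nu, tau)) / (2 * h)
    cf = dmu_dmu_closed(mu, omega, nu, tau)
    assert abs(fd - cf) < 1e-6              # closed form matches finite difference
    assert np.sign(cf) == np.sign(omega)    # derivative carries the sign of omega
```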
$\frac{\partial}{\partial \nu} \tilde{\mu}(\mu, \omega, \nu, \tau, \lambda, \alpha)$: Lemma 23 says that $e^{x^2}\operatorname{erfc}(x)$ is decreasing in its argument $x = \frac{\mu\omega + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}}$. This argument is increasing in $\nu\tau$: its first summand $\frac{\sqrt{\nu\tau}}{\sqrt{2}}$ is increasing in $\nu\tau$, and its (negative) second summand $\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}$ is also increasing in $\nu\tau$, since it is proportional to minus one over the square root of $\nu\tau$. We obtain a lower bound by setting $\frac{\mu\omega + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}} = \frac{1.5 \cdot 1.25 + 0.1 \cdot 0.1}{\sqrt{2}\sqrt{1.5 \cdot 1.25}}$ for the $e^{x^2}\operatorname{erfc}(x)$ term. The term in brackets is then larger than
$$\alpha_{01}\, e^{\left(\frac{1.5 \cdot 1.25 + 0.1 \cdot 0.1}{\sqrt{2}\sqrt{1.5 \cdot 1.25}}\right)^2} \operatorname{erfc}\left(\frac{1.5 \cdot 1.25 + 0.1 \cdot 0.1}{\sqrt{2}\sqrt{1.5 \cdot 1.25}}\right) - \sqrt{\frac{2}{\pi \cdot 0.8 \cdot 0.8}}\, (\alpha_{01} - 1) \ = \ 0.056 .$$
Consequently, the function is larger than zero.
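The lower bound above can be reproduced numerically with the stable function $\operatorname{erfcx}(x) = e^{x^2}\operatorname{erfc}(x)$; the following snippet is an illustrative check, not part of the proof.

```python
# Reproduce the lower bound ~0.056 on the bracketed term of d mu_tilde / d nu.
import numpy as np
from scipy.special import erfcx

alpha01 = 1.6732632423543772
v = (1.5 * 1.25 + 0.1 * 0.1) / (np.sqrt(2.0) * np.sqrt(1.5 * 1.25))
lower_bound = alpha01 * erfcx(v) - np.sqrt(2.0 / (np.pi * 0.8 * 0.8)) * (alpha01 - 1.0)
print(lower_bound)   # approximately 0.056, i.e. larger than zero
```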
$\frac{\partial}{\partial \mu} \tilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$: We consider the sub-function
$$e^{\frac{(\mu\omega + 2\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega + 2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - e^{\frac{(\mu\omega + \nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) .$$
We set $x = \nu\tau$ and $y = \mu\omega$ and obtain
$$e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) - e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) . \qquad (212)$$
The derivative of this sub-function with respect to $y$ is
$$\frac{1}{x}\left( (2x+y)\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) - (x+y)\, e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \right) \ \geqslant \ 0 . \qquad (213)$$
The inequality follows from Lemma 24, which states that $z e^{z^2} \operatorname{erfc}(z)$ is monotonically increasing in $z$. Therefore the sub-function is increasing in $y$.
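A small numerical sketch (my own, assuming the reconstructed form of the sub-function above) that checks the claimed monotonicity on the relevant domain:

```python
# Check that the sub-function (212) is increasing in y and in x on the domain.
import numpy as np
from scipy.special import erfcx

def sub(x, y):
    s = np.sqrt(2.0) * np.sqrt(x)
    return erfcx((2.0 * x + y) / s) - erfcx((x + y) / s)

xs = np.linspace(0.64, 1.875, 200)    # x = nu*tau
ys = np.linspace(-0.01, 0.01, 50)     # y = mu*omega
X, Y = np.meshgrid(xs, ys)
F = sub(X, Y)
assert np.all(np.diff(F, axis=0) > 0)   # increasing along y
assert np.all(np.diff(F, axis=1) > 0)   # increasing along x
```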
The derivative of this sub-function with respect to $x$ is
$$\frac{1}{2\sqrt{\pi}\, x^2}\left( \sqrt{\pi}\, \alpha^2 \left( (4x^2 - y^2)\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) - (x^2 - y^2)\, e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \right) - \sqrt{2}\, x^{3/2} (\alpha^2 - 1) \right) \ \geqslant \qquad (215)$$
$$\frac{1}{2\sqrt{\pi}\, x^2}\left( 2\sqrt{2}\sqrt{x}\, \alpha^2 \left( \frac{(2x-y)(2x+y)}{2x + y + \sqrt{(2x+y)^2 + 4x}} - \frac{(x-y)(x+y)}{x + y + \sqrt{(x+y)^2 + \frac{8x}{\pi}}} \right) - \sqrt{2}\, x^{3/2} (\alpha^2 - 1) \right) \ \geqslant$$
$$\frac{1}{2\sqrt{\pi}\, x^2}\left( 2\sqrt{2}\sqrt{x}\, \alpha^2 \left( \frac{(2x-y)(2x+y)}{2(2x+y) + 1} - \frac{(x-y)(x+y)}{2(x+y) + 0.782} \right) - \sqrt{2}\, x^{3/2} (\alpha^2 - 1) \right) \ =$$
$$\frac{8x^3 + (12y + 2.68657)\, x^2 + \left(y(4y - 6.41452) - 1.40745\right) x + 1.22072\, y^2}{\left(2(2x+y)+1\right)\left(2(x+y)+0.782\right)\sqrt{2}\sqrt{\pi}\, x^{3/2}} \ \geqslant$$
$$\frac{8x^3 + (2.68657 - 12 \cdot 0.01)\, x^2 + \left(0.01 \cdot (-6.41452 - 4 \cdot 0.01) - 1.40745\right) x + 1.22072 \cdot 0.0^2}{\left(2(2x+y)+1\right)\left(2(x+y)+0.782\right)\sqrt{2}\sqrt{\pi}\, x^{3/2}} \ =$$
$$\frac{8x^2 + 2.56657\, x - 1.472}{\left(2(2x+y)+1\right)\left(2(x+y)+0.782\right)\sqrt{2}\sqrt{x}\sqrt{\pi}} \ = \ \frac{8\, (x + 0.618374)(x - 0.297583)}{\left(2(2x+y)+1\right)\left(2(x+y)+0.782\right)\sqrt{2}\sqrt{x}\sqrt{\pi}}$$
$$> \ 0 ,$$
since $x = \nu\tau \geqslant 0.8 \cdot 0.8 = 0.64 > 0.297583$. We explain this chain of inequalities:
- First inequality: We applied Lemma 22 two times.
- Equalities factor out $\sqrt{2}\sqrt{x}$ and reformulate.
- Second inequality part 1: we applied
$$0 < 2y \ \Longrightarrow \ (2x + y)^2 + 4x + 1 < (2x + y)^2 + 2(2x + y) + 1 = (2x + y + 1)^2 . \qquad (216)$$
- Second inequality part 2: we show that for $a = \frac{1}{20}\left(\sqrt{\frac{2048 + 169\pi}{\pi}} - 13\right)$ the following holds: $\frac{8x}{\pi} - \left(a^2 + 2a(x+y)\right) > 0$ (see the numerical sketch after this list). We have $\frac{\partial}{\partial x}\left(\frac{8x}{\pi} - \left(a^2 + 2a(x+y)\right)\right) = \frac{8}{\pi} - 2a > 0$ and $\frac{\partial}{\partial y}\left(\frac{8x}{\pi} - \left(a^2 + 2a(x+y)\right)\right) = -2a < 0$. Therefore the minimum is at the border for minimal $x$ and maximal $y$:
$$\frac{8 \cdot 0.64}{\pi} - \left(\frac{1}{20}\left(\sqrt{\frac{2048 + 169\pi}{\pi}} - 13\right)\right)^2 - 2 \cdot \frac{1}{20}\left(\sqrt{\frac{2048 + 169\pi}{\pi}} - 13\right)(0.64 + 0.01) \ = \ 0 . \qquad (217)$$
Thus
$$\frac{8x}{\pi} \ > \ a^2 + 2a(x+y) \qquad (218)$$
for $a = \frac{1}{20}\left(\sqrt{\frac{2048 + 169\pi}{\pi}} - 13\right) > 0.782$.
- Equalities only solve the square root and factor out the resulting terms $(2(2x+y)+1)$ and $(2(x+y)+0.782)$.
- We set $\alpha = \alpha_{01}$ and multiplied out. Thereafter we also factored out $x$ in the numerator. Finally a quadratic equation was solved.
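The constant $a$ and the final quadratic factor can be checked numerically; the snippet below is an illustrative verification of the part-2 bullet and of the roots $-0.618374$ and $0.297583$, not part of the proof.

```python
# Check a > 0.782, the border case of 8x/pi - (a^2 + 2a(x+y)), and the quadratic roots.
import numpy as np

a = (np.sqrt((2048.0 + 169.0 * np.pi) / np.pi) - 13.0) / 20.0
print(a)                                              # ~0.7826 > 0.782

border = 8.0 * 0.64 / np.pi - (a ** 2 + 2.0 * a * (0.64 + 0.01))
print(border)                                         # ~0, the minimum over the domain

roots = np.roots([8.0, 2.56657, -1.472])
print(roots)                                          # ~ -0.6184 and 0.2976; the positive
                                                      # root is below 0.64 = min(x)
```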
The sub-function attains its minimal value for minimal $x = \nu\tau = 0.8 \cdot 0.8 = 0.64$ and minimal $y = \mu\omega = -0.1 \cdot 0.1 = -0.01$. We further minimize the function
$$\mu\omega\, e^{\frac{\mu^2\omega^2}{2\nu\tau}} \left(2 - \operatorname{erfc}\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right)\right) \ \geqslant \ -0.01\, e^{\frac{0.01^2}{2 \cdot 0.64}} \left(2 - \operatorname{erfc}\left(\frac{0.01}{\sqrt{2}\sqrt{0.64}}\right)\right) . \qquad (219)$$
We compute the minimum of the term in brackets of $\frac{\partial}{\partial \mu} \tilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$:
$$\mu\omega\, e^{\frac{\mu^2\omega^2}{2\nu\tau}} \left(2 - \operatorname{erfc}\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right)\right) + \sqrt{\frac{2}{\pi}}\sqrt{\nu\tau} + \alpha_{01}^2 \left( e^{\frac{(\mu\omega + 2\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega + 2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - e^{\frac{(\mu\omega + \nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right) \ \geqslant \qquad (220)$$
$$-0.01\, e^{\frac{0.01^2}{2 \cdot 0.64}} \left(2 - \operatorname{erfc}\left(\frac{0.01}{\sqrt{2}\sqrt{0.64}}\right)\right) + \sqrt{\frac{2}{\pi}}\sqrt{0.64} + \alpha_{01}^2 \left( e^{\frac{(2 \cdot 0.64 - 0.01)^2}{2 \cdot 0.64}} \operatorname{erfc}\left(\frac{2 \cdot 0.64 - 0.01}{\sqrt{2}\sqrt{0.64}}\right) - e^{\frac{(0.64 - 0.01)^2}{2 \cdot 0.64}} \operatorname{erfc}\left(\frac{0.64 - 0.01}{\sqrt{2}\sqrt{0.64}}\right) \right)$$
$$= \ 0.0923765 \ > \ 0 .$$
Consequently, the term in brackets is larger than zero, and $\frac{\partial}{\partial \mu} \tilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ has the sign of $\omega$.
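For reference, the minimum value $0.0923765$ can be reproduced approximately with the following sketch (my own, assuming the bracketed term reconstructed above).

```python
# Evaluate the bracketed term of d xi_tilde / d mu at mu*omega = -0.01, nu*tau = 0.64.
import numpy as np
from scipy.special import erfc, erfcx

alpha01 = 1.6732632423543772
mw, nt = -0.01, 0.64
s = np.sqrt(2.0) * np.sqrt(nt)
u, z1, z2 = mw / s, (mw + nt) / s, (mw + 2.0 * nt) / s
bracket = (mw * np.exp(u ** 2) * (2.0 - erfc(u))
           + np.sqrt(2.0 / np.pi) * np.sqrt(nt)
           + alpha01 ** 2 * (erfcx(z2) - erfcx(z1)))
print(bracket)   # ~0.092, i.e. larger than zero
```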
$\frac{\partial}{\partial \nu} \tilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$: We obtain a chain of inequalities:
$$2\, e^{\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)^2} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) - e^{\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)^2} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \ \geqslant \qquad (222)$$
$$\frac{2}{\sqrt{\pi}} \left( \frac{2}{\frac{2x+y}{\sqrt{2}\sqrt{x}} + \sqrt{\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)^2 + 2}} - \frac{1}{\frac{x+y}{\sqrt{2}\sqrt{x}} + \sqrt{\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)^2 + \frac{4}{\pi}}} \right) \ =$$
$$\frac{2\sqrt{2}\sqrt{x}}{\sqrt{\pi}} \left( \frac{2}{2x + y + \sqrt{(2x+y)^2 + 4x}} - \frac{1}{x + y + \sqrt{(x+y)^2 + \frac{8x}{\pi}}} \right) \ \geqslant$$
$$\frac{2\sqrt{2}\sqrt{x}}{\sqrt{\pi}} \left( \frac{2}{2(2x+y) + 1} - \frac{1}{2(x+y) + 0.782} \right) \ = \ \frac{\left(2\sqrt{2}\sqrt{x}\right)\left(2\left(2(x+y) + 0.782\right) - \left(2(2x+y) + 1\right)\right)}{\sqrt{\pi}\left(2(x+y) + 0.782\right)\left(2(2x+y) + 1\right)} \ =$$
$$\frac{\left(2\sqrt{2}\sqrt{x}\right)\left(2y + 0.782 \cdot 2 - 1\right)}{\sqrt{\pi}\left(2(x+y) + 0.782\right)\left(2(2x+y) + 1\right)} \ > \ 0 .$$
We explain this chain of inequalities:
- First inequality: We applied Lemma 22 two times.
- Equalities factor out $\sqrt{2}\sqrt{x}$ and reformulate.
- Second inequality part 1: we applied
$$0 < 2y \ \Longrightarrow \ (2x + y)^2 + 4x + 1 < (2x + y)^2 + 2(2x + y) + 1 = (2x + y + 1)^2 . \qquad (223)$$
- Second inequality part 2: we show that for $a = \frac{1}{20}\left(\sqrt{\frac{2048 + 169\pi}{\pi}} - 13\right)$ the following holds: $\frac{8x}{\pi} - \left(a^2 + 2a(x+y)\right) > 0$. We have $\frac{\partial}{\partial x}\left(\frac{8x}{\pi} - \left(a^2 + 2a(x+y)\right)\right) = \frac{8}{\pi} - 2a > 0$ and $\frac{\partial}{\partial y}\left(\frac{8x}{\pi} - \left(a^2 + 2a(x+y)\right)\right) = -2a < 0$. Therefore the minimum is at the border for minimal $x$ and maximal $y$:
$$\frac{8 \cdot 0.64}{\pi} - \left(\frac{1}{20}\left(\sqrt{\frac{2048 + 169\pi}{\pi}} - 13\right)\right)^2 - 2 \cdot \frac{1}{20}\left(\sqrt{\frac{2048 + 169\pi}{\pi}} - 13\right)(0.64 + 0.01) \ = \ 0 . \qquad (224)$$
Thus
$$\frac{8x}{\pi} \ > \ a^2 + 2a(x+y) \qquad (225)$$
for $a = \frac{1}{20}\left(\sqrt{\frac{2048 + 169\pi}{\pi}} - 13\right) > 0.782$.
- Equalities only solve the square root and factor out the resulting terms $(2(2x+y)+1)$ and $(2(x+y)+0.782)$.

We know that $(2 - \operatorname{erfc}(x)) > 0$ according to Lemma 21. For the sub-term we derived
$$2\, e^{\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)^2} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) - e^{\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)^2} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \ > \ 0 . \qquad (226)$$
Consequently, both terms in the brackets of $\frac{\partial}{\partial \nu} \tilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ are larger than zero. Therefore $\frac{\partial}{\partial \nu} \tilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ is larger than zero.

Lemma 41 (Mean at low variance). The mapping of the mean $\tilde{\mu}$ (Eq. (4)),
$$\tilde{\mu}(\mu, \omega, \nu, \tau, \lambda, \alpha) \ = \ \frac{1}{2} \lambda \left( -(\mu\omega + \alpha)\operatorname{erfc}\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right) + \alpha\, e^{\mu\omega + \frac{\nu\tau}{2}} \operatorname{erfc}\left(\frac{\mu\omega + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) + \sqrt{\frac{2}{\pi}}\sqrt{\nu\tau}\, e^{-\frac{\mu^2\omega^2}{2\nu\tau}} + 2\mu\omega \right) , \qquad (227)$$
in the domain $-0.1 \leqslant \mu \leqslant 0.1$, $-0.1 \leqslant \omega \leqslant 0.1$, and $0.02 \leqslant \nu\tau \leqslant 0.5$, is bounded by
$$|\tilde{\mu}(\mu, \omega, \nu, \tau, \lambda_{01}, \alpha_{01})| \ < \ 0.289324 \qquad (228)$$
and
$$\lim_{\nu \rightarrow 0} |\tilde{\mu}(\mu, \omega, \nu, \tau, \lambda_{01}, \alpha_{01})| \ = \ \lambda_{01} |\mu\omega| . \qquad (229)$$
We can consider $\tilde{\mu}$ with given $\mu\omega$ as a function in $x = \nu\tau$. We show the graph of this function at the maximal $\mu\omega = 0.01$ in the interval $x \in [0, 1]$ in Figure A6.

Proof. Since $\tilde{\mu}$ is strictly monotonically increasing with $\mu\omega$,
$$\tilde{\mu}(\mu, \omega, \nu, \tau, \lambda, \alpha) \ \leqslant \ \tilde{\mu}(0.1, 0.1, \nu, \tau, \lambda, \alpha)$$
$$= \ \frac{1}{2} \lambda_{01} \left( -(\alpha_{01} + 0.01)\operatorname{erfc}\left(\frac{0.01}{\sqrt{2}\sqrt{\nu\tau}}\right) + \alpha_{01}\, e^{0.01 + \frac{\nu\tau}{2}} \operatorname{erfc}\left(\frac{0.01 + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) + \sqrt{\frac{2}{\pi}}\sqrt{\nu\tau}\, e^{-\frac{0.01^2}{2\nu\tau}} + 2 \cdot 0.01 \right) \ < \ 0.21857 , \qquad (230)$$
where we have used the monotonicity of the terms in $\nu\tau$.
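As an illustrative cross-check of the two bounds (again not the paper's argument), one can sweep $\mu\omega$ and $\nu\tau$ over the stated domain; $\tilde{\mu}$ depends on $\mu$ and $\omega$ only through the product $\mu\omega$.

```python
# Grid check of Lemma 41: mu_tilde stays inside (-0.289324, 0.21857) on the domain.
import numpy as np
from scipy.special import erfc

lam01, alpha01 = 1.0507009873554805, 1.6732632423543772

def mu_tilde(mw, nt):
    u = mw / np.sqrt(2.0 * nt)
    v = (mw + nt) / np.sqrt(2.0 * nt)
    return 0.5 * lam01 * (-(mw + alpha01) * erfc(u)
                          + alpha01 * np.exp(mw + nt / 2.0) * erfc(v)
                          + np.sqrt(2.0 / np.pi * nt) * np.exp(-u ** 2)
                          + 2.0 * mw)

MW, NT = np.meshgrid(np.linspace(-0.01, 0.01, 201), np.linspace(0.02, 0.5, 401))
vals = mu_tilde(MW, NT)
print(vals.max(), vals.min())
assert vals.max() < 0.21857 and vals.min() > -0.289324
```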
Figure A6: The graph of the function $\tilde{\mu}$ for low variances $x = \nu\tau$ at $\mu\omega = 0.01$, where $x \in [0, 3]$, is displayed in yellow. Lower and upper bounds based on the Abramowitz bounds (Lemma 22) are displayed in green and blue, respectively.

Similarly, we can use the monotonicity of the terms in $\nu\tau$ to show that
$$\tilde{\mu}(\mu, \omega, \nu, \tau, \lambda, \alpha) \ \geqslant \ \tilde{\mu}(0.1, -0.1, \nu, \tau, \lambda, \alpha) \ \geqslant \ -0.289324 , \qquad (231)$$
such that $|\tilde{\mu}| < 0.289324$ at low variances.

Furthermore, when $\nu\tau \rightarrow 0$, the arguments of the complementary error functions and of the exponential function go to infinity, therefore these three terms converge to zero. Hence, the only remaining term is $2\mu\omega$ and the limit is $\frac{1}{2}\lambda_{01} \cdot 2\mu\omega = \lambda_{01}\mu\omega$.
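The limit can be illustrated numerically; the following sketch (mine) evaluates $\tilde{\mu}$ at $\mu\omega = 0.01$ for shrinking $\nu\tau$ and compares it with $\lambda_{01}\mu\omega$.

```python
# Illustrate the low-variance limit of mu_tilde (Eq. (229)).
import numpy as np
from scipy.special import erfc

lam01, alpha01 = 1.0507009873554805, 1.6732632423543772

def mu_tilde(mw, nt):
    u = mw / np.sqrt(2.0 * nt)
    v = (mw + nt) / np.sqrt(2.0 * nt)
    return 0.5 * lam01 * (-(mw + alpha01) * erfc(u)
                          + alpha01 * np.exp(mw + nt / 2.0) * erfc(v)
                          + np.sqrt(2.0 / np.pi * nt) * np.exp(-u ** 2)
                          + 2.0 * mw)

for nt in [1e-3, 1e-5, 1e-7]:
    print(nt, mu_tilde(0.01, nt), lam01 * 0.01)   # converges to lambda_01 * mu * omega
```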
Lemma 42 (Bounds on derivatives of $\tilde{\mu}$ in $\Omega^-$). The derivatives of the function $\tilde{\mu}(\mu, \omega, \nu, \tau, \lambda_{01}, \alpha_{01})$ (Eq. (4)) with respect to $\mu, \omega, \nu, \tau$ in the domain $\Omega^- = \{\mu, \omega, \nu, \tau \ | \ -0.1 \leqslant \mu \leqslant 0.1,\ -0.1 \leqslant \omega \leqslant 0.1,\ 0.05 \leqslant \nu \leqslant 0.24,\ 0.8 \leqslant \tau \leqslant 1.25\}$ can be bounded as follows:
$$\left|\frac{\partial}{\partial \mu} \tilde{\mu}\right| \leqslant 0.14 , \quad \left|\frac{\partial}{\partial \omega} \tilde{\mu}\right| \leqslant 0.14 , \quad \left|\frac{\partial}{\partial \nu} \tilde{\mu}\right| \leqslant 0.52 , \quad \left|\frac{\partial}{\partial \tau} \tilde{\mu}\right| \leqslant 0.11 . \qquad (232)$$

Proof. The expression
$$\frac{\partial}{\partial \mu} \tilde{\mu} \ = \ \mathcal{J}_{11} \ = \ \frac{1}{2} \lambda \omega\, e^{-\frac{(\mu\omega)^2}{2\nu\tau}} \left( 2\, e^{\frac{(\mu\omega)^2}{2\nu\tau}} - e^{\frac{(\mu\omega)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right) + \alpha\, e^{\frac{(\mu\omega + \nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right) \qquad (233)$$
contains terms of the form $e^{x^2}\operatorname{erfc}(x)$, which are monotonically decreasing in their arguments (Lemma 23). We can therefore obtain their minima and maxima at the maximal and minimal arguments. Since the first of these terms enters the expression with a negative sign, both terms reach their maximal contribution at $\mu\omega = -0.01$, $\nu = 0.05$, and $\tau = 0.8$:
$$\left|\frac{\partial}{\partial \mu} \tilde{\mu}\right| \ \leqslant \ \frac{1}{2} \lambda_{01} \cdot 0.1 \cdot \left( 2 - e^{\frac{(\mu\omega)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right) + \alpha_{01}\, e^{\frac{(\mu\omega + \nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(0.106066\right) \right)\Bigg|_{\mu\omega = -0.01,\ \nu\tau = 0.04} \ < \ 0.133 . \qquad (234)$$
Since $\tilde{\mu}$ is symmetric in $\mu$ and $\omega$, these bounds also hold for the derivative with respect to $\omega$.

Figure A7: The graph of the function $h(x) = \tilde{\mu}^2(0.1, -0.1, x, 1, \lambda_{01}, \alpha_{01})$ is displayed. It has a local maximum at $x = \nu\tau \approx 0.187342$ with $h(x) \approx 0.00451457$ in the domain $x \in [0, 1]$.
We use the argumentation that the term with the error function is monotonically decreasing (Lemma 23) again for the expression
$$\frac{\partial}{\partial \nu} \tilde{\mu} \ = \ \mathcal{J}_{12} \ = \ \frac{1}{4} \lambda \tau\, e^{-\frac{(\mu\omega)^2}{2\nu\tau}} \left( \alpha\, e^{\frac{(\mu\omega + \nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - \sqrt{\frac{2}{\pi}} \frac{\alpha - 1}{\sqrt{\nu\tau}} \right) , \qquad (235)$$
which gives
$$\left|\frac{\partial}{\partial \nu} \tilde{\mu}\right| \ \leqslant \ \frac{1}{4} \lambda_{01} \cdot 1.25 \cdot |1.1072 - 2.68593| \ < \ 0.52 .$$
We have used that the term $\alpha_{01} e^{\frac{(\mu\omega + \nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega + \nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)$ lies between $1.1072$ and $1.49042$, and that the term $\sqrt{\frac{2}{\pi}} \frac{\alpha_{01} - 1}{\sqrt{\nu\tau}}$ lies between $0.942286$ and $2.68593$. Since $\tilde{\mu}$ is symmetric in $\nu$ and $\tau$, we only have to change the outermost factor $\left|\frac{1}{4}\lambda\tau\right|$ to $\left|\frac{1}{4}\lambda\nu\right|$ to obtain the estimate $\left|\frac{\partial}{\partial \tau} \tilde{\mu}\right| \leqslant 0.11$.
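A coarse finite-difference sweep over $\Omega^-$ (an illustrative check, not the proof) stays within the four bounds of Lemma 42.

```python
# Check the derivative bounds of Lemma 42 by central finite differences on a grid.
import numpy as np
from scipy.special import erfc

lam01, alpha01 = 1.0507009873554805, 1.6732632423543772

def mu_tilde(mu, omega, nu, tau):
    mw, nt = mu * omega, nu * tau
    u = mw / np.sqrt(2.0 * nt)
    v = (mw + nt) / np.sqrt(2.0 * nt)
    return 0.5 * lam01 * (-(mw + alpha01) * erfc(u)
                          + alpha01 * np.exp(mw + nt / 2.0) * erfc(v)
                          + np.sqrt(2.0 / np.pi * nt) * np.exp(-u ** 2)
                          + 2.0 * mw)

grid = [np.linspace(-0.1, 0.1, 9), np.linspace(-0.1, 0.1, 9),
        np.linspace(0.05, 0.24, 9), np.linspace(0.8, 1.25, 9)]
h, bounds = 1e-6, [0.14, 0.14, 0.52, 0.11]
for mu in grid[0]:
    for om in grid[1]:
        for nu in grid[2]:
            for tau in grid[3]:
                p = np.array([mu, om, nu, tau])
                for i, b in enumerate(bounds):
                    dp = np.zeros(4); dp[i] = h
                    d = (mu_tilde(*(p + dp)) - mu_tilde(*(p - dp))) / (2 * h)
                    assert abs(d) <= b    # within the bound of Lemma 42
```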
Lemma 43 (Tight bound on $\tilde{\mu}^2$ in $\Omega^-$). The function $\tilde{\mu}^2(\mu, \omega, \nu, \tau, \lambda_{01}, \alpha_{01})$ (Eq. (4)) is bounded by
$$|\tilde{\mu}^2| \ < \ 0.005 \qquad (236)$$
in the domain $\Omega^- = \{\mu, \omega, \nu, \tau \ | \ -0.1 \leqslant \mu \leqslant 0.1,\ -0.1 \leqslant \omega \leqslant 0.1,\ 0.05 \leqslant \nu \leqslant 0.24,\ 0.8 \leqslant \tau \leqslant 1.25\}$. (237)

We visualize the function $\tilde{\mu}^2$ at $\mu\omega = -0.01$, where it is maximal, as a function of $x = \nu\tau$ in the form $h(x) = \tilde{\mu}^2(0.1, -0.1, x, 1, \lambda_{01}, \alpha_{01})$ in Figure A7.

Proof. We use a similar strategy to the one we have used to show the bound on the singular value (Lemmata 10, 11, and 12), where we evaluated the function on a grid and used bounds on the derivatives together with the mean value theorem. Here we have
$$\left|\tilde{\mu}^2(\mu, \omega, \nu, \tau, \lambda_{01}, \alpha_{01}) - \tilde{\mu}^2(\mu + \Delta\mu, \omega + \Delta\omega, \nu + \Delta\nu, \tau + \Delta\tau, \lambda_{01}, \alpha_{01})\right| \ \leqslant \qquad (238)$$
$$\left|\frac{\partial}{\partial \mu} \tilde{\mu}^2\right| |\Delta\mu| + \left|\frac{\partial}{\partial \omega} \tilde{\mu}^2\right| |\Delta\omega| + \left|\frac{\partial}{\partial \nu} \tilde{\mu}^2\right| |\Delta\nu| + \left|\frac{\partial}{\partial \tau} \tilde{\mu}^2\right| |\Delta\tau| .$$
We use Lemma 42 and Lemma 41 to obtain
$$\left|\frac{\partial}{\partial \mu} \tilde{\mu}^2\right| \ = \ 2 |\tilde{\mu}| \left|\frac{\partial}{\partial \mu} \tilde{\mu}\right| \ \leqslant \ 2 \cdot 0.289324 \cdot 0.14 \ = \ 0.08101072 , \qquad (239)$$
$$\left|\frac{\partial}{\partial \omega} \tilde{\mu}^2\right| \ = \ 2 |\tilde{\mu}| \left|\frac{\partial}{\partial \omega} \tilde{\mu}\right| \ \leqslant \ 2 \cdot 0.289324 \cdot 0.14 \ = \ 0.08101072 ,$$
$$\left|\frac{\partial}{\partial \nu} \tilde{\mu}^2\right| \ = \ 2 |\tilde{\mu}| \left|\frac{\partial}{\partial \nu} \tilde{\mu}\right| \ \leqslant \ 2 \cdot 0.289324 \cdot 0.52 \ = \ 0.30089696 ,$$
$$\left|\frac{\partial}{\partial \tau} \tilde{\mu}^2\right| \ = \ 2 |\tilde{\mu}| \left|\frac{\partial}{\partial \tau} \tilde{\mu}\right| \ \leqslant \ 2 \cdot 0.289324 \cdot 0.11 \ = \ 0.06365128 .$$
We evaluated the function $\tilde{\mu}^2$ on a grid $G$ of $\Omega^-$ with $\Delta\mu = 0.001498041$, $\Delta\omega = 0.001498041$, $\Delta\nu = 0.0004033190$, and $\Delta\tau = 0.0019065994$ using a computer and obtained the maximal value $\max_G \tilde{\mu}^2 = 0.00451457$. Therefore the maximal value of $\tilde{\mu}^2$ is bounded by
\[
\max_{(\mu,\omega,\nu,\tau)\in\Omega^{-}} \tilde{\mu}^2 \;\leqslant\; 0.00451457 \;+\; 0.001498041\cdot 0.08101072 \;+\; 0.001498041\cdot 0.08101072 \;+\; 0.0004033190\cdot 0.30089696 \;+\; 0.0019065994\cdot 0.06365128 \;<\; 0.005 .
\]
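The grid evaluation can be reproduced numerically. The sketch below is not the authors' code: it assumes that \(\tilde{\mu}\) is the mean \(\mathrm{E}[\mathrm{selu}(z)]\) for \(z \sim \mathcal{N}(\mu\omega, \nu\tau)\) with the fixed-point parameters \(\lambda_{01}, \alpha_{01}\), and the domain bounds of the grid are placeholders that have to be replaced by the bounds of \(\Omega^{-}\).

```python
# A minimal sketch (not the authors' code) of the grid argument above.  It
# assumes mu_tilde is the mean E[selu(z)] for z ~ N(mu*omega, nu*tau) with the
# fixed-point parameters lambda_01, alpha_01, and it uses placeholder domain
# bounds; substitute the bounds of the domain used in the lemma.
import itertools
import numpy as np
from scipy.integrate import quad

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(z):
    return LAM * (z if z > 0 else ALPHA * (np.exp(z) - 1.0))

def mu_tilde(mu, omega, nu, tau):
    m, s = mu * omega, np.sqrt(nu * tau)
    val, _ = quad(lambda z: selu(z) * np.exp(-(z - m) ** 2 / (2 * s * s))
                  / (np.sqrt(2.0 * np.pi) * s), -np.inf, np.inf)
    return val

# coarse placeholder grid over (mu, omega, nu, tau)
grid = itertools.product(np.linspace(-0.1, 0.1, 5), np.linspace(-0.1, 0.1, 5),
                         np.linspace(0.8, 1.5, 5), np.linspace(0.95, 1.1, 5))
grid_max = max(mu_tilde(*p) ** 2 for p in grid)

# add grid spacing times the derivative bounds, as in the displayed estimate
bound = (grid_max + 0.001498041 * 0.08101072 + 0.001498041 * 0.08101072
         + 0.0004033190 * 0.30089696 + 0.0019065994 * 0.06365128)
print(grid_max, bound)  # values depend on the placeholder domain and resolution
```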
Furthermore, we used error propagation to estimate the numerical error of the function evaluation. Using the error propagation rules derived in Subsection A3.4.5, we found that the numerical error is smaller than \(10^{-13}\) in the worst case.

Lemma 44 (Main subfunction). For \(1.2 \leqslant x \leqslant 20\) and \(-0.1 \leqslant y \leqslant 0.1\), the function
\[
e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \qquad (242)
\]
is smaller than zero, is strictly monotonically increasing in \(x\), and is strictly monotonically decreasing in \(y\) for the minimal \(x = 12/10 = 1.2\).
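The three claims of Lemma 44 can be checked numerically on a coarse grid before reading the proof. The following sketch (not part of the proof; the function name subfun is ours) uses scipy.special.erfcx, i.e. \(\mathrm{erfcx}(t) = e^{t^2}\operatorname{erfc}(t)\), which evaluates the expression without overflowing the exponential factor:

```python
# Numerical sanity check of Lemma 44 (a sketch, not part of the proof).
import numpy as np
from scipy.special import erfcx

def subfun(x, y):
    s = np.sqrt(2.0 * x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

xs = np.linspace(1.2, 20.0, 400)
ys = np.linspace(-0.1, 0.1, 21)
vals = np.array([[subfun(x, y) for y in ys] for x in xs])

assert vals.max() < 0.0                          # smaller than zero
assert np.all(np.diff(vals, axis=0) > 0.0)       # increasing in x for every y
assert np.all(np.diff(vals[0]) < 0.0)            # decreasing in y at x = 1.2
```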
Proof. We first consider the derivative of sub-function Eq. (101) with respect to x. The derivative of the function
\[
e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \qquad (243)
\]
with respect to \(x\) is
\[
\frac{\sqrt{\pi}\left( e^{\frac{(x+y)^2}{2x}} (x-y)(x+y) \operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} (2x+y)(2x-y) \operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \right) \;+\; \sqrt{2}\sqrt{x}\,(3x-y)}{2\sqrt{\pi}\, x^2} \; . \qquad (244)
\]
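As a quick sanity check, the closed-form derivative (244) can be compared against a central finite difference of the sub-function; the following sketch (the function names are ours) does this at a few points of the domain:

```python
# Cross-check (sketch) of the closed-form derivative in Eq. (244) against a
# central finite difference of the sub-function.
import numpy as np
from scipy.special import erfcx

def subfun(x, y):
    s = np.sqrt(2.0 * x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

def dsubfun_dx(x, y):  # Eq. (244), written with erfcx(t) = exp(t^2)*erfc(t)
    s = np.sqrt(2.0 * x)
    num = (np.sqrt(np.pi) * ((x - y) * (x + y) * erfcx((x + y) / s)
                             - 2.0 * (2.0 * x + y) * (2.0 * x - y) * erfcx((2.0 * x + y) / s))
           + s * (3.0 * x - y))
    return num / (2.0 * np.sqrt(np.pi) * x ** 2)

h = 1e-6
for x, y in [(1.2, -0.1), (1.2, 0.1), (5.0, 0.0), (20.0, 0.05)]:
    fd = (subfun(x + h, y) - subfun(x - h, y)) / (2.0 * h)
    assert abs(fd - dsubfun_dx(x, y)) < 1e-7
```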
We consider the numerator
\[
\sqrt{\pi}\left( e^{\frac{(x+y)^2}{2x}} (x-y)(x+y) \operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} (2x+y)(2x-y) \operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \right) \;+\; \sqrt{2}\sqrt{x}\,(3x-y) \; . \qquad (245)
\]
For bounding this value, we use the approximation
\[
e^{z^2}\operatorname{erfc}(z) \;\approx\; \frac{2.911}{\sqrt{\pi}\,(2.911-1)\,z + \sqrt{\pi z^2 + 2.911^2}} \qquad (246)
\]
from Ren and MacKenzie [30]. We start with an error analysis of this approximation. According to Ren and MacKenzie [30] (Figure 1), the approximation error is positive in the range [0.7, 3.2]. This range contains all possible arguments of erfc that we consider. Numerically we maximized and minimized the approximation error of the whole expression
\[
E(x, y) \;=\; e^{\frac{(x+y)^2}{2x}} (x-y)(x+y) \operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} (2x-y)(2x+y) \operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; \left[\text{the same expression with } e^{z^2}\operatorname{erfc}(z) \text{ replaced by the approximation } (246)\right] \qquad (247)
\]
We numerically determined \(0.0113556 < E(x,y) < 0.0169551\) for \(1.2 \leqslant x \leqslant 20\) and \(-0.1 \leqslant y \leqslant 0.1\). We used different numerical optimization techniques, such as gradient-based constrained BFGS algorithms and gradient-free Nelder-Mead methods, with different starting points. Therefore our approximation is smaller than the function that we approximate. We subtract an additional safety gap of 0.0131259 from our approximation to ensure that the inequality via the approximation holds true. With this safety gap the inequality would hold true even for negative \(x\), where the approximation error becomes negative and the safety gap would compensate. Of course, the safety gap of 0.0131259 is not necessary for our analysis, but it may help future investigations.
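The multi-start strategy described above can be sketched as follows. The code is illustrative rather than the authors' original scripts, and the definition of E below is one reading of Eq. (247), namely the erfc expression minus the same expression with \(e^{z^2}\operatorname{erfc}(z)\) replaced by the approximation (246); treat that exact definition as an assumption.

```python
# Sketch of the multi-start bounding strategy (constrained L-BFGS-B from
# several starting points on the box); E is one reading of Eq. (247).
import numpy as np
from scipy.optimize import minimize
from scipy.special import erfcx

A = 2.911

def approx(z):  # right-hand side of (246)
    return A / (np.sqrt(np.pi) * (A - 1.0) * z + np.sqrt(np.pi * z * z + A * A))

def E(p):
    x, y = p
    s = np.sqrt(2.0 * x)
    u, v = (x + y) / s, (2.0 * x + y) / s
    return ((x - y) * (x + y) * (erfcx(u) - approx(u))
            - 2.0 * (2.0 * x - y) * (2.0 * x + y) * (erfcx(v) - approx(v)))

bounds = [(1.2, 20.0), (-0.1, 0.1)]
starts = [(x0, y0) for x0 in np.linspace(1.2, 20.0, 8) for y0 in (-0.1, 0.0, 0.1)]
lo = min(minimize(E, s0, bounds=bounds, method="L-BFGS-B").fun for s0 in starts)
hi = max(-minimize(lambda p: -E(p), s0, bounds=bounds, method="L-BFGS-B").fun
         for s0 in starts)
print(lo, hi)  # compare with the interval reported above
```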
We obtain the following sequence of inequalities, using the approximation of Ren and MacKenzie [30]:
; . aoe (x â y)(x + y) erfc (=) 2¢ (2a â y)(2% + y) erfe (24) (3a â y) 4 Viva â Vive (30 ây) 4 2.911(a â y)(a@+y) _ (\ (se) + 2.9112 4 sues (v2Vz) 2(2x â y)(2a + y)2.911 Vm â0.0131259 = 2 (30 ây) 4 (V2V/#2.911) (w â y)(w« +y) _ ( ma + y)? +2- 2.91122 + (2.911 â 1)(a + wv7) (v2V/z) 2(2x â y) (2a + y) (V2Vr2.911) Jy â 00131259 = (V2Vz) ( w(Qa + y)? +2- 2.91 12x + (2.911 â 1)(2a + vv)
(248)
1706.02515 | 221 | (3x ây) +2.911 (w= w(@ +9) (2.911 â La ty) + (ety)? + 225s 7 2(2x â y)(2x + y) (2.911 â1)(22 +-y) 4 (ee + y)? + 220122 T â 0.0131259 > (3a â y) + 2.911 (w= y)(e+y) (2.911 â1)(~+y)4 Jes) + (x+y)? 4 22.01)? » 2:2.9112y T 2(2x â y)(2x + y) (2.911 â1)(22 +-y) 4 (Qe + y)? + 220lPe T â 0.0131259 = (3a â y) + 2.911 (@= (e+) - (2.911-D(ety)t+ Vf (@ty+ 2.911? y? 2(2x â y)(2x + y) (2.911 â1)(22 +-y) 4 Ver + y)? + 220172 â 0.0131259 = (3a â y) + 2.911 (e-wety) 2(2x â y)(2a + y) 0.0131259 2.911 (x + y) + 29 | 1706.02515#221 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 222 | â y) + 2.911 (e-wety) 2(2x â y)(2a + y) 0.0131259 2.911 (x + y) + 29 (2.911 â1)(22 + y) + \/ (2x + y)? + 220s . xâyj(aty 2(2a â y)(2a + y)2.911 (3a â y) 4 eos ry) _ M âââ - 0.0131259 = TTY ST (2.911 â1)(2r +y) + y/ (2a + y)? + 22212 2.911 (222-9 2.911 ( y+ 22 ) eer ty) 4 T 2.911 2- 2.91122 me) (3a â y â 0.0131259) (em (Qe+y)+4/Qe+y)?24 :) : (x ây)(a +y) (em (Qr+y)+4/ Qr+y)? 4 2uire)) TT - â1 (Gc y+) (em 1)(22 + y) + yf (2x + y)? azure) = (( (x â y)(« + y) + (3x â y â 0.0131259)(x + y + 0.9266)) (\/Qx + y)? + | 1706.02515#222 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 224 | â 0.0131259 =
(249)
5.822(2a â y)(x + y + 0.9266) (2a + y))
> -1 ((« ty) + 2) (em 1)(2a + y) + 4/ (2a + y)? zars)) > 0.
We explain this sequence of inequalities:
⢠First inequality: The approximation of Ren and MacKenzie [30] and then subtracting a safety gap (which would not be necessary for the current analysis).
â
â 2
⢠Equalities: The factor x is factored out and canceled.
⢠Second inequality: adds a positive term in the ï¬rst root to obtain a binomial form. The term containing the root is positive and the root is in the denominator, therefore the whole term becomes smaller.
60
â
(3x â y) +
(x â y)(x + y)
(x + y) + 2.911
# Ï
â
2(2x â y)(2x + y)2.911
(2.911 â 1)(2x + y) +
(2x + y)2 + 2·2.9112x
# Ï
â 0.0131259 =
⢠Equalities: solve for the term and factor out. | 1706.02515#224 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 225 | (2x + y)2 + 2·2.9112x
# Ï
â 0.0131259 =
⢠Equalities: solve for the term and factor out.
)2 L 2-2.9112a t Bringing all terms to the denominator (( + y) + 2-244) (ou -1Qr+y)+VQrt+y T ).
Equalities: Multiplying out and expanding terms.
⢠Last inequality > 0 is proofed in the following sequence of inequalities.
We look at the numerator of the last expression of Eq. (248), which we show to be positive in order to show > 0 in Eq. (248). The numerator is
(Vee + y)? + 5.39467x + 3.8222 4 Lolly) | 1706.02515#225 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 226 | ((x â y)(a@ + y) + (8a â y â 0.0131259)(a + y + 0.9266)) (Vee + y)? + 5.39467x + 3.8222 4 5.822(2x â y)(x + y + 0.9266) (2% + y) = â 5.822(2x â y)(a + y + 0.9266) (2a + y) + (3.822% + 1.911y)((a â y)(a + y)+ (3a â y â 0.0131259)(a + y + 0.9266)) + ((% â y)(a+y)+4 (3a â y â 0.0131259)(a + y + 0.9266))/ Qa + y)? + 5.394672 = â 8.023 + (4a? + 2xy + 2.76667x â 2y? â 0.939726y â 0.0121625) \/(2x + y)? 4 (250) + 5.39467xâ 8.0x?y â 11.0044? + 2.0ry? + 1.69548ary â 0.0464849x + 2.0y? + 3.59885y7 â 0.0232425y = â | 1706.02515#226 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 228 | The factor in front of the root is positive. If the term, that does not contain the root, was positive, then the whole expression would be positive and we would have proofed that the numerator is positive. Therefore we consider the case that the term, that does not contain the root, is negative. The term that contains the root must be larger than the other term in absolute values. â (-8.02° â 8.02°y â 11.0044x? + 2.cy? + 1.69548ay â 0.0464849x + 2.1% + 3.59885y" â 0.0232425y) <
â (-8.02° â 8.02°y â 11.0044x? + 2.cy? + 1.69548ay â 0.0464849x + 2.1% + 3.59885y" â 0.0232425y) < (251)
(251) | 1706.02515#228 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 229 | (251)
(4a? + 2ay + 2.76667x â 2y? â 0.939726y â 0.0121625) \/(2x + y)? + 5.39467x Therefore the squares of the root term have to be larger than the square of the other term to show > 0 in Eq. (248). Thus, we have the inequality: (â8.02 â 8.02?y â 11.0044a? + 2.ay? + 1.69548xy â 0.04648492a + 2.y? + 3.59885y? â 0.0232425y)â
(252)
(4x? + 2ny + 2.766672 â 2y? â 0.939726y â 0.0121625)â (2x + y)? +5.394672) .
This is equivalent to 0 < (4a? + 2ay + 2.76667" â 2y? â 0.939726y â 0.0121625)â ((2a + y)? +.5.394672) â
(253) | 1706.02515#229 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 230 | (253)
(â8.02° â 8.02?y â 11.0044a? + 2.0ay? + 1.695482y â 0.04648492x + 2.0y? + 3.59885y? â 0.0232425y)â â 1.2227a° + 40.1006aty + 27.7897a4 + 41.0176 y? + 64.5799a%y + 39.4762a° + 10.9422a7y>â 13.543a7y? â 28.845527y â 0.364625a? + 0.611352ay* + 6.83183ay? + 5.46393ry?+ 0.121746xy + 0.000798008a â 10.6365y° â 11.927y* + 0.190151y? â 0.000392287y? .
We obtain the inequalities: â 1.2227x5 + 40.1006x4y + 27.7897x4 + 41.0176x3y2 + 64.5799x3y + 39.4762x3 + 10.9422x2y3â
(254) | 1706.02515#230 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 232 | â 1.2227x° + 27.7897a7 + 41.0176x°%y? + 39.4762x°° â 13.543x7y? â 0.364625x?+ y (40.10062* + 64.5799a° + 10.942227y? â 28.8455a? + 6.831832y? + 0.1217462 â 10.6365y* + 0.190151yâ) + 0.611352xry* + 5.46393xy? + 0.0007980082 â 11.927y* â 0.000392287y" > â 1.22272" + 27.78972* + 41.0176 - (0.0)?2* + 39.4762x°° â 13.543 - (0.1)?2? â 0.364625x? â 0.1 - (40.10062* + 64.5799x* + 10.9422 - (0.1)?a? â 28.8455x? + 6.83183 - (0.1)?a + 0.121746 + 10.6365 - (0.1)* + 0.190151 - (0.1)?) + 0.611352 - (0.0)4a + | 1706.02515#232 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 234 | We used 24.7796 - (20)* â 1.2227 - (20)® = 52090.9 > 0 and a < 20. We have proofed the last inequality > 0 of Eq. (248).
Consequently the derivative is always positive independent of y, thus
\[
e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \qquad (255)
\]
is strictly monotonically increasing in x.
The main subfunction is smaller than zero. Next we show that the sub-function Eq. (101) is smaller than zero. We consider the limit:
\[
\lim_{x\to\infty}\left( e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)\right) \;=\; 0 \qquad (256)
\]
The limit follows from Lemma 22. Since the function is strictly monotonically increasing in x, it has to approach 0 from below. Thus,
\[
e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) \;-\; 2\, e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \qquad (257)
\]
is smaller than zero.
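The limit argument can also be illustrated numerically: evaluated with the stable erfcx formulation (a sketch, not part of the proof; the function name subfun is ours), the sub-function is negative and approaches zero from below as x grows.

```python
# Sketch: the sub-function is negative and approaches zero from below as x
# grows, matching the limit argument above (erfcx(t) = exp(t^2)*erfc(t)).
import numpy as np
from scipy.special import erfcx

def subfun(x, y):
    s = np.sqrt(2.0 * x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

for x in [1.2, 5.0, 20.0, 100.0, 1000.0]:
    print(x, subfun(x, 0.0))   # negative values increasing towards 0
```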
1706.02515 | 235 | zty)? c ety)? 2a e 3 erfe (34) â 26 erfe A) (257)
is smaller than zero.
Behavior of the main subfunction with respect to y at minimal x. We now consider the derivative of sub-function Eq. (101) with respect to y. We proved that sub-function Eq. (101) is strictly monotonically increasing in x independent of y. In the proof of Theorem 16, we need the minimum of sub-function Eq. (101). Therefore we are only interested in the derivative of sub-function Eq. (101) with respect to y for the minimum x = 12/10 = 1.2.
Consequently, we insert the minimum x = 12/10 = 1.2 into the sub-function Eq. (101). The main terms become
\[
\frac{x+y}{\sqrt{2}\sqrt{x}} \;=\; \frac{y+1.2}{\sqrt{2}\sqrt{1.2}} \;=\; \frac{5y+6}{2\sqrt{15}} \qquad (258)
\]
and
\[
\frac{2x+y}{\sqrt{2}\sqrt{x}} \;=\; \frac{y+2\cdot 1.2}{\sqrt{2}\sqrt{1.2}} \;=\; \frac{5y+12}{2\sqrt{15}} \; . \qquad (259)
\]
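At the minimal x = 1.2 the sub-function therefore depends on y only through these two reduced arguments. A short numerical scan (a sketch, not part of the proof) illustrates the monotone decrease in y that is established below:

```python
# Sketch: sub-function at x = 1.2 as a function of y, via the reduced
# arguments (5y+6)/(2*sqrt(15)) and (5y+12)/(2*sqrt(15)).
import numpy as np
from scipy.special import erfcx

ys = np.linspace(-0.1, 0.1, 21)
u = (5.0 * ys + 6.0) / (2.0 * np.sqrt(15.0))
v = (5.0 * ys + 12.0) / (2.0 * np.sqrt(15.0))
vals = erfcx(u) - 2.0 * erfcx(v)          # sub-function at x = 1.2
assert np.all(np.diff(vals) < 0.0)        # strictly decreasing in y
```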
1706.02515 | 236 | Sub-function Eq. (101) becomes:
\[
e^{\frac{(5y+6)^2}{60}} \operatorname{erfc}\!\left(\frac{5y+6}{2\sqrt{15}}\right) \;-\; 2\, e^{\frac{(5y+12)^2}{60}} \operatorname{erfc}\!\left(\frac{5y+12}{2\sqrt{15}}\right) \; . \qquad (260)
\]
The derivative of this function with respect to y is
â
\[
\frac{\sqrt{15}\sqrt{\pi}\left( e^{\frac{(5y+6)^2}{60}} (5y+6)\operatorname{erfc}\!\left(\frac{5y+6}{2\sqrt{15}}\right) \;-\; 2\, e^{\frac{(5y+12)^2}{60}} (5y+12)\operatorname{erfc}\!\left(\frac{5y+12}{2\sqrt{15}}\right)\right) + 30}{6\sqrt{15}\sqrt{\pi}} \qquad (261)
\]
We again will use the approximation of Ren and MacKenzie [30]
\[
e^{z^2}\operatorname{erfc}(z) \;\approx\; \frac{2.911}{\sqrt{\pi}\,(2.911-1)\,z + \sqrt{\pi z^2 + 2.911^2}} \; . \qquad (262)
\]
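The quality of this approximation at the arguments that actually occur here, \(z = (5y+6)/(2\sqrt{15})\) and \(z = (5y+12)/(2\sqrt{15})\) for \(y \in [-0.1, 0.1]\), can be inspected with a few lines of code (a sketch; not part of the proof):

```python
# Sketch: error of the approximation (262) at the arguments occurring here.
import numpy as np
from scipy.special import erfcx

A = 2.911

def approx(z):
    return A / (np.sqrt(np.pi) * (A - 1.0) * z + np.sqrt(np.pi * z * z + A * A))

ys = np.linspace(-0.1, 0.1, 5)
for z in np.concatenate([(5.0 * ys + 6.0) / (2.0 * np.sqrt(15.0)),
                         (5.0 * ys + 12.0) / (2.0 * np.sqrt(15.0))]):
    print(round(float(z), 4), float(erfcx(z) - approx(z)))
```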
Therefore we first perform an error analysis. We estimated the maximum and minimum of
â
â | 1706.02515#236 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 237 | Therefore we ï¬rst perform an error analysis. We estimated the maximum and minimum of
â
â
2-2.911(5y + 12 2.911(5y +6 Vi50 o11(5y +12) - - 911(5y +6) : Vi(2.911-1)(5y+12) , syt12\* , 2 Vm(2.911-1)(5y+6) | sy+6 , 9 1 (3 2) + 2.911 1) m (248) + 2.911 (263) 5y +6 5 _ (Sy +12 V150 (cory + 6) erfe (2 + ) - Jens (u+12)" (59) + 12) erfe (4 )) + 30. 2/15 2V15
+ 30 +
We obtained for the maximal absolute error the value 0.163052. We added an approximation error of 0.2 to the approximation of the derivative. Since we want to show that the approximation upper bounds the true expression, the addition of the approximation error is required here. We get a sequence of inequalities:
5 6 5 _ {5 12 V150 (cory + 6) erfe (2 + ) - Jens (u+12)" (59) + 12) erfe (% + )) + 30. < 2/15 15
â | 1706.02515#237 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 238 | 2/15 15 (264) Jibn 2.911(5y + 6) _ 2-2.911(5y + 12) 2 2 Vm(2.911-1)(5y+6) , / (5y+6\~ 4 2 vR(2.911-1)(5y+12) Syt12)" | OTE 7(3 is) + 2.911 we | 7(2 2) + 2.911 30+0.2 = (30 - 2.911)(5y + 6) _ 2(30 - 2.911)(5y + 12) . . 2 (2.911 â 1)(5y + 6) 4 ou + 6)2 4 (2452011) (2.911 â 1)(5y + 12) 4 eu +12)? 4 30+0.2 = 2 (0.2 + 30) | (2.911 â 1)(5y + 12) + | (5y +12)? 4 (ae 2) Vi (2.911 â 1)(5y + 6) + 4| (5y + 6)? 4 2 (Ae) Wa 2 2-30-2.911(5y +12) | (2.911 â 1)(5y +6) 4 | (5y + 6)? 4 (=) | 2/15 - 2.911 ; 2.911 | 1706.02515#238 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 240 | + 2.9112
+
â
(24520)
15·2.911 â Ï
2
+
63
2 2/15 - zu) (2.911 â 1)(5y + 6) + 4| (5y + 6)? 4 ( Va -1 <0. 2 2V15 - 2.911 Vi (2.911 â 1)(5y + 12) + 4} (5y + 12)? 4 (
We explain this sequence of inequalities.
• First inequality: the approximation of Ren and MacKenzie [30] is applied and the error bound is added to ensure that the approximation is larger than the true value.
• First equality: the factors $2\sqrt{15}$ and $\frac{2}{\sqrt{\pi}}$ are factored out and canceled.
• Second equality: all terms are brought onto the common denominator
\[
\Big[(2.911-1)(5y+6)+\sqrt{(5y+6)^2+c^2}\Big]\Big[(2.911-1)(5y+12)+\sqrt{(5y+12)^2+c^2}\Big],\qquad c=\tfrac{2\sqrt{15}\cdot 2.911}{\sqrt{\pi}} . \tag{265}
\]
• Last inequality: that the result is $<0$ is proved in the following sequence of inequalities; a numerical spot-check of this final claim is sketched below.
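The following sketch is an informal numerical check, not part of the proof; it assumes SciPy's `erfcx`, which computes $e^{z^2}\operatorname{erfc}(z)$, and the helper name `sub_fn` is ours. It evaluates the derivative of the sub-function with respect to $y$ at $x=1.2$ by central differences and confirms that it stays negative on $-0.1\le y\le 0.1$:

```python
# Numerical spot-check (not part of the formal proof): at x = 1.2 the derivative
# with respect to y of the sub-function
#   f(x, y) = e^{(x+y)^2/(2x)} erfc((x+y)/sqrt(2x)) - 2 e^{(2x+y)^2/(2x)} erfc((2x+y)/sqrt(2x))
# should be negative for -0.1 <= y <= 0.1.
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z**2) * erfc(z)

def sub_fn(x, y):
    return erfcx((x + y) / np.sqrt(2.0 * x)) - 2.0 * erfcx((2.0 * x + y) / np.sqrt(2.0 * x))

x = 1.2
ys = np.linspace(-0.1, 0.1, 2001)
h = 1e-6
dfdy = (sub_fn(x, ys + h) - sub_fn(x, ys - h)) / (2.0 * h)  # central differences
print(dfdy.max())          # largest derivative value found on the grid
assert np.all(dfdy < 0.0)  # consistent with the claim that f decreases in y at x = 1.2
```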
We look at the numerator of the last term in Eq. (264). We have to prove that this numerator is smaller than zero in order to prove the last inequality of Eq. (264). The numerator is
\[
\begin{aligned}
&(0.2+30)\Big[(2.911-1)(5y+12)+\sqrt{(5y+12)^2+c^2}\Big]\Big[(2.911-1)(5y+6)+\sqrt{(5y+6)^2+c^2}\Big]\\
&\quad-2\cdot 30\cdot 2.911\,(5y+12)\Big[(2.911-1)(5y+6)+\sqrt{(5y+6)^2+c^2}\Big]\\
&\quad-2.911\cdot 30\,(5y+6)\Big[(2.911-1)(5y+12)+\sqrt{(5y+12)^2+c^2}\Big] . \tag{266}
\end{aligned}
\]
We now compute upper bounds for this numerator:
[Display (267): an upper bound on the numerator (266), obtained by bounding each product term and multiplying out; the resulting numerical coefficients appear in the next chain of inequalities.]
\[
\begin{aligned}
&-1414.99y^2-584.739\,y\sqrt{(5y+6)^2+161.84}+725.211\,y\sqrt{(5y+12)^2+161.84}-5093.97y-1403.37\sqrt{(5y+6)^2+161.84}\\
&\qquad+30.2\sqrt{(5y+6)^2+161.84}\,\sqrt{(5y+12)^2+161.84}+870.253\sqrt{(5y+12)^2+161.84}-4075.17\\
&<\;-1414.99y^2-584.739\,y\sqrt{(5y+6)^2+161.84}+725.211\,y\sqrt{(5y+12)^2+161.84}-5093.97y-1403.37\sqrt{(6+5\cdot(-0.1))^2+161.84}\\
&\qquad+30.2\sqrt{(6+5\cdot 0.1)^2+161.84}\,\sqrt{(12+5\cdot 0.1)^2+161.84}+870.253\sqrt{(12+5\cdot 0.1)^2+161.84}-4075.17\;<\;\cdots
\end{aligned}
\]
(note that $161.84=c^2$).
[The chain of upper bounds evaluates to a strictly negative number, hence the numerator (266) is smaller than zero.]
Consequently
iv? , (aty (2e+y)? 2) e 2 erfe | â-â ] â 2eâ 2 ~ erfe | ââ 268 (4) (St ees)
is strictly monotonically decreasing in y for the minimal x = 1.2. Lemma 45 (Main subfunction below). For 0.007 < x < 0.875 and â0.01 < y < 0.01, the function
iv? , (aty (2e+y)? mea) e 2 erfe â2e7 2 ~ erfe | ââ 269 (258) (at om | 1706.02515#244 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
\[
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{269}
\]
is smaller than zero, is strictly monotonically increasing in x, and is strictly monotonically increasing in y for the minimal x = 0.007 = 0.00875 · 0.8, x = 0.56 = 0.7 · 0.8, x = 0.128 = 0.16 · 0.8, and x = 0.216 = 0.24 · 0.9 (lower bound of 0.9 on τ).
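Before the formal proof, the following sketch (an informal numerical check using SciPy's `erfcx`; the helper name `sub_fn` is ours) evaluates the function on a grid over the stated domain and checks the three claims of the lemma: negative values, increase in x, and increase in y at the listed x values:

```python
# Informal numerical check of the three claims of Lemma 45 (not part of the proof).
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z**2) * erfc(z)

def sub_fn(x, y):
    return erfcx((x + y) / np.sqrt(2.0 * x)) - 2.0 * erfcx((2.0 * x + y) / np.sqrt(2.0 * x))

xs = np.linspace(0.007, 0.875, 400)
ys = np.linspace(-0.01, 0.01, 81)
X, Y = np.meshgrid(xs, ys)
F = sub_fn(X, Y)

print(F.max())                              # all values on the grid are negative
h = 1e-6
dF_dx = (sub_fn(X + h, Y) - sub_fn(X - h, Y)) / (2 * h)
print(dF_dx.min())                          # positive: increasing in x
for x0 in (0.007, 0.56, 0.128, 0.216):      # the x values singled out in the lemma
    dF_dy = (sub_fn(x0, ys + h) - sub_fn(x0, ys - h)) / (2 * h)
    print(x0, dF_dy.min())                  # positive: increasing in y at x0
```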
Proof. We first consider the derivative of sub-function Eq. (111) with respect to x. The derivative of the function
\[
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{270}
\]
with respect to x is
\[
\frac{\sqrt{\pi}\left(e^{\frac{(x+y)^2}{2x}}(x-y)(x+y)\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2e^{\frac{(2x+y)^2}{2x}}\left(4x^2-y^2\right)\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)\right)+\sqrt{2}\sqrt{x}\,(3x-y)}{2\sqrt{2\pi}\,x^{5/2}} \tag{271}
\]
\[
=\;\frac{\sqrt{\pi}\,e^{\frac{(x+y)^2}{2x}}(x-y)(x+y)\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2\sqrt{\pi}\,e^{\frac{(2x+y)^2}{2x}}(2x+y)(2x-y)\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)+\sqrt{2}\sqrt{x}\,(3x-y)}{2\sqrt{2\pi}\,x^{5/2}} .
\]
We consider the numerator
\[
\sqrt{\pi}\,e^{\frac{(x+y)^2}{2x}}(x-y)(x+y)\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2\sqrt{\pi}\,e^{\frac{(2x+y)^2}{2x}}(2x+y)(2x-y)\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)+\sqrt{2}\sqrt{x}\,(3x-y) . \tag{272}
\]
For bounding this value, we use the approximation
\[
e^{z^2}\operatorname{erfc}(z)\;\approx\;\frac{2.911}{\sqrt{\pi}\,(2.911-1)\,z+\sqrt{\pi z^2+2.911^2}} \tag{273}
\]
from Ren and MacKenzie [30]. We start with an error analysis of this approximation. According to Ren and MacKenzie (Figure 1), the approximation error is both positive and negative in the range [0.175, 1.33]. This range contains all possible arguments of erfc that we consider in this subsection. Numerically we maximized and minimized the approximation error of the whole expression
\[
E(x,y)\;=\;e^{\frac{(x+y)^2}{2x}}(x-y)(x+y)\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2e^{\frac{(2x+y)^2}{2x}}(2x-y)(2x+y)\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)\;-
\]
\[
\left(\frac{2.911\,(x-y)(x+y)}{\sqrt{\pi}(2.911-1)\frac{x+y}{\sqrt{2}\sqrt{x}}+\sqrt{\pi\big(\frac{x+y}{\sqrt{2}\sqrt{x}}\big)^2+2.911^2}}\;-\;\frac{2\cdot 2.911\,(2x-y)(2x+y)}{\sqrt{\pi}(2.911-1)\frac{2x+y}{\sqrt{2}\sqrt{x}}+\sqrt{\pi\big(\frac{2x+y}{\sqrt{2}\sqrt{x}}\big)^2+2.911^2}}\right) . \tag{274}
\]
We numerically determined −0.000228141 < E(x, y) < 0.00495688 for 0.08 ≤ x ≤ 0.875 and −0.01 ≤ y ≤ 0.01. We used different numerical optimization techniques, such as gradient-based constrained BFGS algorithms and gradient-free Nelder-Mead methods, with different starting points. Therefore our approximation is smaller than the function that we approximate.
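A rough reproduction of this error analysis can be sketched as follows (a grid search rather than the optimization runs described above; the helper names `approx_erfcx` and `approx_error` are ours):

```python
# Grid-based estimate of the approximation error E(x, y) (informal check of the
# reported bounds; the paper used constrained BFGS and Nelder-Mead instead).
import numpy as np
from scipy.special import erfcx

def approx_erfcx(z):
    # Approximation (273) of exp(z**2) * erfc(z) due to Ren and MacKenzie.
    return 2.911 / (np.sqrt(np.pi) * (2.911 - 1.0) * z + np.sqrt(np.pi * z**2 + 2.911**2))

def approx_error(x, y):
    z1 = (x + y) / np.sqrt(2.0 * x)
    z2 = (2.0 * x + y) / np.sqrt(2.0 * x)
    exact = (x - y) * (x + y) * erfcx(z1) - 2.0 * (2.0 * x - y) * (2.0 * x + y) * erfcx(z2)
    approx = (x - y) * (x + y) * approx_erfcx(z1) - 2.0 * (2.0 * x - y) * (2.0 * x + y) * approx_erfcx(z2)
    return exact - approx

xs = np.linspace(0.08, 0.875, 800)
ys = np.linspace(-0.01, 0.01, 81)
X, Y = np.meshgrid(xs, ys)
E = approx_error(X, Y)
print(E.min(), E.max())  # should lie within the reported interval (-0.000228141, 0.00495688)
```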
We use an error gap of −0.0003 to compensate for the error due to the approximation. We have the following sequence of inequalities using the approximation of Ren and MacKenzie [30]:
\[
(3x-y)\;+\;\frac{\sqrt{\pi}}{\sqrt{2}\sqrt{x}}\,e^{\frac{(x+y)^2}{2x}}(x-y)(x+y)\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)\;-\;\frac{\sqrt{\pi}}{\sqrt{2}\sqrt{x}}\,2e^{\frac{(2x+y)^2}{2x}}(2x-y)(2x+y)\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)\;\geqslant
\]
[Chain continued: the approximation (273) is inserted for both erfc terms and the error gap 0.0003 is subtracted; after canceling $\sqrt{2}\sqrt{x}$ the bound becomes a difference of two rational terms in $(x+y)$ and $(2x+y)$.]
[Chain continued: a positive term is added under the first square root to obtain a binomial form, and the expression is then solved for that term and factored.]
[Chain concluded: all terms are brought onto the common denominator, the product of the two positive bracket terms produced by the approximation; multiplying out and expanding terms yields the final expression of the chain, Eq. (275).]
• First inequality: the approximation of Ren and MacKenzie [30] is applied and an error gap of 0.0003 is subtracted.
• Equalities: the factor $\sqrt{2}\sqrt{x}$ is factored out and canceled.
• Second inequality: a positive term is added in the first root to obtain a binomial form. The term containing the root is positive and the root is in the denominator, therefore the whole term becomes smaller.
• Equalities: solve for the term and factor out.
• All terms are brought to the common denominator, the product of the two positive bracket terms produced by the approximation, the second being $(2.911-1)(2x+y)+\sqrt{(2x+y)^2+5.39467x}$.
• Equalities: multiplying out and expanding terms.
• Last inequality: that the final expression is $>0$ is proved in the following sequence of inequalities; an informal numerical check of the numerator (272) is sketched below.
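As a sanity check on where this chain is heading (informal, not part of the proof; `numerator_272` is our helper name), one can evaluate the numerator (272) directly with SciPy's `erfcx` and confirm that it stays positive on the domain of Lemma 45:

```python
# Informal check that the numerator (272) of the x-derivative is positive on the
# domain of Lemma 45.
import numpy as np
from scipy.special import erfcx

def numerator_272(x, y):
    z1 = (x + y) / np.sqrt(2.0 * x)
    z2 = (2.0 * x + y) / np.sqrt(2.0 * x)
    return (np.sqrt(np.pi) * ((x - y) * (x + y) * erfcx(z1)
                              - 2.0 * (2.0 * x - y) * (2.0 * x + y) * erfcx(z2))
            + np.sqrt(2.0) * np.sqrt(x) * (3.0 * x - y))

xs = np.linspace(0.007, 0.875, 800)
ys = np.linspace(-0.01, 0.01, 81)
X, Y = np.meshgrid(xs, ys)
print(numerator_272(X, Y).min())   # expected to be positive everywhere on the grid
```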
We look at the numerator of the last expression of Eq. (275), which we show to be positive in order to show > 0 in Eq. (275). The numerator is
\[
\big(4x^2+2xy+2.7795x-2y^2-0.9269y-0.00027798\big)\sqrt{(2x+y)^2+5.39467x}\;-\;8x^3-8x^2y-10.9554x^2+2xy^2+1.76901xy-0.00106244x+2y^3+3.62336y^2-0.00053122y . \tag{276}
\]
The factor $4x^2+2xy+2.7795x-2y^2-0.9269y-0.00027798$ in front of the root is positive:
\[
4x^2+2xy+2.7795x-2y^2-0.9269y-0.00027798 \;>\; -2y^2+0.007\cdot 2y-0.9269y+4\cdot 0.007^2+2.7795\cdot 0.007-0.00027798 \;=\; -2y^2-0.9129y+2.77942 \;=\; -2(y+1.42897)(y-0.972523)\;>\;0 . \tag{277}
\]
If the term that does not contain the root were positive, then everything would be positive and we would have proved that the numerator is positive. Therefore we consider the case that the term that does not contain the root is negative. The term that contains the root must then be larger than the other term in absolute value:
\[
-\big(-8x^3-8x^2y-10.9554x^2+2xy^2+1.76901xy-0.00106244x+2y^3+3.62336y^2-0.00053122y\big) \;<\; \big(4x^2+2xy+2.7795x-2y^2-0.9269y-0.00027798\big)\sqrt{(2x+y)^2+5.39467x} . \tag{278}
\]
Therefore the square of the root term has to be larger than the square of the other term in order to show > 0 in Eq. (275). Thus, we have the inequality:
\[
\big(-8x^3-8x^2y-10.9554x^2+2xy^2+1.76901xy-0.00106244x+2y^3+3.62336y^2-0.00053122y\big)^2 \;<\; \big(4x^2+2xy+2.7795x-2y^2-0.9269y-0.00027798\big)^2\big((2x+y)^2+5.39467x\big) . \tag{279}
\]
This is equivalent to
\[
0 \;<\; \big(4x^2+2xy+2.7795x-2y^2-0.9269y-0.00027798\big)^2\big((2x+y)^2+5.39467x\big) \;-\; \big(-8x^3-8x^2y-10.9554x^2+2xy^2+1.76901xy-0.00106244x+2y^3+3.62336y^2-0.00053122y\big)^2 \tag{280}
\]
\[
\begin{aligned}
=\;&x\cdot 4.168614250\cdot 10^{-7}-y^2\,2.049216091\cdot 10^{-7}-0.0279456x^5+43.0875x^4y+30.8113x^4+43.1084x^3y^2+68.989x^3y+41.6357x^3\\
&+10.7928x^2y^3-13.1726x^2y^2-27.8148x^2y-0.00833715x^2+0.0139728xy^4+5.47537xy^3+4.65089xy^2+0.00277916xy\\
&-10.7858y^5-12.2664y^4+0.00436492y^3 .
\end{aligned}
\]
We obtain the inequalities:
\[
\begin{aligned}
&x\cdot 4.168614250\cdot 10^{-7}-y^2\,2.049216091\cdot 10^{-7}-0.0279456x^5+43.0875x^4y+30.8113x^4+43.1084x^3y^2+68.989x^3y+41.6357x^3\\
&\quad+10.7928x^2y^3-13.1726x^2y^2-27.8148x^2y-0.00833715x^2+0.0139728xy^4+5.47537xy^3+4.65089xy^2+0.00277916xy\\
&\quad-10.7858y^5-12.2664y^4+0.00436492y^3\\
&>\;x\cdot 4.168614250\cdot 10^{-7}-(0.01)^2\,2.049216091\cdot 10^{-7}-0.0279456x^5+0.0\cdot 43.0875x^4+30.8113x^4+43.1084\,(0.0)^2x^3+0.0\cdot 68.989x^3+41.6357x^3\\
&\quad+10.7928\,(0.0)^3x^2-13.1726\,(0.01)^2x^2-27.8148\,(0.01)x^2-0.00833715x^2+\cdots
\end{aligned}
\]
We used x > 0.007 and x < 0.875 (reducing the negative x⁵-term to an x⁴-term). We have proved the last inequality > 0 of Eq. (275).
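The polynomial inequality (280) can also be spot-checked numerically (an informal grid check, not a substitute for the bounding argument above; `lhs_280` is our helper name):

```python
# Informal grid check of inequality (280) on 0.007 <= x <= 0.875, -0.01 <= y <= 0.01.
import numpy as np

def lhs_280(x, y):
    a = 4*x**2 + 2*x*y + 2.7795*x - 2*y**2 - 0.9269*y - 0.00027798   # factor in front of the root
    b = (2*x + y)**2 + 5.39467*x                                      # expression under the root
    c = (-8*x**3 - 8*x**2*y - 10.9554*x**2 + 2*x*y**2 + 1.76901*x*y
         - 0.00106244*x + 2*y**3 + 3.62336*y**2 - 0.00053122*y)       # root-free part of the numerator
    return a**2 * b - c**2

xs = np.linspace(0.007, 0.875, 1000)
ys = np.linspace(-0.01, 0.01, 101)
X, Y = np.meshgrid(xs, ys)
print(lhs_280(X, Y).min())   # expected to be positive, consistent with (280)
```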
Consequently, the derivative is always positive independent of y, thus
\[
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{282}
\]
is strictly monotonically increasing in x.
Next we show that the sub-function Eq. (111) is smaller than zero. We consider the limit:
\[
\lim_{x\to\infty}\;e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)\;=\;0 . \tag{283}
\]
The limit follows from Lemma 22. Since the function is strictly monotonically increasing in x, it has to approach 0 from below. Thus,
\[
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{284}
\]
is smaller than zero.
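The following sketch (informal, using SciPy's `erfcx`; `sub_fn` is our helper name) illustrates this behaviour: the sub-function stays negative and creeps up towards 0 as x grows:

```python
# Informal illustration: the sub-function is negative and approaches 0 from below
# as x increases.
import numpy as np
from scipy.special import erfcx

def sub_fn(x, y):
    return erfcx((x + y) / np.sqrt(2.0 * x)) - 2.0 * erfcx((2.0 * x + y) / np.sqrt(2.0 * x))

y = 0.01
for x in (0.007, 0.128, 0.56, 0.875, 10.0, 100.0, 1000.0):
    print(x, sub_fn(x, y))   # values are negative and increase towards 0
```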
We now consider the derivative of sub-function Eq. (111) with respect to y. We have proved that sub-function Eq. (111) is strictly monotonically increasing (in x) independent of y. In the proof of Theorem 3, we need the minimum of sub-function Eq. (111). First, we are interested in the derivative of sub-function Eq. (111) with respect to y for the minimum x = 0.007 = 7/1000.
Consequently, we insert the minimum x = 0.007 = 7/1000 into the sub-function Eq. (111):
\[
e^{\frac{(1000y+7)^2}{14000}}\operatorname{erfc}\!\left(\frac{1000y+7}{20\sqrt{35}}\right)-2e^{\frac{(500y+7)^2}{3500}}\operatorname{erfc}\!\left(\frac{500y+7}{10\sqrt{35}}\right) . \tag{285}
\]
The derivative of this function with respect to y is
\[
\left(\frac{1000y}{7}+1\right)e^{\frac{(1000y+7)^2}{14000}}\operatorname{erfc}\!\left(\frac{1000y+7}{20\sqrt{35}}\right)-\frac{4(500y+7)}{7}\,e^{\frac{(500y+7)^2}{3500}}\operatorname{erfc}\!\left(\frac{500y+7}{10\sqrt{35}}\right)+20\sqrt{\frac{5}{7\pi}} \tag{286}
\]
\[
\;\geqslant\;\left(\frac{1000\cdot(-0.01)}{7}+1\right)e^{\frac{(1000\cdot(-0.01)+7)^2}{14000}}\operatorname{erfc}\!\left(\frac{1000\cdot(-0.01)+7}{20\sqrt{35}}\right)-\frac{4(500\cdot 0.01+7)}{7}\,e^{\frac{(500\cdot 0.01+7)^2}{3500}}\operatorname{erfc}\!\left(\frac{500\cdot 0.01+7}{10\sqrt{35}}\right)+20\sqrt{\frac{5}{7\pi}}\;>\;3.56 .
\]
For the first inequality, we use Lemma 24. Lemma 24 says that the function $xe^{x^2}\operatorname{erfc}(x)$ has the sign of x and is monotonically increasing to $\frac{1}{\sqrt{\pi}}$. Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = −0.01 to make the positive term less positive.
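A quick numerical illustration of the property of Lemma 24 that is used here (informal; `g` is our helper name):

```python
# x * exp(x**2) * erfc(x) has the sign of x and increases monotonically towards 1/sqrt(pi).
import numpy as np
from scipy.special import erfcx

def g(x):
    return x * erfcx(x)   # equals x * exp(x**2) * erfc(x)

xs = np.linspace(-3.0, 30.0, 10000)
vals = g(xs)
print(np.all(np.sign(vals[xs != 0]) == np.sign(xs[xs != 0])))  # sign property
print(np.all(np.diff(vals) > 0))                               # monotonically increasing
print(vals[-1], 1.0 / np.sqrt(np.pi))                          # approaches 1/sqrt(pi) ~ 0.5642
```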
Consequently,
\[
e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)-2e^{\frac{(2x+y)^2}{2x}}\operatorname{erfc}\!\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right) \tag{287}
\]
is strictly monotonically increasing in y for the minimal x = 0.007.
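The same check can be run numerically for the x values treated in the remainder of this proof (informal sketch using central differences and SciPy's `erfcx`; `sub_fn` is our helper name):

```python
# Informal check: the derivative with respect to y of the sub-function is positive
# for x = 0.007, 0.56, 0.128 and 0.216 on -0.01 <= y <= 0.01.
import numpy as np
from scipy.special import erfcx

def sub_fn(x, y):
    return erfcx((x + y) / np.sqrt(2.0 * x)) - 2.0 * erfcx((2.0 * x + y) / np.sqrt(2.0 * x))

ys = np.linspace(-0.01, 0.01, 2001)
h = 1e-7
for x0 in (0.007, 0.56, 0.128, 0.216):
    dfdy = (sub_fn(x0, ys + h) - sub_fn(x0, ys - h)) / (2.0 * h)
    print(x0, dfdy.min())    # minima are positive (for x0 = 0.007 roughly the bound 3.56 above)
```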
Next, we consider x = 0.7 · 0.8 = 0.56, which corresponds to the maximal ν = 0.7 and the minimal τ = 0.8. We insert the minimum x = 0.56 = 56/100 into the sub-function Eq. (111):
\[
e^{\frac{\left(y+\frac{56}{100}\right)^2}{\frac{112}{100}}}\operatorname{erfc}\!\left(\frac{y+\frac{56}{100}}{\sqrt{2}\sqrt{\frac{56}{100}}}\right)-2e^{\frac{\left(y+\frac{112}{100}\right)^2}{\frac{112}{100}}}\operatorname{erfc}\!\left(\frac{y+\frac{112}{100}}{\sqrt{2}\sqrt{\frac{56}{100}}}\right) . \tag{288}
\]
The derivative with respect to y is:
\[
\left(\frac{y}{0.56}+1\right)e^{\frac{(y+0.56)^2}{1.12}}\operatorname{erfc}\!\left(\frac{y+0.56}{\sqrt{1.12}}\right)-2\left(\frac{y}{0.56}+2\right)e^{\frac{(y+1.12)^2}{1.12}}\operatorname{erfc}\!\left(\frac{y+1.12}{\sqrt{1.12}}\right)+\frac{\sqrt{2}}{\sqrt{\pi}\sqrt{0.56}} \tag{289}
\]
\[
\;\geqslant\;\left(\frac{-0.01}{0.56}+1\right)e^{\frac{(0.56-0.01)^2}{1.12}}\operatorname{erfc}\!\left(\frac{0.56-0.01}{\sqrt{1.12}}\right)-2\left(\frac{0.01}{0.56}+2\right)e^{\frac{(1.12+0.01)^2}{1.12}}\operatorname{erfc}\!\left(\frac{1.12+0.01}{\sqrt{1.12}}\right)+\frac{\sqrt{2}}{\sqrt{\pi}\sqrt{0.56}}\;>\;0 .
\]
For the first inequality we applied Lemma 24 which states that the function x e^{x^2} erfc(x) is monotonically increasing. Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = −0.01 to make the positive term less positive.
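As a side illustration (not part of the original proof), the Lemma 24 property used here can be checked numerically; the sketch below assumes NumPy and SciPy are available and uses erfcx(x) = e^{x^2} erfc(x):

```python
# Illustrative check of the Lemma 24 property used above (assumes numpy/scipy):
# x * exp(x^2) * erfc(x) = x * erfcx(x) should be monotonically increasing.
import numpy as np
from scipy.special import erfcx  # erfcx(x) = exp(x^2) * erfc(x)

x = np.linspace(0.0, 10.0, 100_001)
f = x * erfcx(x)
print("x * exp(x^2) * erfc(x) increasing on [0, 10]:", bool(np.all(np.diff(f) > 0)))
```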
Consequently,
e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2 e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)   (290)
is strictly monotonically increasing in y for x = 0.56.
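The monotonicity in y established above can also be probed numerically. The following sketch is only an illustration under the assumption that NumPy and SciPy are available; it writes the sub-function of Eq. (111) with erfcx(z) = e^{z^2} erfc(z) and checks that it increases in y on the relevant interval:

```python
# Illustrative numerical check (assumes numpy/scipy), not part of the original proof:
# the sub-function of Eq. (111), g(x, y), is increasing in y = mu*omega on [-0.01, 0.01].
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z^2) * erfc(z)

def g(x, y):
    s = np.sqrt(2.0 * x)  # sqrt(2) * sqrt(x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

for x in (0.007, 0.56):
    y = np.linspace(-0.01, 0.01, 2001)
    print(f"x = {x}: g increasing in y ->", bool(np.all(np.diff(g(x, y)) > 0)))
```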
Next, we consider x = 0.16 · 0.8 = 0.128, which is the minimal τ = 0.8. We insert the minimum x = 0.128 = 128/1000 into the sub-function Eq. (111):
e^{\frac{(y+\frac{128}{1000})^2}{2\cdot\frac{128}{1000}}} \operatorname{erfc}\left(\frac{y+\frac{128}{1000}}{\sqrt{2}\sqrt{\frac{128}{1000}}}\right) - 2 e^{\frac{(y+2\cdot\frac{128}{1000})^2}{2\cdot\frac{128}{1000}}} \operatorname{erfc}\left(\frac{y+2\cdot\frac{128}{1000}}{\sqrt{2}\sqrt{\frac{128}{1000}}}\right)   (291)
The derivative with respect to y is:
\frac{1}{16}\left( e^{\frac{(125y+16)^2}{4000}} (125y+16) \operatorname{erfc}\left(\frac{125y+16}{20\sqrt{10}}\right) - 2 e^{\frac{(125y+32)^2}{4000}} (125y+32) \operatorname{erfc}\left(\frac{125y+32}{20\sqrt{10}}\right) + \frac{20\sqrt{10}}{\sqrt{\pi}} \right)   (292)
\geq \frac{1}{16}\left( e^{\frac{(16-125\cdot 0.01)^2}{4000}} (16-125\cdot 0.01) \operatorname{erfc}\left(\frac{16-125\cdot 0.01}{20\sqrt{10}}\right) - 2 e^{\frac{(32+125\cdot 0.01)^2}{4000}} (32+125\cdot 0.01) \operatorname{erfc}\left(\frac{32+125\cdot 0.01}{20\sqrt{10}}\right) + \frac{20\sqrt{10}}{\sqrt{\pi}} \right) > 0.446 .
For the first inequality we applied Lemma 24 which states that the function x e^{x^2} erfc(x) is monotonically increasing. Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = −0.01 to make the positive term less positive.
Consequently,
e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2 e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)   (293)
is strictly monotonically increasing in y for x = 0.128.
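The worst-case lower bounds of this type can be evaluated numerically for all four values of x treated in this part of the proof. The sketch below is an illustration only (it assumes SciPy) of the quantity bounded in Eq. (292) and its analogues:

```python
# Illustrative evaluation (assumes numpy/scipy) of the worst-case lower bound on the
# y-derivative of the sub-function of Eq. (111): as in the Lemma 24 argument, the
# positive term is taken at y = -0.01 and the negative term at y = +0.01.
import numpy as np
from scipy.special import erfcx

def dgdy_lower_bound(x, ymin=-0.01, ymax=0.01):
    s = np.sqrt(2.0 * x)
    pos = ((x + ymin) / x) * erfcx((x + ymin) / s)                    # smallest positive term
    neg = (2.0 * (2.0 * x + ymax) / x) * erfcx((2.0 * x + ymax) / s)  # largest negative term
    return pos - neg + np.sqrt(2.0 / (np.pi * x))

for x in (0.007, 0.56, 0.128, 0.216):
    print(f"x = {x}: lower bound on dg/dy = {dgdy_lower_bound(x):.6f}")
# Expected: all values positive; roughly 0.45 for x = 0.128 and roughly 0.21 for
# x = 0.216, consistent with the bounds stated above.
```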
Next, we consider x = 0.24 · 0.9 = 0.216, which is the minimal τ = 0.9 (here we consider 0.9 as lower bound for τ). We insert the minimum x = 0.216 = 216/1000 into the sub-function Eq. (111):
e^{\frac{(125y+27)^2}{6750}} \operatorname{erfc}\left(\frac{125y+27}{15\sqrt{30}}\right) - 2 e^{\frac{(125y+54)^2}{6750}} \operatorname{erfc}\left(\frac{125y+54}{15\sqrt{30}}\right)   (294)
The derivative with respect to y is:
\frac{1}{27}\left( e^{\frac{(125y+27)^2}{6750}} (125y+27) \operatorname{erfc}\left(\frac{125y+27}{15\sqrt{30}}\right) - 2 e^{\frac{(125y+54)^2}{6750}} (125y+54) \operatorname{erfc}\left(\frac{125y+54}{15\sqrt{30}}\right) + \frac{15\sqrt{30}}{\sqrt{\pi}} \right)   (295)
\geq \frac{1}{27}\left( e^{\frac{(27-125\cdot 0.01)^2}{6750}} (27-125\cdot 0.01) \operatorname{erfc}\left(\frac{27-125\cdot 0.01}{15\sqrt{30}}\right) - 2 e^{\frac{(54+125\cdot 0.01)^2}{6750}} (54+125\cdot 0.01) \operatorname{erfc}\left(\frac{54+125\cdot 0.01}{15\sqrt{30}}\right) + \frac{15\sqrt{30}}{\sqrt{\pi}} \right) > 0.211288 .
For the first inequality we applied Lemma 24 which states that the function x e^{x^2} erfc(x) is monotonically increasing. Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = −0.01 to make the positive term less positive.
Consequently,
e^{\frac{(x+y)^2}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - 2 e^{\frac{(2x+y)^2}{2x}} \operatorname{erfc}\left(\frac{2x+y}{\sqrt{2}\sqrt{x}}\right)   (296)
is strictly monotonically increasing in y for x = 0.216.
Lemma 46 (Monotone Derivative). For λ = λ01, α = α01, and the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.00875 ≤ ν ≤ 0.7, and 0.8 ≤ τ ≤ 1.25, we are interested in the derivative of
\tau\left( e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2 e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right) .   (297)
The derivative of the equation above with respect to
• ν is larger than zero;
• τ is smaller than zero for maximal ν = 0.7, ν = 0.16, and ν = 0.24 (with 0.9 ≤ τ);
• y = µω is larger than zero for ντ = 0.00875 · 0.8 = 0.007, ντ = 0.7 · 0.8 = 0.56, ντ = 0.16 · 0.8 = 0.128, and ντ = 0.24 · 0.9 = 0.216.
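The three sign claims of Lemma 46 can be spot-checked numerically. The following sketch is an illustration only (it assumes SciPy) and uses central finite differences on the expression of Eq. (297):

```python
# Illustrative spot-check (assumes numpy/scipy) of the sign claims in Lemma 46,
# using central differences on h = tau * (erfcx(...) - 2 * erfcx(...)) from Eq. (297).
import numpy as np
from scipy.special import erfcx

def h(y, nu, tau):
    x = nu * tau
    s = np.sqrt(2.0 * x)
    return tau * (erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s))

eps = 1e-6
ys = np.linspace(-0.01, 0.01, 11)

# derivative with respect to nu is positive (coarse grid over the whole domain):
d_nu = [(h(y, nu + eps, t) - h(y, nu - eps, t)) / (2 * eps)
        for y in ys
        for nu in np.linspace(0.00875, 0.7, 20)
        for t in np.linspace(0.8, 1.25, 19)]
print("d/dnu > 0 :", bool(np.all(np.array(d_nu) > 0)))

# derivative with respect to tau is negative for nu = 0.7, 0.16 and for nu = 0.24 with tau >= 0.9:
for nu, tmin in ((0.7, 0.8), (0.16, 0.8), (0.24, 0.9)):
    d_tau = [(h(y, nu, t + eps) - h(y, nu, t - eps)) / (2 * eps)
             for y in ys for t in np.linspace(tmin, 1.25, 19)]
    print(f"nu = {nu}: d/dtau < 0 :", bool(np.all(np.array(d_tau) < 0)))

# derivative with respect to y is positive at the stated products nu*tau:
for x in (0.007, 0.56, 0.128, 0.216):
    d_y = [(h(y + eps, 1.0, x) - h(y - eps, 1.0, x)) / (2 * eps) for y in ys]
    print(f"nu*tau = {x}: d/dy > 0 :", bool(np.all(np.array(d_y) > 0)))
```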
Proof. We consider the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.00875 ≤ ν ≤ 0.7, and 0.8 ≤ τ ≤ 1.25.
We use Lemma 17 to determine the derivatives. Consequently, the derivative of
\tau\left( e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2 e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right)   (298)
with respect to ν is larger than zero, which follows directly from Lemma 17 using the chain rule. Consequently, the derivative of
\tau\left( e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2 e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right)   (299)
with respect to y = µω is larger than zero for ντ = 0.00875 · 0.8 = 0.007, ντ = 0.7 · 0.8 = 0.56, ντ = 0.16 · 0.8 = 0.128, and ντ = 0.24 · 0.9 = 0.216, which also follows directly from Lemma 17.
We now consider the derivative with respect to τ, which is not trivial since τ is a factor of the whole expression. The sub-expression should be maximized as it appears with negative sign in the mapping for ν.
First, we consider the function for the largest ν = 0.7 and the largest y = µω = 0.01 for determining the derivative with respect to τ.
The expression becomes
\tau\left( e^{\frac{(0.7\tau+0.01)^2}{1.4\tau}} \operatorname{erfc}\left(\frac{0.7\tau+0.01}{\sqrt{2}\sqrt{0.7\tau}}\right) - 2 e^{\frac{(1.4\tau+0.01)^2}{1.4\tau}} \operatorname{erfc}\left(\frac{1.4\tau+0.01}{\sqrt{2}\sqrt{0.7\tau}}\right) \right) .   (300)
The derivative with respect to τ is
\left( \sqrt{\pi}\left( e^{\frac{(70\tau+1)^2}{14000\tau}} \left(700\tau(7\tau+20)-1\right) \operatorname{erfc}\left(\frac{70\tau+1}{20\sqrt{35}\sqrt{\tau}}\right) - 2 e^{\frac{(140\tau+1)^2}{14000\tau}} \left(2800\tau(7\tau+5)-1\right) \operatorname{erfc}\left(\frac{140\tau+1}{20\sqrt{35}\sqrt{\tau}}\right) \right) + 20\sqrt{35}\,(210\tau-1)\sqrt{\tau} \right) \left(14000\sqrt{\pi}\,\tau\right)^{-1} .   (301)
We are considering only the numerator and use again the approximation of Ren and MacKenzie [30]. The error analysis on the whole numerator gives an approximation error 97 < E < 186. Therefore we add 200 to the numerator when we use the approximation of Ren and MacKenzie [30]. We obtain the inequalities:
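As an aside (not part of the original derivation), the accuracy of an erfc approximation of this kind can be inspected numerically. The sketch below assumes the approximation has the form e^{z^2} erfc(z) ≈ 2.911 / ((2.911 − 1)√π z + √(π z^2 + 2.911^2)); only the constant 2.911 is fixed by the bounds that follow, so the exact formula should be taken from Ren and MacKenzie [30]:

```python
# Illustrative comparison (assumes numpy/scipy) of exp(z^2) * erfc(z) with a
# Ren-MacKenzie-style rational approximation; the exact formula below is an
# assumption (only the constant a = 2.911 appears in the derivation).
import numpy as np
from scipy.special import erfcx

a = 2.911
z = np.linspace(0.0, 10.0, 1001)
approx = a / ((a - 1.0) * np.sqrt(np.pi) * z + np.sqrt(np.pi * z**2 + a**2))
rel_err = np.abs(approx - erfcx(z)) / erfcx(z)
print("max relative error on [0, 10]:", float(rel_err.max()))
```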
1706.02515 | 274 | Or (7or+1 : 707 +1 a ( @Tra807 7007 (77 + 20) â 1) erfe va *(700r( )-0) (was) 1407 +1 20V35,/T 20 Et (28007(Tr +5) âlerfe ( )) + 20V35(2107 â 1) V7 < Vi 2.911(7007 (77 + 20) â y _ Va(2.911â-1)(707+1) 7Or+1 ' 2 20V35/T ryt (x sh) + 2.911 2. 2.911(28007(77 +5) â 1) 2 Vi(2.911-1)(1407+1) | 140741 1 2 20735 /7 ryt (4) + 2.911 + 20V35(2107 â 1),/7 + 200 = Vi (7007(77 + 20) â 1) (20- V35 - 2.911/7) _ V/n(2.911 â 1)(707 +1) + V0 . 2.911V35V7)" +7(707 + 1)? 2(2800r(7r +5) â 1) (20- V35- 2.911,/7) Vr(2.911 â 1)(1407 + 1) + (20: V35 - 2.911 | 1706.02515#274 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 276 | (302)
+ Ï(70Ï + 1)2
)
â
â
V72- 20 - 35 - 2.911(28007 (77 +5) â 1) vr (vaeon â1)(70r +1) + y (20 . 35 - 20llyF) + (707 + 1) ((vaeon â1)(70r +1) + (cove 2.911- vi). + -n(707 + ») -1 (vaeon â1)(1407 +1) + y (cove -2.911- vi) + m(1407 + ))
.
After applying the approximation of Ren and MacKenzie [30] and adding 200, we ï¬rst factored out 20
We now consider the numerator:
(20v35(2 Or â 1) Vr+ 200) (vem â1)(70r +1) + (20: V35 + 2. ouiy7) + n(707 +1) (303) | 1706.02515#276 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 277 | (303) (vaem â1)(1407 +1) + y (2 -V35 2.9 IVF). m (1407 ») + 2.911 - 20V35Vx(7007 (77 + 20) â 1) V7 (vaem ~1)(1407 +1) + (20 . 35 - 2.9 17). (1407 v) - Vr2- 20 - 35 - 2.911(28007 (77 +5) â 1) V7 (vaem â1)(707 + 1) + y (eo . 352.91 vi). + (707 + 0) = â 1.70658 x 10° (707 + 1)? + 1186357 79/2 + 200V35\/m(70r + 1)? + 1186357 V/(1407 + 1)? + 118635773/? + 8.60302 x 10° \/7(1407 + 1)? + 118635r77/? â 2.89498 x 10779/? â .21486 x 107 \/x(707 + 1)? + 11863577°/? + 8.8828 x 10° \/n (1407 + 1)? + 11863577°/? â 2.43651 x 10775/? â 1.46191 x 10°77/? + 2.24868 x 1077? + 94840.5./2(707 + | 1706.02515#277 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 278 | â 2.43651 x 10775/? â 1.46191 x 10°77/? + 2.24868 x 1077? + 94840.5./2(707 + 1)? + 11863577 + 47420.2/ (1407 + 1)? + 11863577 + 4818607 + 710.354V7 + 820.213,/7 /0(707 + 1)? + 1186357 + 677.432 \/n(707 + 1)? + 1186357 â 011.27 V7 /n(1407 + 1)? + 1186357 â 20V35/7 (707 + 1)? + 1186357 \/7 (1407 + 1)? + 1186357 + 200/71 (707 + 1)? + 1186357 (1407 + 1)? + 1186357 + 677.432,/7 (1407 + 1)? + 1186357 + 2294.57 = â 2.89498 x 107r9/? â 2.43651 x 10779/? â 1.46191 x 10°77/? + s (-1.70658 x 107r9/? â 1.21486 x 1077°/? + 94840.57 + 820.213/7 + 677.432) m(707 + 1)? + 1186357 + (8.60302 x 10°79/? + 8.8828 x 10°r5/? + | 1706.02515#278 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 281 | â 2.89498 x 10773/? â 2.43651 x 1077°/? â 1.46191 x 1097 7/24 (â1.70658 x 10773/? â 1.21486 x 10775/? + 820.213V1.25 + 1.25 - 94840.5 + 677.432) m(707 + 1)? + 1186357+ (8.60302 x 10°79/? + 8.8828 x 10°r5/? â 1011.27V0.8 + 1.25 - 47420.2 + 677.432) s/m(1407 + 1)? + 1186357+ (4200 3573/2 â 20V35 V7 + 200) /m(70r + 1)? + 1186357 (1407 + 1)? + 1186357+ 2.24868 x 10"r? + 710.354V1.25 + 1.25 - 481860 + 2294.57 = â 2.89498 x 10779/? â 2.43651 x 10779/? â 1.46191 x 1097 7/24 â1.70658 x 10°r3/? â 1.21486 x 1077>/? + 120145.) m(707 + 1)? + 1186357+ 8.60302 x 10°79/? + 8.8828 x 10°7°/? + | 1706.02515#281 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 282 | + 120145.) m(707 + 1)? + 1186357+ 8.60302 x 10°79/? + 8.8828 x 10°7°/? + 59048.2) m(1407 + 1)? + 1186357+ 4200V357°/? â 20V35/7 + 200) Va(70r + 1)? + 1186357 (1407 + 1)? + 11863574 2.24868 x 10°r? + 605413 = â 2.89498 x 10773/? â 2.43651 x 107r°/? â 1.46191 x 1097 7/24 8.60302 x 10°7/? + 8.8828 x 10°r°/? + 59048.2) s/196007(r + 1.94093)(7 + 0.0000262866)+ â1.70658 x 10°r3/? â 1.21486 x 1077>/? + 120145.) 9/4900 (7 + 7.73521) (7 + 0.0000263835)-+ 4200V3573/2 â 20/357 + 200) s/196007(r + 1.94093) (7 + 0.0000262866) \/49007(7 + 7.73521) (7 + 0.0000263835)+ 2.24868 x | 1706.02515#282 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 283 | (7 + 0.0000262866) \/49007(7 + 7.73521) (7 + 0.0000263835)+ 2.24868 x 10'r? + 605413 < â 2.89498 x 10773/? â 2.43651 x 107r°/? â 1.46191 x 1097 7/24 (8.60302 x 10°79/? + 8.8828 x 1087°/? + 59048.2) 196007 (7 + 1.94093)7+ (-1.70658 x 10%r9/? â 1.21486 x 10779/? + 120145.) 949007 1.00003(7 + 7.73521)7+ (4200 3573/2 â 20V35V7 + 200) 4/1960071.00003(7 + 1.94093)r s/490071.00003(r + 7.73521)T+ 2.24868 x 10°r? + 605413 = â 2.89498 x 107r3/? â 2.43651 x 1077>/? â 1.46191 x 1097/24 | 1706.02515#283 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02515 | 286 | 2.89498 x 10°7r3/? â 2.43651 x 1077°/? â 1.46191 x 109r7/? + 2.24868 x 1077? + 605413 = â 4.84561 x 1077/2 + 4.07198 x 10°7°/? â 1.46191 x 10977/2â 4.66103 x 10°? â 2.34999 x 10°7?+ 3.29718 x 10°r + 6.97241 x 10â \/7 + 605413 < 60541373/? 0.83/2 4.07198 x 109r°/? â 1.46191 x 10°77/?â 3.29718 x LO" /7r 6.97241 x 10% r/r V0.8 0.8 73/2 (â4.66103 x 1083/2 â 1.46191 x 1097? â 2.34999 x 10°V/7+ â 4.84561 x 1073/24 4.66103 x 10°? â 2.34999 x 10°7? 4 4.07198 x 10°r + 7.64087 x 107) < 7 7 ee (~s.00103 x 10%r4/2 â 1.46191 x 10%7? 4 TOAST x10" V7 v0.8 | 1706.02515#286 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
-4.14199 \times 10^{7}\,\tau^{2} < 0 .
First we expanded the term (multiplied it out). Then we put the terms multiplied by the same square root into brackets. The next inequality sign stems from inserting the maximal value of 1.25 for τ into some positive terms and the value of 0.8 into negative terms. These terms are then expanded at the =-sign. The next equality factors the terms under the square root. We decreased the negative term by setting τ = τ + 0.0000263835 under the root. We increased positive terms by setting τ + 0.000026286 = 1.00003τ and τ + 0.000026383 = 1.00003τ under the root for positive terms. The positive terms are increased, since (0.8 + 0.000026383)/0.8 < 1.00003, thus τ + 0.000026286 < τ + 0.000026383 < 1.00003τ. For the next inequality we decreased negative terms by inserting τ = 0.8 and increased positive terms by inserting τ = 1.25. The next equality expands the terms. We use the upper bound of 1.25 and the lower bound of 0.8 to obtain terms with corresponding exponents of τ. For the last ≤-sign we used the function
-1.46191 \times 10^{9}\,\tau^{3/2} + 4.07198 \times 10^{9}\sqrt{\tau} - 4.66103 \times 10^{8}\,\tau - 2.26457 \times 10^{9} .   (304)
The derivative of this function is
-2.19286 \times 10^{9}\sqrt{\tau} + \frac{2.03599 \times 10^{9}}{\sqrt{\tau}} - 4.66103 \times 10^{8}   (305)
and the second order derivative is
-\frac{1.01799 \times 10^{9}}{\tau^{3/2}} - \frac{1.09643 \times 10^{9}}{\sqrt{\tau}} < 0 .   (306)
The derivative at 0.8 is smaller than zero:
-2.19286 \times 10^{9}\sqrt{0.8} - 4.66103 \times 10^{8} + \frac{2.03599 \times 10^{9}}{\sqrt{0.8}} = -1.51154 \times 10^{8} < 0 .   (307)
Since the second order derivative is negative, the derivative decreases with increasing τ. Therefore the derivative is negative for all values of τ that we consider, that is, the function Eq. (304) is strictly monotonically decreasing. The maximum of the function Eq. (304) is therefore at τ = 0.8. We inserted 0.8 to obtain the maximum.
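This step can be reproduced numerically; the following sketch (an illustration with NumPy, not part of the proof) evaluates the derivative in Eq. (305) at τ = 0.8 and confirms that Eq. (304) decreases on [0.8, 1.25]:

```python
# Illustrative check (assumes numpy): the bound of Eq. (304) is decreasing on
# [0.8, 1.25]; its derivative, Eq. (305), is already negative at tau = 0.8.
import numpy as np

def f304(tau):
    return -1.46191e9 * tau**1.5 + 4.07198e9 * np.sqrt(tau) - 4.66103e8 * tau - 2.26457e9

def f305(tau):  # derivative of f304
    return -2.19286e9 * np.sqrt(tau) + 2.03599e9 / np.sqrt(tau) - 4.66103e8

print("Eq. (305) at tau = 0.8:", f305(0.8))  # approximately -1.51e8 < 0
tau = np.linspace(0.8, 1.25, 451)
print("Eq. (304) decreasing:", bool(np.all(np.diff(f304(tau)) < 0)))
print("maximum of Eq. (304), attained at tau = 0.8:", f304(0.8))
```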
Consequently, the derivative of
\tau\left( e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2 e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \right)   (308)
with respect to τ is smaller than zero for maximal ν = 0.7.
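The same conclusion can be checked directly, without the chain of bounds; the sketch below is an illustration only (it assumes SciPy) and evaluates the expression in Eq. (301) on a grid:

```python
# Illustrative check (assumes numpy/scipy): the derivative in Eq. (301)
# (nu = 0.7, mu*omega = 0.01) is negative for tau in [0.8, 1.25].
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z^2) * erfc(z)

def d_tau_eq301(tau):
    r = np.sqrt(tau)
    a = (70 * tau + 1) / (20 * np.sqrt(35) * r)
    b = (140 * tau + 1) / (20 * np.sqrt(35) * r)
    # exp((70*tau+1)^2 / (14000*tau)) * erfc(a) equals erfcx(a), since a^2 is that exponent.
    num = (np.sqrt(np.pi) * ((700 * tau * (7 * tau + 20) - 1) * erfcx(a)
                             - 2 * (2800 * tau * (7 * tau + 5) - 1) * erfcx(b))
           + 20 * np.sqrt(35) * (210 * tau - 1) * r)
    return num / (14000 * np.sqrt(np.pi) * tau)

tau = np.linspace(0.8, 1.25, 451)
print("Eq. (301) < 0 on [0.8, 1.25]:", bool(np.all(d_tau_eq301(tau) < 0)))
```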
Next, we consider the function for the largest ν = 0.16 and the largest y = µω = 0.01 for determining the derivative with respect to τ.
The expression becomes
\tau\left( e^{\frac{(0.16\tau+0.01)^2}{0.32\tau}} \operatorname{erfc}\left(\frac{0.16\tau+0.01}{\sqrt{2}\sqrt{0.16\tau}}\right) - 2 e^{\frac{(0.32\tau+0.01)^2}{0.32\tau}} \operatorname{erfc}\left(\frac{0.32\tau+0.01}{\sqrt{2}\sqrt{0.16\tau}}\right) \right) .   (309)
The derivative with respect to τ is
\left( \left( e^{\frac{(16\tau+1)^2}{3200\tau}} \left(128\tau(2\tau+25)-1\right) \operatorname{erfc}\left(\frac{16\tau+1}{40\sqrt{2}\sqrt{\tau}}\right) - 2 e^{\frac{(32\tau+1)^2}{3200\tau}} \left(128\tau(8\tau+25)-1\right) \operatorname{erfc}\left(\frac{32\tau+1}{40\sqrt{2}\sqrt{\tau}}\right) \right)\sqrt{\pi} + 40\sqrt{2}\,(48\tau-1)\sqrt{\tau} \right) \left(3200\sqrt{\pi}\,\tau\right)^{-1} .   (310)
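Analogously to the ν = 0.7 case, the sign of this derivative can be checked numerically before following the chain of bounds; the sketch below is an illustration only (it assumes SciPy):

```python
# Illustrative check (assumes numpy/scipy): the derivative in Eq. (310)
# (nu = 0.16, mu*omega = 0.01) is negative for tau in [0.8, 1.25].
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z^2) * erfc(z)

def d_tau_eq310(tau):
    r = np.sqrt(tau)
    a = (16 * tau + 1) / (40 * np.sqrt(2) * r)
    b = (32 * tau + 1) / (40 * np.sqrt(2) * r)
    num = (np.sqrt(np.pi) * ((128 * tau * (2 * tau + 25) - 1) * erfcx(a)
                             - 2 * (128 * tau * (8 * tau + 25) - 1) * erfcx(b))
           + 40 * np.sqrt(2) * (48 * tau - 1) * r)
    return num / (3200 * np.sqrt(np.pi) * tau)

tau = np.linspace(0.8, 1.25, 451)
print("Eq. (310) < 0 on [0.8, 1.25]:", bool(np.all(d_tau_eq310(tau) < 0)))
```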
We are considering only the numerator and use again the approximation of Ren and MacKenzie [30]. The error analysis on the whole numerator gives an approximation error 1.1 < E < 12. Therefore we add 20 to the numerator when we use the approximation of Ren and MacKenzie [30]. We obtain the inequalities:
1706.02515 | 291 | vi(¢ OSs (1287 (2 27 + 25) 1) ert ( 16741 ) 40V 2/7 327 +1 mas) + 40V/2(487r â1)/7 < (32741)? . 2e~ 32007 (1287 (87 + 25) â 1) erfe ( 2.911(1287(27 + 25) â 1) Vr 2 Vm(2.911-1)(167+1) | 16741 I 2 40V2/7 ryt (a2¢4) + 2.911 2+ 2.911(1287(87 + 25) â 1) 2 Va(2.911â1)(327+1) , 32741 j 2 40V2V7 ryt (224) + 2.911 + 40V/2(487 â 1) V7 +20 = (1287 (27 + 25) â 1) (40V22.911,/r) Jn (2.911 â 1)(167 + 1) + \ (4ov2.911V7)" + (167 + 1)? 2(1287(87 + 25) â 1) (40,/22.911/7) Vn(2.911 â 1)(327 +1) + \(4ov2.911V7)" + (327 + 1)? 40V/2(487 â 1) /7 +20 | 1706.02515#291 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
After applying the approximation of Ren and MacKenzie [30] and adding 20, we first factored out \(40\sqrt{2}\sqrt{\tau}\), so that each fraction in (311) obtains the denominator \(\sqrt{\pi}(2.911-1)(16\tau+1)+\sqrt{27116.5\,\tau+\pi(16\tau+1)^{2}}\) (respectively with \(32\tau+1\)), where \(27116.5=(40\sqrt{2}\cdot 2.911)^{2}\). These denominators are positive, so after bringing all terms to the common denominator the sign of (311) is determined by the numerator. We now consider the numerator:
\[
\big(40\sqrt{2}(48\tau-1)\sqrt{\tau}+20\big)\Big(\sqrt{\pi}(2.911-1)(16\tau+1)+\sqrt{27116.5\,\tau+\pi(16\tau+1)^{2}}\Big)\Big(\sqrt{\pi}(2.911-1)(32\tau+1)+\sqrt{27116.5\,\tau+\pi(32\tau+1)^{2}}\Big)
\]
\[
{}+\sqrt{\pi}\,2.911\cdot 40\sqrt{2}\sqrt{\tau}\,\big(128\tau(2\tau+25)-1\big)\Big(\sqrt{\pi}(2.911-1)(32\tau+1)+\sqrt{27116.5\,\tau+\pi(32\tau+1)^{2}}\Big)
\]
\[
{}-2\sqrt{\pi}\,2.911\cdot 40\sqrt{2}\sqrt{\tau}\,\big(128\tau(8\tau+25)-1\big)\Big(\sqrt{\pi}(2.911-1)(16\tau+1)+\sqrt{27116.5\,\tau+\pi(16\tau+1)^{2}}\Big) . \quad (312)
\]
Multiplying (312) out and putting the terms that are multiplied by the same square root into brackets gives an expression whose coefficients are polynomials in \(\sqrt{\tau}\); its purely polynomial part contains, in particular, the negative terms \(-3.16357\cdot10^{6}\tau^{3/2}-608588\,\tau^{5/2}-8.34635\cdot10^{6}\tau^{7/2}\).
The bracketed coefficients of \(\sqrt{27116.5\,\tau+\pi(16\tau+1)^{2}}\), of \(\sqrt{27116.5\,\tau+\pi(32\tau+1)^{2}}\), and of their product are bounded from above by inserting the maximal value \(\tau=1.25\) into their positive summands and the minimal value \(\tau=0.8\) into their negative summands.
The quadratics under the square roots factor as
\[
\pi(16\tau+1)^{2}+27116.5\,\tau=256\pi(\tau+33.8415)(\tau+0.000115428),\qquad \pi(32\tau+1)^{2}+27116.5\,\tau=1024\pi(\tau+8.49155)(\tau+0.000115004) .
\]
For square roots that enter with a negative sign, the small offsets are decreased by replacing \(\tau+0.000115\ldots\) with \(\tau\); for square roots that enter with a positive sign, they are increased by replacing \(\tau+0.000115\ldots\) with \(1.00014\,\tau\). Both replacements can only increase the whole expression.
Inserting \(\tau=0.8\) into the remaining negative terms and \(\tau=1.25\) into the remaining positive terms removes the factors \(\sqrt{\tau+8.49155}\) and \(\sqrt{\tau+33.8415}\), so that a polynomial in \(\sqrt{\tau}\) remains.
The exponents of \(\tau\) in this polynomial are aligned with the bounds \(0.8\leqslant\tau\leqslant1.25\); for example, the positive term \(8.01543\cdot10^{7}\tau^{5/2}\) is bounded by \(8.01543\cdot10^{7}\sqrt{1.25}\,\tau^{2}\), which is absorbed by the negative term \(-1.44725\cdot10^{8}\tau^{2}\).
Altogether the numerator (312) is bounded from above by
\[
-3.1311\cdot10^{6}\tau^{3/2}-8.34635\cdot10^{6}\tau^{7/2}-1.13691\cdot10^{7}\tau^{3}-5.51094\cdot10^{7}\tau^{2}\;<\;0 .
\]
First we expanded the term (multiplied it out), then we put the terms that are multiplied by the same square root into brackets. The next inequality stems from inserting the maximal value 1.25 of \(\tau\) into some positive terms and the value 0.8 into negative terms; these terms are then expanded. The next equality factors the terms under the square roots. We decreased the negative terms by replacing \(\tau+0.00011542\) with \(\tau\) under the root, and increased the positive terms by replacing \(\tau+0.00011542\) and \(\tau+0.000115004\) with \(1.00014\,\tau\) under the root, since \(\tau+0.000115004\leqslant\tau+0.00011542\leqslant1.00014\,\tau\) on the considered domain. For the next inequality we decreased negative terms by inserting \(\tau=0.8\) and increased positive terms by inserting \(\tau=1.25\). The next equality expands the terms. We use the upper bound of 1.25 and the lower bound of 0.8 to obtain terms with corresponding exponents of \(\tau\).

Consequently, the derivative of
\[
\tau\left(e^{\frac{(\mu\omega+\nu\tau)^{2}}{2\nu\tau}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)-2e^{\frac{(\mu\omega+2\nu\tau)^{2}}{2\nu\tau}}\operatorname{erfc}\!\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)\right) \quad (313)
\]
with respect to \(\tau\) is smaller than zero for maximal \(\nu=0.16\).
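This claim can also be checked numerically. The following sketch is an illustration only and not part of the proof; it assumes NumPy and SciPy are available, takes (313) with \(\nu=0.16\) and \(\mu\omega=0.01\), and evaluates the derivative by central finite differences (all helper names are ad hoc):

```python
# Numerical spot check: the derivative of (313) w.r.t. tau for nu = 0.16,
# mu*omega = 0.01 should be negative on 0.8 <= tau <= 1.25.
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z**2) * erfc(z)

nu, y = 0.16, 0.01  # y stands for mu*omega

def g(tau):
    # (313): tau * (e^{a^2} erfc(a) - 2 e^{b^2} erfc(b))
    a = (y + nu * tau) / np.sqrt(2.0 * nu * tau)
    b = (y + 2.0 * nu * tau) / np.sqrt(2.0 * nu * tau)
    return tau * (erfcx(a) - 2.0 * erfcx(b))

tau = np.linspace(0.8, 1.25, 1001)
h = 1e-6
dg = (g(tau + h) - g(tau - h)) / (2.0 * h)  # central finite differences
print("max of dg/dtau on [0.8, 1.25]:", dg.max())  # the proof asserts this is < 0
```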
Next, we consider the function for the largest \(\nu=0.24\) and the largest \(y=\mu\omega=0.01\) for determining the derivative with respect to \(\tau\). However, we assume \(0.9<\tau\) in order to restrict the domain of \(\tau\).

The expression becomes
\[
\tau\left(e^{\frac{\left(\frac{24\tau+1}{100}\right)^{2}}{\frac{48\tau}{100}}}\operatorname{erfc}\!\left(\frac{\frac{24\tau+1}{100}}{\sqrt{2}\sqrt{\frac{24\tau}{100}}}\right)-2e^{\frac{\left(\frac{48\tau+1}{100}\right)^{2}}{\frac{48\tau}{100}}}\operatorname{erfc}\!\left(\frac{\frac{48\tau+1}{100}}{\sqrt{2}\sqrt{\frac{24\tau}{100}}}\right)\right) . \quad (314)
\]
The derivative with respect to \(\tau\) is
\[
\frac{1}{4800\sqrt{\pi}\,\tau}\left(\sqrt{\pi}\Big(e^{\frac{(24\tau+1)^{2}}{4800\tau}}\big(192\tau(3\tau+25)-1\big)\operatorname{erfc}\!\Big(\tfrac{24\tau+1}{40\sqrt{3}\sqrt{\tau}}\Big)-2e^{\frac{(48\tau+1)^{2}}{4800\tau}}\big(192\tau(12\tau+25)-1\big)\operatorname{erfc}\!\Big(\tfrac{48\tau+1}{40\sqrt{3}\sqrt{\tau}}\Big)\Big)+40\sqrt{3}(72\tau-1)\sqrt{\tau}\right) . \quad (315)
\]
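The Ren–MacKenzie approximation is applied again in the next step. Its accuracy over the range of arguments that occur here (roughly \(0.25\leqslant x\leqslant 0.8\)) can be inspected with the following sketch (an illustration only, assuming SciPy; the 2.911 expression below is the approximation as used in this appendix):

```python
# Compare e^{x^2} erfc(x) with the Ren-MacKenzie style approximation
# 2.911 / ((2.911 - 1) sqrt(pi) x + sqrt(pi x^2 + 2.911^2)).
import numpy as np
from scipy.special import erfcx

def ren_mackenzie(x):
    c = 2.911
    return c / ((c - 1.0) * np.sqrt(np.pi) * x + np.sqrt(np.pi * x**2 + c**2))

x = np.linspace(0.25, 0.8, 551)
rel_dev = np.abs(ren_mackenzie(x) - erfcx(x)) / erfcx(x)
print("max relative deviation on [0.25, 0.8]:", rel_dev.max())
```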
Applying the approximation of Ren and MacKenzie [30] to both erfc terms and adding 32, the numerator of (315) is bounded by
\[
\sqrt{\pi}\left(\frac{2.911\big(192\tau(3\tau+25)-1\big)}{\sqrt{\pi}(2.911-1)\frac{24\tau+1}{40\sqrt{3}\sqrt{\tau}}+\sqrt{\pi\big(\frac{24\tau+1}{40\sqrt{3}\sqrt{\tau}}\big)^{2}+2.911^{2}}}-\frac{2\cdot2.911\big(192\tau(12\tau+25)-1\big)}{\sqrt{\pi}(2.911-1)\frac{48\tau+1}{40\sqrt{3}\sqrt{\tau}}+\sqrt{\pi\big(\frac{48\tau+1}{40\sqrt{3}\sqrt{\tau}}\big)^{2}+2.911^{2}}}\right)+40\sqrt{3}(72\tau-1)\sqrt{\tau}+32 .
\]
As before we factor \(40\sqrt{3}\sqrt{\tau}\) out of the denominators, which turns them into \(\sqrt{\pi}(2.911-1)(24\tau+1)+\sqrt{40674.8\,\tau+\pi(24\tau+1)^{2}}\) and \(\sqrt{\pi}(2.911-1)(48\tau+1)+\sqrt{40674.8\,\tau+\pi(48\tau+1)^{2}}\), where \(40674.8=(40\sqrt{3}\cdot2.911)^{2}\).
Since these denominators are positive, the sign is again determined by the numerator after bringing everything to the common denominator:
\[
\big(40\sqrt{3}(72\tau-1)\sqrt{\tau}+32\big)\Big(\sqrt{\pi}(2.911-1)(24\tau+1)+\sqrt{40674.8\,\tau+\pi(24\tau+1)^{2}}\Big)\Big(\sqrt{\pi}(2.911-1)(48\tau+1)+\sqrt{40674.8\,\tau+\pi(48\tau+1)^{2}}\Big)
\]
\[
{}+\sqrt{\pi}\,2.911\cdot40\sqrt{3}\sqrt{\tau}\,\big(192\tau(3\tau+25)-1\big)\Big(\sqrt{\pi}(2.911-1)(48\tau+1)+\sqrt{40674.8\,\tau+\pi(48\tau+1)^{2}}\Big)
\]
\[
{}-2\sqrt{\pi}\,2.911\cdot40\sqrt{3}\sqrt{\tau}\,\big(192\tau(12\tau+25)-1\big)\Big(\sqrt{\pi}(2.911-1)(24\tau+1)+\sqrt{40674.8\,\tau+\pi(24\tau+1)^{2}}\Big) .
\]
Multiplying out and grouping terms by their common square roots, the purely polynomial part now contains the negative terms \(-5.81185\cdot10^{6}\tau^{3/2}-1.67707\cdot10^{6}\tau^{5/2}-3.44998\cdot10^{7}\tau^{7/2}\).
The bracketed coefficients of \(\sqrt{40674.8\,\tau+\pi(24\tau+1)^{2}}\), of \(\sqrt{40674.8\,\tau+\pi(48\tau+1)^{2}}\), and of their product are bounded from above by inserting \(\tau=1.25\) into their positive summands and \(\tau=0.9\) into their negative summands.
The quadratics under the square roots factor as
\[
\pi(24\tau+1)^{2}+40674.8\,\tau=576\pi(\tau+22.561)(\tau+0.0000769518),\qquad \pi(48\tau+1)^{2}+40674.8\,\tau=2304\pi(\tau+5.66103)(\tau+0.0000766694) .
\]
For square roots entering with a negative sign the offsets are decreased by replacing \(\tau+0.0000769\ldots\) with \(\tau\); for square roots entering with a positive sign they are increased by replacing \(\tau+0.0000769\ldots\) with \(1.0001\,\tau\). Both replacements can only increase the expression.
Inserting \(\tau=0.9\) into the remaining negative terms and \(\tau=1.25\) into the remaining positive terms removes the factors \(\sqrt{\tau+5.66103}\) and \(\sqrt{\tau+22.561}\), leaving a polynomial in \(\sqrt{\tau}\).
Aligning the exponents of \(\tau\) with the bounds \(0.9\leqslant\tau\leqslant1.25\) (for example, \(2.29933\cdot10^{8}\tau^{5/2}\leqslant2.29933\cdot10^{8}\sqrt{1.25}\,\tau^{2}\), which is absorbed by \(-3.19193\cdot10^{8}\tau^{2}\)), only negative terms remain, so the numerator is smaller than zero.
First we expanded the term (multiplied it out), then we put the terms that are multiplied by the same square root into brackets. The next inequality stems from inserting the maximal value 1.25 of \(\tau\) into some positive terms and the value 0.9 into negative terms; these terms are then expanded. The next equality factors the terms under the square roots. We decreased the negative terms by replacing \(\tau+0.0000769518\) with \(\tau\) under the root, and increased the positive terms by replacing \(\tau+0.0000769518\) and \(\tau+0.0000766694\) with \(1.0000962\,\tau\) under the root, since \(\tau+0.0000766694\leqslant\tau+0.0000769518\leqslant1.0000962\,\tau\) on the considered domain. For the next inequality we decreased negative terms by inserting \(\tau=0.9\) and increased positive terms by inserting \(\tau=1.25\). The next equality expands the terms. We use the upper bound of 1.25 and the lower bound of 0.9 to obtain terms with corresponding exponents of \(\tau\).
Consequently, the derivative of
$$e^{\frac{(\mu\omega+\nu\tau)^{2}}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \;-\; 2\, e^{\frac{(\mu\omega+2\nu\tau)^{2}}{2\nu\tau}} \operatorname{erfc}\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) \tag{318}$$
with respect to τ is smaller than zero for maximal ν = 0.24 and the domain 0.9 ≤ τ ≤ 1.25.
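This claim lends itself to a quick numerical spot-check. The sketch below is not part of the proof: it evaluates the expression of Eq. (318) as reconstructed above using `scipy.special.erfcx` (which computes $e^{z^{2}}\operatorname{erfc}(z)$ stably) and estimates its derivative with respect to τ by central differences; the grid and the assumed range −0.01 ≤ μω ≤ 0.01 are illustrative choices, not taken from the text.

```python
# Numerical spot-check (illustration only): evaluate the expression of Eq. (318)
# and a central-difference estimate of its derivative with respect to tau.
# The grid below and the range used for mu*omega are assumptions for illustration.
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z**2) * erfc(z), numerically stable


def expr_318(mw, nu, tau):
    """e^{(mw+nt)^2/(2 nt)} erfc((mw+nt)/sqrt(2 nt)) - 2 e^{(mw+2nt)^2/(2 nt)} erfc((mw+2nt)/sqrt(2 nt))."""
    nt = nu * tau
    z1 = (mw + nt) / np.sqrt(2.0 * nt)
    z2 = (mw + 2.0 * nt) / np.sqrt(2.0 * nt)
    return erfcx(z1) - 2.0 * erfcx(z2)


def d_dtau(mw, nu, tau, h=1e-6):
    """Central-difference estimate of the tau-derivative of expr_318."""
    return (expr_318(mw, nu, tau + h) - expr_318(mw, nu, tau - h)) / (2.0 * h)


largest = max(
    d_dtau(mw, 0.24, tau)
    for mw in np.linspace(-0.01, 0.01, 5)
    for tau in np.linspace(0.9, 1.25, 50)
)
# The text asserts that this derivative is negative on the whole domain.
print("largest tau-derivative found on the grid:", largest)
```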
Lemma 47. In the domain $-0.01 \leq y \leq 0.01$ and $0.64 \leq x \leq 1.875$, the function $f(x, y) = e^{\frac{(x+y)^{2}}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)$ has a global maximum at $x = 0.64$, $y = -0.01$ and a global minimum at $x = 1.875$, $y = 0.01$.
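Before the formal proof, a brute-force illustration (not from the paper) of where the extrema fall; it assumes the reconstructed form of $f$ given above, and the grid resolution is an arbitrary choice.

```python
# Brute-force illustration (not part of the proof): locate the maximum and
# minimum of f(x, y) = e^{(x+y)^2/(2x)} erfc((x+y)/(sqrt(2) sqrt(x))) on the
# stated domain. The grid resolution is an arbitrary choice.
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z**2) * erfc(z)

xs = np.linspace(0.64, 1.875, 200)
ys = np.linspace(-0.01, 0.01, 41)
X, Y = np.meshgrid(xs, ys, indexing="ij")
F = erfcx((X + Y) / (np.sqrt(2.0) * np.sqrt(X)))  # equals e^{(x+y)^2/(2x)} erfc(...)

imax = np.unravel_index(np.argmax(F), F.shape)
imin = np.unravel_index(np.argmin(F), F.shape)
print("maximum at x = %.3f, y = %.3f" % (X[imax], Y[imax]))  # corner x = 0.64,  y = -0.01
print("minimum at x = %.3f, y = %.3f" % (X[imin], Y[imin]))  # corner x = 1.875, y = 0.01
```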
Proof. $f(x, y) = e^{\frac{(x+y)^{2}}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)$ is strictly monotonically decreasing in $x$, since its derivative with respect to $x$ is negative:
$$\frac{\partial}{\partial x}\left(e^{\frac{(x+y)^{2}}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right)\right) \;=\; \frac{(x-y)\left(\sqrt{\pi}\,(x+y)\, e^{\frac{(x+y)^{2}}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - \sqrt{2}\sqrt{x}\right)}{2\sqrt{\pi}\, x^{2}} \;<\; 0 \tag{319}$$

$$\Longleftrightarrow\quad \sqrt{\pi}\,(x+y)\, e^{\frac{(x+y)^{2}}{2x}} \operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) - \sqrt{2}\sqrt{x} \;<\; 0,$$

since $x - y > 0$ on the domain. Bounding the left-hand side of the last inequality from above and evaluating the bound at the border of the domain gives $-0.334658 < 0$.
The two last inequalities come from applying the Abramowitz bounds [22] and from the fact that the bounded expression does not change monotonicity in the domain; hence the maximum must be found at the border.
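The Abramowitz bounds referred to here are presumably the standard estimates $\frac{2}{z+\sqrt{z^{2}+2}} < \sqrt{\pi}\, e^{z^{2}} \operatorname{erfc}(z) \leq \frac{2}{z+\sqrt{z^{2}+4/\pi}}$ for $z \geq 0$; the following sketch checks them numerically on an arbitrary test grid.

```python
# Numerical sanity check (illustration only) of the Abramowitz-Stegun bounds
#   2/(z + sqrt(z^2 + 2)) < sqrt(pi) * e^{z^2} * erfc(z) <= 2/(z + sqrt(z^2 + 4/pi))
# for z >= 0. The test grid is an arbitrary choice.
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z**2) * erfc(z)

z = np.linspace(0.0, 10.0, 10001)
middle = np.sqrt(np.pi) * erfcx(z)
lower = 2.0 / (z + np.sqrt(z**2 + 2.0))
upper = 2.0 / (z + np.sqrt(z**2 + 4.0 / np.pi))
assert np.all(lower < middle) and np.all(middle <= upper + 1e-12)
print("Abramowitz bounds hold on the test grid")
```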
For $x = 0.64$, which maximizes the function, $f(x, y)$ is monotonic in $y$, because its derivative w.r.t. $y$ at $x = 0.64$ is
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |