| doi (string, 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, nullable) | journal_ref (string, 8–194, nullable) | primary_category (string, 5–17) | published (string, 8) | updated (string, 8) | references (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1706.02515 | 76 | • In the equalities we solved the square root and factored out the resulting terms (2(2x + y) + 1) and (2(x + y) + 0.878).
• We set α = α01 and multiplied out. Thereafter we also factored out x in the numerator. Finally, a quadratic equation was solved.
The sub-function has its minimal value for minimal x = ντ = 1.5 · 0.8 = 1.2 and minimal y = µω = −1 · 0.1 = −0.1. We further minimize this function by evaluating its exponential and erfc terms at µω = −0.1 and ντ = 1.2.
We compute the minimum of the term in brackets of ξ̃(µ, ω, ν, τ, λ, α) in Eq. (25): | 1706.02515#76 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation functions of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows one to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
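For reference in the derivations below: the bracket terms analyzed here all come from the mean and variance mapping of Eq. (4) and Eq. (5). The following is a minimal sketch (not taken from the paper's repository) that writes out the Gaussian integrals of the SELU directly; it assumes NumPy/SciPy, and the constants `LAMBDA_01`/`ALPHA_01` are the usual SELU constants rounded. It can be used to spot-check the fixed point (µ, ν) = (0, 1).

```python
# Sketch: the mean/variance mapping behind Eq. (4) and Eq. (5), written from the
# Gaussian integrals of the SELU activation. Assumes NumPy and SciPy.
import numpy as np
from scipy.special import erfc

LAMBDA_01 = 1.0507009873554805  # assumed SELU scale lambda_01
ALPHA_01 = 1.6732632423543772   # assumed SELU alpha_01

def mu_tilde(mu, omega, nu, tau, lam=LAMBDA_01, alpha=ALPHA_01):
    """Mean of SELU(z) for z ~ N(mu*omega, nu*tau)."""
    m, s2 = mu * omega, nu * tau
    s = np.sqrt(s2)
    return 0.5 * lam * (
        m * erfc(-m / (np.sqrt(2) * s))
        + alpha * np.exp(m + s2 / 2) * erfc((m + s2) / (np.sqrt(2) * s))
        - alpha * erfc(m / (np.sqrt(2) * s))
        + np.sqrt(2 / np.pi) * s * np.exp(-m**2 / (2 * s2))
    )

def xi_tilde(mu, omega, nu, tau, lam=LAMBDA_01, alpha=ALPHA_01):
    """Second moment of SELU(z) for z ~ N(mu*omega, nu*tau)."""
    m, s2 = mu * omega, nu * tau
    s = np.sqrt(s2)
    return lam**2 * (
        0.5 * (m**2 + s2) * erfc(-m / (np.sqrt(2) * s))
        + m * s / np.sqrt(2 * np.pi) * np.exp(-m**2 / (2 * s2))
        + 0.5 * alpha**2 * (
            np.exp(2 * m + 2 * s2) * erfc((m + 2 * s2) / (np.sqrt(2) * s))
            - 2 * np.exp(m + s2 / 2) * erfc((m + s2) / (np.sqrt(2) * s))
            + erfc(m / (np.sqrt(2) * s))
        )
    )

def nu_tilde(mu, omega, nu, tau, lam=LAMBDA_01, alpha=ALPHA_01):
    """Variance mapping: nu_tilde = xi_tilde - mu_tilde^2."""
    return xi_tilde(mu, omega, nu, tau, lam, alpha) - mu_tilde(mu, omega, nu, tau, lam, alpha)**2

if __name__ == "__main__":
    # Fixed point claimed by the theory: (mu, nu) = (0, 1) for omega = 0, tau = 1.
    print(mu_tilde(0.0, 0.0, 1.0, 1.0), nu_tilde(0.0, 0.0, 1.0, 1.0))  # approx. (0, 1)
```

The later sketches in this section assume this code is saved as a hypothetical module `selu_map.py`.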
1706.02515 | 77 | We compute the minimum of the term in brackets of ξ̃(µ, ω, ν, τ, λ, α) in Eq. (25); the bracket is a combination of µω- and α²-weighted exponential and erfc terms plus √(2/π)√(ντ), and its minimum is evaluated at µω = −0.1 and ντ = 1.2.
Therefore the term in brackets of Eq. (25) is larger than zero. Thus, ∂ξ̃(µ, ω, ν, τ, λ, α)/∂µ has the sign of ω. Since ξ̃ is a function of µω (these variables only appear as this product), we have for x = µω | 1706.02515#77 |
1706.02515 | 78 | ∂ξ̃/∂µ = (∂ξ̃/∂x)(∂x/∂µ) = (∂ξ̃/∂x) ω (36)
and
∂ξ̃/∂ω = (∂ξ̃/∂x)(∂x/∂ω) = (∂ξ̃/∂x) µ . (37)
Since ∂ξ̃/∂µ has the sign of ω, it follows that
(µ/ω) ∂/∂µ ξ̃(µ, ω, ν, τ, λ01, α01) = ∂/∂ω ξ̃(µ, ω, ν, τ, λ01, α01) , (38)
so ∂ξ̃/∂ω has the sign of µ. Therefore
∂/∂ω g(µ, ω, ν, τ, λ01, α01) = ∂/∂ω ξ̃(µ, ω, ν, τ, λ01, α01) (39) | 1706.02515#78 |
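A quick numerical spot-check of the chain-rule identity behind Eq. (36)–(38) is sketched below, under the assumption that the earlier mapping sketch is saved as the hypothetical module `selu_map.py`.

```python
# Spot-check: xi_tilde depends on mu and omega only through x = mu*omega, hence
# d(xi_tilde)/d(omega) = (mu/omega) * d(xi_tilde)/d(mu).
from selu_map import xi_tilde  # hypothetical module holding the earlier sketch

mu, omega, nu, tau, h = 0.06, -0.08, 1.1, 0.9, 1e-6
d_mu = (xi_tilde(mu + h, omega, nu, tau) - xi_tilde(mu - h, omega, nu, tau)) / (2 * h)
d_omega = (xi_tilde(mu, omega + h, nu, tau) - xi_tilde(mu, omega - h, nu, tau)) / (2 * h)
print(d_omega, (mu / omega) * d_mu)  # the two values should agree
```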
1706.02515 | 79 | has the sign of µ. We now divide the µ-domain into −1 ≤ µ ≤ 0 and 0 ≤ µ ≤ 1. Analogously we divide the ω-domain into −0.1 ≤ ω ≤ 0 and 0 ≤ ω ≤ 0.1. In these domains, g is strictly monotonic.
For all domains, g is strictly monotonically decreasing in ν and strictly monotonically increasing in τ. Note that we now consider the range 3 ≤ ν ≤ 16. For the maximal value of g we set ν = 3 (we set it to 3!) and τ = 1.25.
We now consider all combinations of these domains:
• −1 ≤ µ ≤ 0 and −0.1 ≤ ω ≤ 0:
g is decreasing in µ and decreasing in ω. We set µ = −1 and ω = −0.1.
g(−1, −0.1, 3, 1.25, λ01, α01) = −0.0180173 .
• −1 ≤ µ ≤ 0 and 0 ≤ ω ≤ 0.1:
g is increasing in µ and decreasing in ω. We set µ = 0 and ω = 0.
g(0, 0, 3, 1.25, λ01, α01) = −0.148532 . (41)
• 0 ≤ µ ≤ 1 and −0.1 ≤ ω ≤ 0: | 1706.02515#79 |
1706.02515 | 81 | Therefore the maximal value of g is −0.0180173.
# A3.3 Proof of Theorem 3
First we recall Theorem 3 (Increasing ν). We consider λ = λ01, α = α01 and the two domains Ω₁⁻ = {(µ, ω, ν, τ) | −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.05 ≤ ν ≤ 0.16, 0.8 ≤ τ ≤ 1.25} and Ω₂⁻ = {(µ, ω, ν, τ) | −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.05 ≤ ν ≤ 0.24, 0.9 ≤ τ ≤ 1.25}.
The mapping of the variance ν̃(µ, ω, ν, τ, λ, α) given in Eq. (5) increases,
ν̃(µ, ω, ν, τ, λ01, α01) > ν , (44)
in both Ω₁⁻ and Ω₂⁻. All fixed points (µ, ν) of the mapping Eq. (5) and Eq. (4) ensure for 0.8 ≤ τ that ν̃ > 0.16 and for 0.9 ≤ τ that ν̃ > 0.24. Consequently, the variance mapping Eq. (5) and Eq. (4) ensures a lower bound on the variance ν. | 1706.02515#81 |
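The claim of Theorem 3 on Ω₁⁻ can be probed numerically by sampling the domain. The sketch below assumes the earlier mapping sketch is available as the hypothetical module `selu_map.py` (with its helper `nu_tilde`).

```python
# Sketch: brute-force check of nu_tilde > nu on Omega_1^- at the 16 domain corners
# plus random interior points.
import itertools
import numpy as np
from selu_map import nu_tilde  # hypothetical module with the earlier sketch

bounds = [(-0.1, 0.1), (-0.1, 0.1), (0.05, 0.16), (0.8, 1.25)]  # (mu, omega, nu, tau)
rng = np.random.default_rng(0)

points = list(itertools.product(*bounds))  # the 16 corners
points += [tuple(rng.uniform(lo, hi) for lo, hi in bounds) for _ in range(10000)]

worst = min(nu_tilde(mu, om, nu, tau) - nu for mu, om, nu, tau in points)
print("min over samples of nu_tilde - nu:", worst)  # expected to stay positive
```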
1706.02515 | 82 | Proof. The mean value theorem states that there exists a t ∈ [0, 1] for which
ξ̃(µ, ω, ν, τ, λ01, α01) − ξ̃(µ, ω, νmin, τ, λ01, α01) = ∂/∂ν ξ̃(µ, ω, ν + t(νmin − ν), τ, λ01, α01) (ν − νmin) . (45)
Therefore
ξ̃(µ, ω, ν, τ, λ01, α01) = ξ̃(µ, ω, νmin, τ, λ01, α01) + ∂/∂ν ξ̃(µ, ω, ν + t(νmin − ν), τ, λ01, α01) (ν − νmin) . (46)
Therefore we are interested in bounding the derivative of the ξ̃-mapping Eq. (13) with respect to ν: | 1706.02515#82 |
1706.02515 | 84 | The sub-term Eq. (308) enters the derivative equation with a negative sign! According to Lemma 18, the minimal value of sub-term Eq. (308) is obtained by the largest ν, by the smallest τ, and the largest y = µω = 0.01. Also the positive term 2 − erfc(µω/(√2 √(ντ))) is multiplied by τ, which is minimized by using the smallest τ. Therefore we can use the smallest τ in the whole formula to lower bound it. First we consider the domain 0.05 ≤ ν ≤ 0.16 and 0.8 ≤ τ ≤ 1.25. The factor consisting of the exponential in front of the brackets has its smallest value for e^{−0.01²/(2 · 0.05 · 0.8)}. Since erfc is monotonically decreasing, we inserted the smallest argument via erfc(−0.01/(√2 √(0.05 · 0.8))) in order to obtain the maximal negative contribution. Thus, applying Lemma 18, we obtain the lower bound on the derivative: a combination of exponential and erfc terms evaluated at these extreme values, which on this domain is at least 0.969231 (the value used in Eq. (49) below). | 1706.02515#84 |
1706.02515 | 86 | For applying the mean value theorem, we require the smallest ξ̃(µ, ω, ν, τ, λ01, α01). We follow the proof of the corresponding lemma, which shows that at the minimum y = µω must be maximal and x = ντ must be minimal. Thus, the smallest ξ̃(µ, ω, ν, τ, λ01, α01) is ξ̃(0.01, 0.01, 0.05, 0.8, λ01, α01) = 0.0662727 for 0.05 ≤ ν and 0.8 ≤ τ. Therefore the mean value theorem and the bound on (µ̃)² (Lemma 43) provide
ν̃ = ξ̃(µ, ω, ν, τ, λ01, α01) − (µ̃(µ, ω, ν, τ, λ01, α01))² ≥ (49)
0.0662727 + 0.969231(ν − 0.05) − 0.005 = 0.01281115 + 0.969231 ν = 0.08006969 · 0.16 + 0.969231 ν ≥ 0.08006969 ν + 0.969231 ν = 1.049301 ν > ν . | 1706.02515#86 |
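The arithmetic of the inequality chain in Eq. (49) can be verified directly; a minimal check:

```python
# Worked check of Eq. (49): on 0.05 <= nu <= 0.16 the lower bound
# 0.0662727 + 0.969231*(nu - 0.05) - 0.005 stays above nu itself.
import numpy as np

nu = np.linspace(0.05, 0.16, 1000)
lower = 0.0662727 + 0.969231 * (nu - 0.05) - 0.005
print((lower > nu).all(), (lower / nu).min())  # True, roughly 1.0493
```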
1706.02515 | 87 | Next we consider the domain 0.05 ≤ ν ≤ 0.24 and 0.9 ≤ τ ≤ 1.25. The factor consisting of the exponential in front of the brackets has its smallest value for e^{−0.01²/(2 · 0.05 · 0.9)}. Since erfc is monotonically decreasing, we inserted the smallest argument via erfc(−0.01/(√2 √(0.05 · 0.9))) in order to obtain the maximal negative contribution.
Thus, applying Lemma 18, we obtain the lower bound on the derivative: the same combination of exponential and erfc terms, now evaluated at ν = 0.24, τ = 0.9, µω = 0.01 (and at ν = 0.05, τ = 0.9 in the erfc argument), which gives
∂ξ̃/∂ν ≥ 0.976952 . (50) | 1706.02515#87 |
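The derivative bound 0.976952 can be probed numerically by sampling the domain and taking central differences of ξ̃ with respect to ν; a sketch assuming the hypothetical module `selu_map.py` from earlier:

```python
# Sketch: sample 0.05 <= nu <= 0.24, 0.9 <= tau <= 1.25, |mu|, |omega| <= 0.1 and
# estimate d(xi_tilde)/d(nu) by central differences.
import numpy as np
from selu_map import xi_tilde  # hypothetical module with the earlier sketch

rng = np.random.default_rng(1)
h = 1e-6
vals = []
for _ in range(20000):
    mu, om = rng.uniform(-0.1, 0.1, size=2)
    nu, tau = rng.uniform(0.05, 0.24), rng.uniform(0.9, 1.25)
    vals.append((xi_tilde(mu, om, nu + h, tau) - xi_tilde(mu, om, nu - h, tau)) / (2 * h))
print("min d(xi)/d(nu) over samples:", min(vals))  # expected to stay above about 0.976
```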
1706.02515 | 88 | For applying the mean value theorem, we require the smallest ξ̃(µ, ω, ν, τ, λ01, α01). We follow the proof of the corresponding lemma, which shows that at the minimum y = µω must be maximal and x = ντ must be minimal. Thus, the smallest ξ̃(µ, ω, ν, τ, λ01, α01) is ξ̃(0.01, 0.01, 0.05, 0.9, λ01, α01) = 0.0738404 for 0.05 ≤ ν and 0.9 ≤ τ. Therefore the mean value theorem and the bound on (µ̃)² (Lemma 43) give
ν̃ = ξ̃(µ, ω, ν, τ, λ01, α01) − (µ̃(µ, ω, ν, τ, λ01, α01))² ≥ (51)
0.0738404 + 0.976952(ν − 0.05) − 0.005 = 0.0199928 + 0.976952 ν = 0.08330333 · 0.24 + 0.976952 ν ≥ 0.08330333 ν + 0.976952 ν = 1.060255 ν > ν .
# A3.4 Lemmata and Other Tools Required for the Proofs
# A3.4.1 Lemmata for proving Theorem 1 (part 1): Jacobian norm smaller than one | 1706.02515#88 |
1706.02515 | 89 | # A3.4.1 Lemmata for proving Theorem 1 (part 1): Jacobian norm smaller than one
In this section, we show that the largest singular value of the Jacobian of the mapping g is smaller than one. Therefore, g is a contraction mapping. This is even true in a larger domain than the original Ω. We do not need to restrict τ ∈ [0.95, 1.1], but we can extend to τ ∈ [0.8, 1.25]. The range of the other variables is unchanged such that we consider the following domain throughout this section: µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25].
Jacobian of the mapping. In the following, we denote two Jacobians: (1) the Jacobian J of the mapping h : (µ, ν) ↦ (µ̃, ξ̃), and (2) the Jacobian H of the mapping g : (µ, ν) ↦ (µ̃, ν̃), because the influence of µ̃ on ν̃ is small, and many properties of the system can already be seen on J. | 1706.02515#89 |
1706.02515 | 91 | The entries of the Jacobian J of the mapping h : (µ, ν) ↦ (µ̃, ξ̃) are:
$\mathcal{J}_{11}(\mu,\omega,\nu,\tau,\lambda,\alpha) = \frac{\partial}{\partial \mu}\tilde{\mu}(\mu,\omega,\nu,\tau,\lambda,\alpha) = \frac{1}{2}\lambda\omega\left(\alpha e^{\mu\omega+\frac{\nu\tau}{2}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - \operatorname{erfc}\!\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right) + 2\right)$ (54)
$\mathcal{J}_{12}(\mu,\omega,\nu,\tau,\lambda,\alpha) = \frac{\partial}{\partial \nu}\tilde{\mu}(\mu,\omega,\nu,\tau,\lambda,\alpha) = \frac{1}{4}\lambda\tau\left(\alpha e^{\mu\omega+\frac{\nu\tau}{2}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - (\alpha-1)\sqrt{\frac{2}{\pi\nu\tau}}\, e^{-\frac{\mu^2\omega^2}{2\nu\tau}}\right)$ (55)
$\mathcal{J}_{21}(\mu,\omega,\nu,\tau,\lambda,\alpha) = \frac{\partial}{\partial \mu}\tilde{\xi}(\mu,\omega,\nu,\tau,\lambda,\alpha) = \lambda^2\omega\left(\mu\omega\left(2-\operatorname{erfc}\!\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right)\right) + \alpha^2\left(e^{2\mu\omega+2\nu\tau}\operatorname{erfc}\!\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - e^{\mu\omega+\frac{\nu\tau}{2}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)\right) + \sqrt{\frac{2}{\pi}}\sqrt{\nu\tau}\, e^{-\frac{\mu^2\omega^2}{2\nu\tau}}\right)$ (56)
$\mathcal{J}_{22}(\mu,\omega,\nu,\tau,\lambda,\alpha) = \frac{\partial}{\partial \nu}\tilde{\xi}(\mu,\omega,\nu,\tau,\lambda,\alpha) = \frac{1}{2}\lambda^2\tau\left(2-\operatorname{erfc}\!\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right) + 2\alpha^2 e^{2\mu\omega+2\nu\tau}\operatorname{erfc}\!\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - \alpha^2 e^{\mu\omega+\frac{\nu\tau}{2}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)\right)$ (57) | 1706.02515#91 |
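The four entries of Eq. (54)–(57) can be cross-checked against finite differences of µ̃ and ξ̃. A sketch under the assumption that the earlier mapping code is saved as the hypothetical module `selu_map.py` (`lam` and `alpha` are the assumed SELU constants from that sketch):

```python
# Sketch: Eq. (54)-(57) written out and compared against central differences.
import numpy as np
from scipy.special import erfc
from selu_map import mu_tilde, xi_tilde, LAMBDA_01 as lam, ALPHA_01 as alpha

def jacobian_entries(mu, om, nu, tau):
    m, s2 = mu * om, nu * tau
    s, r2 = np.sqrt(s2), np.sqrt(2.0)
    e1 = np.exp(m + s2 / 2) * erfc((m + s2) / (r2 * s))
    e2 = np.exp(2 * m + 2 * s2) * erfc((m + 2 * s2) / (r2 * s))
    g = np.sqrt(2 / np.pi) * np.exp(-m**2 / (2 * s2))
    j11 = 0.5 * lam * om * (alpha * e1 - erfc(m / (r2 * s)) + 2)                       # Eq. (54)
    j12 = 0.25 * lam * tau * (alpha * e1 - (alpha - 1) * g / s)                        # Eq. (55)
    j21 = lam**2 * om * (m * (2 - erfc(m / (r2 * s))) + alpha**2 * (e2 - e1) + g * s)  # Eq. (56)
    j22 = 0.5 * lam**2 * tau * (2 - erfc(m / (r2 * s)) + 2 * alpha**2 * e2 - alpha**2 * e1)  # Eq. (57)
    return np.array([[j11, j12], [j21, j22]])

mu, om, nu, tau, h = 0.05, -0.07, 1.2, 1.0, 1e-6
fd = np.array([
    [(mu_tilde(mu + h, om, nu, tau) - mu_tilde(mu - h, om, nu, tau)) / (2 * h),
     (mu_tilde(mu, om, nu + h, tau) - mu_tilde(mu, om, nu - h, tau)) / (2 * h)],
    [(xi_tilde(mu + h, om, nu, tau) - xi_tilde(mu - h, om, nu, tau)) / (2 * h),
     (xi_tilde(mu, om, nu + h, tau) - xi_tilde(mu, om, nu - h, tau)) / (2 * h)],
])
print(np.allclose(jacobian_entries(mu, om, nu, tau), fd, atol=1e-5))  # True
```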
1706.02515 | 93 | Proof sketch: Bounding the largest singular value of the Jacobian. If the largest singular value of the Jacobian is smaller than 1, then the spectral norm of the Jacobian is smaller than 1. Then the mapping Eq. (4) and Eq. (5) of the mean and variance to the mean and variance in the next layer is contracting.
We show that the largest singular value is smaller than 1 by evaluating the function S(µ, ω, ν, τ, λ, α) on a grid. Then we use the Mean Value Theorem to bound the deviation of the function S between grid points. Toward this end we have to bound the gradient of S with respect to (µ, ω, ν, τ). If all function values plus gradient times the deltas (differences between grid points and evaluated points) are still smaller than 1, then we have proven that the function is below 1.
The singular values of the 2 × 2 matrix
$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ (58)
are
$s_{1} = \frac{1}{2}\left(\sqrt{(a_{11}+a_{22})^2+(a_{21}-a_{12})^2} + \sqrt{(a_{11}-a_{22})^2+(a_{12}+a_{21})^2}\right)$ (59)
$s_{2} = \frac{1}{2}\left(\sqrt{(a_{11}+a_{22})^2+(a_{21}-a_{12})^2} - \sqrt{(a_{11}-a_{22})^2+(a_{12}+a_{21})^2}\right)$ . (60) | 1706.02515#93 |
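The grid-evaluation strategy described above can be mimicked numerically: build the Jacobian H of g by central differences and take its largest singular value with NumPy. A sketch, again assuming the hypothetical module `selu_map.py`:

```python
# Sketch: largest singular value of the Jacobian H of g: (mu, nu) -> (mu_tilde, nu_tilde)
# on a coarse grid of the domain used in this section.
import numpy as np
from selu_map import mu_tilde, nu_tilde  # hypothetical module with the earlier sketch

def largest_singular_value(mu, om, nu, tau, h=1e-6):
    g = lambda m, n: np.array([mu_tilde(m, om, n, tau), nu_tilde(m, om, n, tau)])
    H = np.column_stack([(g(mu + h, nu) - g(mu - h, nu)) / (2 * h),
                         (g(mu, nu + h) - g(mu, nu - h)) / (2 * h)])
    return np.linalg.svd(H, compute_uv=False)[0]

worst = max(largest_singular_value(m, w, n, t)
            for m in np.linspace(-0.1, 0.1, 5)
            for w in np.linspace(-0.1, 0.1, 5)
            for n in np.linspace(0.8, 1.5, 5)
            for t in np.linspace(0.8, 1.25, 5))
print("max singular value on the grid:", worst)  # expected to stay below 1
```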
1706.02515 | 94 | We used an explicit formula for the singular values [4]. We now set H11 = a11, H12 = a12, H21 = a21, H22 = a22 to obtain a formula for the largest singular value of the Jacobian depending on (µ, ω, ν, τ, λ, α). The formula for the largest singular value of the Jacobian is:
$S(\mu,\omega,\nu,\tau,\lambda,\alpha) = \frac{1}{2}\left(\sqrt{(\mathcal{H}_{11}+\mathcal{H}_{22})^2+(\mathcal{H}_{21}-\mathcal{H}_{12})^2} + \sqrt{(\mathcal{H}_{11}-\mathcal{H}_{22})^2+(\mathcal{H}_{12}+\mathcal{H}_{21})^2}\right)$ (61)
$= \frac{1}{2}\left(\sqrt{(\mathcal{J}_{11}+\mathcal{J}_{22}-2\tilde{\mu}\mathcal{J}_{12})^2+(\mathcal{J}_{21}-2\tilde{\mu}\mathcal{J}_{11}-\mathcal{J}_{12})^2} + \sqrt{(\mathcal{J}_{11}-\mathcal{J}_{22}+2\tilde{\mu}\mathcal{J}_{12})^2+(\mathcal{J}_{12}+\mathcal{J}_{21}-2\tilde{\mu}\mathcal{J}_{11})^2}\right) ,$
where the J are defined in Eq. (54) and we left out the dependencies on (µ, ω, ν, τ, λ, α) in order to keep the notation uncluttered, e.g. we wrote J11 instead of J11(µ, ω, ν, τ, λ, α). | 1706.02515#94 |
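The closed-form singular values of a 2 × 2 matrix used in Eq. (59)–(61) can be checked against NumPy's SVD; a small sanity check (not from the paper):

```python
# Sanity check of the explicit 2x2 singular value formula, Eq. (59)-(60).
import numpy as np

def singular_values_2x2(a11, a12, a21, a22):
    p = np.hypot(a11 + a22, a21 - a12)
    q = np.hypot(a11 - a22, a12 + a21)
    return 0.5 * (p + q), 0.5 * abs(p - q)  # abs() keeps s2 nonnegative for det < 0

rng = np.random.default_rng(0)
for _ in range(5):
    A = rng.normal(size=(2, 2))
    s_formula = singular_values_2x2(A[0, 0], A[0, 1], A[1, 0], A[1, 1])
    s_numpy = np.linalg.svd(A, compute_uv=False)
    print(np.allclose(s_formula, s_numpy))  # True
```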
1706.02515 | 95 | Bounds on the derivatives of the Jacobian entries. In order to bound the gradient of the singular value, we have to bound the derivatives of the Jacobian entries J11(µ, ω, ν, τ, λ, α), J12(µ, ω, ν, τ, λ, α), J21(µ, ω, ν, τ, λ, α), and J22(µ, ω, ν, τ, λ, α) with respect to µ, ω, ν, and τ. The values λ and α are fixed to λ01 and α01. The 16 derivatives of the 4 Jacobian entries with respect to the 4 variables are:
$\frac{\partial \mathcal{J}_{11}}{\partial \mu} = \frac{1}{2}\lambda\omega^2\left(\alpha e^{\mu\omega+\frac{\nu\tau}{2}}\operatorname{erfc}\!\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - (\alpha-1)\sqrt{\frac{2}{\pi\nu\tau}}\, e^{-\frac{\mu^2\omega^2}{2\nu\tau}}\right)$ (62) | 1706.02515#95 |
1706.02515 | 96 | The remaining derivatives of J11 and J12 with respect to ω, ν, and τ have the same structure: each is a combination of the terms e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))), e^{2µω+2ντ} erfc((µω+2ντ)/(√2 √(ντ))), and e^{−µ²ω²/(2ντ)}, with polynomial prefactors in µ, ω, ν, τ; in particular ∂J12/∂µ = ∂J11/∂ν (mixed partial derivatives of µ̃). | 1706.02515#96 |
1706.02515 | 99 | The derivatives of J21 and J22 with respect to µ, ω, ν, and τ have the same structure: combinations of e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))), e^{2µω+2ντ} erfc((µω+2ντ)/(√2 √(ντ))), and e^{−µ²ω²/(2ντ)} with polynomial prefactors; in particular ∂J22/∂µ = ∂J21/∂ν (mixed partial derivatives of ξ̃). | 1706.02515#99 |
1706.02515 | 101 | Lemma 5 (Bounds on the Derivatives). The following bounds on the absolute values of the derivatives of the Jacobian entries J11(µ, ω, ν, τ, λ, α), J12(µ, ω, ν, τ, λ, α), J21(µ, ω, ν, τ, λ, α), and J22(µ, ω, ν, τ, λ, α) with respect to µ, ω, ν, and τ hold:
|∂J11/∂µ| ≤ 0.0031049101995398316 (63)
|∂J11/∂ω| ≤ 1.055872374194189
|∂J11/∂ν| ≤ 0.031242911235461816
|∂J11/∂τ| ≤ 0.03749149348255419
|∂J12/∂µ| ≤ 0.031242911235461816
|∂J12/∂ω| ≤ 0.031242911235461816
|∂J12/∂ν| ≤ 0.21232788238624354
|∂J12/∂τ| ≤ 0.2124377655377270 | 1706.02515#101 |
1706.02515 | 102 | |∂J21/∂µ| ≤ 0.02220441024325437
|∂J21/∂ω| ≤ 1.146955401845684
|∂J21/∂ν| ≤ 0.14983446469110305
|∂J21/∂τ| ≤ 0.17980135762932363
|∂J22/∂µ| ≤ 0.14983446469110305
|∂J22/∂ω| ≤ 0.14983446469110305
|∂J22/∂ν| ≤ 1.395740052651535
|∂J22/∂τ| ≤ 2.396685907216327
Proof. See proof 39.
Bounds on the entries of the Jacobian. Lemma 6 (Bound on J11). The absolute value of the function J11 = ½ λ ω ( α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) − erfc(µω/(√2 √(ντ))) + 2 ) is bounded by |J11| ≤ 0.104497 in the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.8 ≤ ν ≤ 1.5, and 0.8 ≤ τ ≤ 1.25 for α = α01 and λ = λ01.
Proof. | 1706.02515#102 |
1706.02515 | 103 | Proof.
|J11| = | ½ λ ω ( α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) + 2 − erfc(µω/(√2 √(ντ))) ) | ≤ ½ |λ| |ω| ( |α| 0.587622 + 1.00584 ) < 0.104497 ,
where we used that (a) J11 is strictly monotonically increasing in µω and |2 − erfc(0.01/(√2 √(ντ)))| ≤ 1.00584, and (b) Lemma 47, which gives e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) ≤ 0.587622.
Lemma 7 (Bound on J12). The absolute value of the function J12 = ¼ λ τ ( α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) − (α−1) √(2/(π ντ)) e^{−µ²ω²/(2ντ)} ) is bounded by |J12| ≤ 0.194145 in the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.8 ≤ ν ≤ 1.5, and 0.8 ≤ τ ≤ 1.25 for α = α01 and λ = λ01.
Proof. | 1706.02515#103 |
1706.02515 | 104 | Proof.
|J12| ≤ ¼ λ τ | α e^{µω + ντ/2} erfc((µω + ντ)/(√2 √(ντ))) − (α − 1) √(2/(π ν τ)) e^{−µ²ω²/(2ντ)} |
≤ ¼ |λ| |τ| |0.983247 − 0.392294| < 0.194035 .
For the first term we have 0.434947 ≤ e^{µω + ντ/2} erfc((µω + ντ)/(√2 √(ντ))) ≤ 0.587622 after Lemma 47, and for the second term 0.582677 ≤ √(2/(π ν τ)) e^{−µ²ω²/(2ντ)} ≤ 0.997356, which can easily be seen by maximizing or minimizing the arguments of the exponential or the square root function. The first term scaled by α is 0.727780 ≤ α e^{µω + ντ/2} erfc((µω + ντ)/(√2 √(ντ))) ≤ 0.983247 and the second term scaled by α − 1 is 0.392294 ≤ (α − 1) √(2/(π ν τ)) e^{−µ²ω²/(2ντ)} ≤ 0.671484. Therefore, the absolute difference between these terms is at most 0.983247 − 0.392294, leading to the derived bound.
Bounds on mean, variance and second moment. For deriving bounds on µ̃, ξ̃, and ν̃, we need the following lemma. Lemma 8 (Derivatives of the Mapping). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25].
The derivative ∂µ̃(µ, ω, ν, τ, λ, α)/∂µ has the sign of ω. The derivative ∂µ̃(µ, ω, ν, τ, λ, α)/∂ν is positive. The derivative ∂ξ̃(µ, ω, ν, τ, λ, α)/∂µ has the sign of ω. The derivative ∂ξ̃(µ, ω, ν, τ, λ, α)/∂ν is positive.
Proof. We use Lemma 8, which states that the derivatives of the mapping Eq. (4) and Eq. (5) with respect to µ and ν are either positive or have the sign of ω. Therefore, for a given sign of ω, the mappings are strictly monotonic and their maxima and minima are found at the borders. The minimum of µ̃ is obtained at µω = −0.01 and its maximum at µω = 0.01, with ν and τ at minimal or maximal values, respectively. It follows that
−0.041160 ≤ µ̃(−0.1, 0.1, 0.8, 0.8, λ01, α01) ≤ µ̃ ≤ µ̃(0.1, 0.1, 1.5, 1.25, λ01, α01) ≤ 0.087653 .   (66)
Similarly, the maximum and minimum of ξ̃ are obtained at the values mentioned above:
0.703257 ≤ ξ̃(−0.1, 0.1, 0.8, 0.8, λ01, α01) ≤ ξ̃ ≤ ξ̃(0.1, 0.1, 1.5, 1.25, λ01, α01) ≤ 1.643705 .   (67)
Hence we obtain the following bounds on ν̃:
0.703257 − µ̃² ≤ ξ̃ − µ̃² ≤ 1.643705 − µ̃²
0.703257 − 0.007683 ≤ ν̃ ≤ 1.643705 − 0.007682
0.695574 ≤ ν̃ ≤ 1.636023 .   (68)
Upper Bounds on the Largest Singular Value of the Jacobian. Lemma 10 (Upper Bounds on Absolute Derivatives of Largest Singular Value). We set α = α01 and λ = λ01 and restrict the range of the variables to µ ∈ [µmin, µmax] = [−0.1, 0.1], ω ∈ [ωmin, ωmax] = [−0.1, 0.1], ν ∈ [νmin, νmax] = [0.8, 1.5], and τ ∈ [τmin, τmax] = [0.8, 1.25].
The absolute values of the derivatives of the largest singular value S(µ, ω, ν, τ, λ, α) given in Eq. (61) with respect to (µ, ω, ν, τ) are bounded as follows:
|∂S/∂µ| ≤ 0.32112 ,   (69)
|∂S/∂ω| ≤ 2.63690 ,   (70)
|∂S/∂ν| ≤ 2.28242 ,   (71)
|∂S/∂τ| ≤ 2.98610 .   (72)
Proof. The Jacobian of our mapping Eq. (4) and Eq. (5) is defined as
H = (H11, H12; H21, H22) with H11 = J11, H12 = J12, H21 = J21 − 2µ̃ J11, H22 = J22 − 2µ̃ J12 ,   (73)
and has the largest singular value
S(µ, ω, ν, τ, λ, α) = ½ ( √((H11 − H22)² + (H12 + H21)²) + √((H11 + H22)² + (H12 − H21)²) )   (74)
according to the formula of Blinn [4].
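Blinn's closed form is easy to verify numerically; the following sketch (illustrative only, assuming NumPy is available) compares it with a standard SVD on random 2×2 matrices:

```python
import numpy as np

def blinn_largest_singular_value(H):
    # Largest singular value of a 2x2 matrix, Eq. (74):
    # S = 1/2 * (sqrt((H11-H22)^2 + (H12+H21)^2) + sqrt((H11+H22)^2 + (H12-H21)^2))
    a, b, c, d = H[0, 0], H[0, 1], H[1, 0], H[1, 1]
    return 0.5 * (np.hypot(a - d, b + c) + np.hypot(a + d, b - c))

rng = np.random.default_rng(0)
for _ in range(1000):
    H = rng.normal(size=(2, 2))
    assert np.isclose(blinn_largest_singular_value(H),
                      np.linalg.svd(H, compute_uv=False)[0])
```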
We obtain
|∂S/∂H11| = | ½ ( (H11 − H22)/√((H11 − H22)² + (H12 + H21)²) + (H11 + H22)/√((H11 + H22)² + (H21 − H12)²) ) |
≤ ½ ( 1/√(1 + (H12 + H21)²/(H11 − H22)²) + 1/√(1 + (H21 − H12)²/(H11 + H22)²) ) ≤ 1   (75)
and analogously
|∂S/∂H12| = | ½ ( (H12 + H21)/√((H11 − H22)² + (H12 + H21)²) − (H21 − H12)/√((H11 + H22)² + (H21 − H12)²) ) | ≤ 1   (76)
and
|∂S/∂H21| = | ½ ( (H21 − H12)/√((H11 + H22)² + (H21 − H12)²) + (H12 + H21)/√((H11 − H22)² + (H12 + H21)²) ) | ≤ 1   (77)
and
|∂S/∂H22| = | ½ ( (H11 + H22)/√((H11 + H22)² + (H21 − H12)²) − (H11 − H22)/√((H11 − H22)² + (H12 + H21)²) ) | ≤ 1 .   (78)
We have
∂S/∂µ = (∂S/∂H11)(∂H11/∂µ) + (∂S/∂H12)(∂H12/∂µ) + (∂S/∂H21)(∂H21/∂µ) + (∂S/∂H22)(∂H22/∂µ)   (79)
∂S/∂ω = (∂S/∂H11)(∂H11/∂ω) + (∂S/∂H12)(∂H12/∂ω) + (∂S/∂H21)(∂H21/∂ω) + (∂S/∂H22)(∂H22/∂ω)   (80)
∂S/∂ν = (∂S/∂H11)(∂H11/∂ν) + (∂S/∂H12)(∂H12/∂ν) + (∂S/∂H21)(∂H21/∂ν) + (∂S/∂H22)(∂H22/∂ν)   (81)
∂S/∂τ = (∂S/∂H11)(∂H11/∂τ) + (∂S/∂H12)(∂H12/∂τ) + (∂S/∂H21)(∂H21/∂τ) + (∂S/∂H22)(∂H22/∂τ)   (82)
from which follows using the bounds from Lemma 5: | 1706.02515#111 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
Derivative of the singular value w.r.t. µ:
|∂S/∂µ| ≤ |∂S/∂H11| |∂H11/∂µ| + |∂S/∂H12| |∂H12/∂µ| + |∂S/∂H21| |∂H21/∂µ| + |∂S/∂H22| |∂H22/∂µ|   (84)
≤ |∂H11/∂µ| + |∂H12/∂µ| + |∂H21/∂µ| + |∂H22/∂µ|
= |∂J11/∂µ| + |∂J12/∂µ| + |∂J21/∂µ − 2 ∂(µ̃ J11)/∂µ| + |∂J22/∂µ − 2 ∂(µ̃ J12)/∂µ|
≤ |∂J11/∂µ| + |∂J12/∂µ| + |∂J21/∂µ| + |∂J22/∂µ| + 2 |J11| |∂µ̃/∂µ| + 2 |µ̃| |∂J11/∂µ| + 2 |J12| |∂µ̃/∂µ| + 2 |µ̃| |∂J12/∂µ|
≤ 0.0031049101995398316 + 0.031242911235461816 + 0.02220441024325437 + 0.14983446469110305 + 2 · 0.104497 · 0.087653 + 2 · 0.104497² + 2 · 0.194035 · 0.087653 + 2 · 0.104497 · 0.194035 < 0.32112 ,
where we used the results from the lemmata 5, 6, 7, and 9.
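The arithmetic of this bound can be re-checked directly from the quoted constants (an illustrative re-computation, not part of the proof):

```python
# Bounds on the derivatives of J11, J12, J21, J22 w.r.t. mu (Lemma 5) ...
dJ_dmu = [0.0031049101995398316, 0.031242911235461816,
          0.02220441024325437, 0.14983446469110305]
# ... and the bounds |J11| <= 0.104497, |J12| <= 0.194035, |mu_tilde| <= 0.087653.
J11, J12, mu_tilde = 0.104497, 0.194035, 0.087653

total = (sum(dJ_dmu)
         + 2 * J11 * mu_tilde + 2 * J11 ** 2
         + 2 * J12 * mu_tilde + 2 * J11 * J12)
print(total)              # ~0.3211125
assert total < 0.32112    # bound (69) on |dS/dmu|
```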
Derivative of the singular value w.r.t. ω:
|∂S/∂ω| ≤ |∂S/∂H11| |∂H11/∂ω| + |∂S/∂H12| |∂H12/∂ω| + |∂S/∂H21| |∂H21/∂ω| + |∂S/∂H22| |∂H22/∂ω|   (85)
≤ |∂H11/∂ω| + |∂H12/∂ω| + |∂H21/∂ω| + |∂H22/∂ω|
≤ |∂J11/∂ω| + |∂J12/∂ω| + |∂J21/∂ω| + |∂J22/∂ω| + 2 |J11| |∂µ̃/∂ω| + 2 |µ̃| |∂J11/∂ω| + 2 |J12| |∂µ̃/∂ω| + 2 |µ̃| |∂J12/∂ω|   (86)
≤ 2.38392 + 2 · 1.055872374194189 · 0.087653 + 2 · 0.104497² + 2 · 0.031242911235461816 · 0.087653 + 2 · 0.194035 · 0.104497 < 2.63690 ,
where we used the results from the lemmata 5, 6, 7, and 9 and that µ̃ is symmetric in µ and ω.
Derivative of the singular value w.r.t. ν:
|∂S/∂ν| ≤ |∂S/∂H11| |∂H11/∂ν| + |∂S/∂H12| |∂H12/∂ν| + |∂S/∂H21| |∂H21/∂ν| + |∂S/∂H22| |∂H22/∂ν|   (87)
≤ |∂H11/∂ν| + |∂H12/∂ν| + |∂H21/∂ν| + |∂H22/∂ν|
≤ |∂J11/∂ν| + |∂J12/∂ν| + |∂J21/∂ν| + |∂J22/∂ν| + 2 |J11| |∂µ̃/∂ν| + 2 |µ̃| |∂J11/∂ν| + 2 |J12| |∂µ̃/∂ν| + 2 |µ̃| |∂J12/∂ν|
≤ 2.19916 + 2 · 0.031242911235461816 · 0.087653 + 2 · 0.104497 · 0.194035 + 2 · 0.21232788238624354 · 0.087653 + 2 · 0.194035² < 2.28242 ,
where we used the results from the lemmata 5, 6, 7, and 9.
Derivative of the singular value w.r.t. τ:
|∂S/∂τ| ≤ |∂S/∂H11| |∂H11/∂τ| + |∂S/∂H12| |∂H12/∂τ| + |∂S/∂H21| |∂H21/∂τ| + |∂S/∂H22| |∂H22/∂τ|   (88)
≤ |∂H11/∂τ| + |∂H12/∂τ| + |∂H21/∂τ| + |∂H22/∂τ|
≤ |∂J11/∂τ| + |∂J12/∂τ| + |∂J21/∂τ| + |∂J22/∂τ| + 2 |J11| |∂µ̃/∂τ| + 2 |µ̃| |∂J11/∂τ| + 2 |J12| |∂µ̃/∂τ| + 2 |µ̃| |∂J12/∂τ|   (89)
≤ 2.82643 + 2 · 0.03749149348255419 · 0.087653 + 2 · 0.104497 · 0.194035 + 2 · 0.2124377655377270 · 0.087653 + 2 · 0.194035² < 2.98610 ,
where we used the results from the lemmata 5, 6, 7, and 9 and that µ̃ is symmetric in ν and τ.
Lemma 11 (Mean Value Theorem Bound on Deviation from Largest Singular Value). We set α = α01 and λ = λ01 and restrict the range of the variables to µ ∈ [µmin, µmax] = [−0.1, 0.1], ω ∈ [ωmin, ωmax] = [−0.1, 0.1], ν ∈ [νmin, νmax] = [0.8, 1.5], and τ ∈ [τmin, τmax] = [0.8, 1.25]. The distance between the singular value S(µ, ω, ν, τ, λ01, α01) and the singular value S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) is bounded as follows:
|S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) − S(µ, ω, ν, τ, λ01, α01)| < 0.32112 |∆µ| + 2.63690 |∆ω| + 2.28242 |∆ν| + 2.98610 |∆τ| .   (90)
Proof. The mean value theorem provides a t ∈ [0, 1] for which the difference S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) − S(µ, ω, ν, τ, λ01, α01) equals the gradient of S evaluated at the intermediate point (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ) multiplied by the perturbation vector (∆µ, ∆ω, ∆ν, ∆τ), from which immediately follows that
|S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) − S(µ, ω, ν, τ, λ01, α01)| ≤
|∂S/∂µ (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ, λ01, α01)| |∆µ| +
|∂S/∂ω (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ, λ01, α01)| |∆ω| +
|∂S/∂ν (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ, λ01, α01)| |∆ν| +
|∂S/∂τ (µ + t∆µ, ω + t∆ω, ν + t∆ν, τ + t∆τ, λ01, α01)| |∆τ| .   (92)
We now apply Lemma 10, which bounds these derivatives, and immediately obtain the statement of the lemma.
Lemma 12 (Largest Singular Value Smaller Than One). We set α = α01 and λ = λ01 and restrict the range of the variables to µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25].
The largest singular value of the Jacobian is smaller than 1:
S(µ, ω, ν, τ, λ01, α01) < 1 .   (93)
Therefore the mapping Eq. (4) and Eq. (5) is a contraction mapping.
Proof. We set ∆µ = 0.0068097371, ∆ω = 0.0008292885, ∆ν = 0.0009580840, and ∆τ = 0.0007323095.
According to Lemma 11 we have
|S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) − S(µ, ω, ν, τ, λ01, α01)| < 0.32112 · 0.0068097371 + 2.63690 · 0.0008292885 + 2.28242 · 0.0009580840 + 2.98610 · 0.0007323095 < 0.008747 .   (94)
For a grid with grid length ∆µ = 0.0068097371, ∆ω = 0.0008292885, ∆ν = 0.0009580840, and ∆τ = 0.0007323095, we evaluated the function Eq. (61) for the largest singular value in the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25]. We did this using a computer. According to Subsection A3.4.5, the precision, accounting for error propagation and the precision of the implemented functions, is better than 10⁻¹³. We performed the evaluation on different operating systems and different hardware architectures including CPUs and GPUs. In all cases the function Eq. (61) for the largest singular value of the Jacobian is bounded by 0.9912524171058772.
We obtain from Eq. (94):
S(µ + ∆µ, ω + ∆ω, ν + ∆ν, τ + ∆τ, λ01, α01) < 0.9912524171058772 + 0.008747 < 1 .   (95)
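The margin in this argument is small but positive; the following short script (illustrative only, using the constants quoted above) reproduces the arithmetic of Lemma 12:

```python
# Grid spacings, derivative bounds from Lemma 10, and the grid maximum of S.
deltas = [0.0068097371, 0.0008292885, 0.0009580840, 0.0007323095]  # dmu, domega, dnu, dtau
deriv_bounds = [0.32112, 2.63690, 2.28242, 2.98610]
grid_max = 0.9912524171058772

slack = sum(b * d for b, d in zip(deriv_bounds, deltas))
print(slack)                     # ~0.0087470
assert grid_max + slack < 1.0    # the largest singular value stays below 1 on the whole domain
```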
A3.4.2 Lemmata for proving Theorem 1 (part 2): Mapping within domain
We further have to investigate whether the mapping Eq. (4) and Eq. (5) maps into a predefined domain. Lemma 13 (Mapping into the domain). The mapping Eq. (4) and Eq. (5) maps for α = α01 and λ = λ01 into the domain µ ∈ [−0.03106, 0.06773] and ν ∈ [0.80009, 1.48617] with ω ∈ [−0.1, 0.1] and τ ∈ [0.95, 1.1].
Proof. We use Lemma 8, which states that for α = α01 and λ = λ01 the derivatives of the mapping Eq. (4) and Eq. (5) with respect to µ and ν are either positive or have the sign of ω. Therefore, for a given sign of ω, the mappings are strictly monotonic and their maxima and minima are found at the borders. The minimum of µ̃ is obtained at µω = −0.01 and its maximum at µω = 0.01, with ν and τ at their minimal and maximal values, respectively. It follows that:
−0.03106 ≤ µ̃(−0.1, 0.1, 0.8, 0.95, λ01, α01) ≤ µ̃ ≤ µ̃(0.1, 0.1, 1.5, 1.1, λ01, α01) ≤ 0.06773 ,   (96)
and that µ̃ ∈ [−0.1, 0.1]. Similarly, the maximum and minimum of ξ̃ are obtained at the values mentioned above:
0.80467 ≤ ξ̃(−0.1, 0.1, 0.8, 0.95, λ01, α01) ≤ ξ̃ ≤ ξ̃(0.1, 0.1, 1.5, 1.1, λ01, α01) ≤ 1.48617 .   (97)
Since |ξ̃ − ν̃| = µ̃² < 0.004597, we can conclude that 0.80009 < ν̃ < 1.48617 and the variance remains in [0.8, 1.5].
Corollary 14. The image g(Ω) of the mapping g : (µ, ν) ↦ (µ̃, ν̃) given by Eq. (4) and Eq. (5), applied to the domain Ω = {(µ, ν) | −0.1 ≤ µ ≤ 0.1, 0.8 ≤ ν ≤ 1.5}, is a subset of Ω:
g(Ω) ⊆ Ω ,   (98)
for all ω ∈ [−0.1, 0.1] and τ ∈ [0.95, 1.1].
Proof. Directly follows from Lemma 13.
A3.4.3 Lemmata for proving Theorem 2: The variance is contracting
Main Sub-Function. We consider the main sub-function of the derivative of the second moment, J22 (Eq. (54)):
J22 = ½ λ² τ ( −α² e^{µω + ντ/2} erfc((µω + ντ)/(√2 √(ντ))) + 2α² e^{2µω + 2ντ} erfc((µω + 2ντ)/(√2 √(ντ))) − erfc(µω/(√2 √(ντ))) + 2 )   (99)
that depends on µω and ντ only, therefore we set x = ντ and y = µω. Algebraic reformulations provide the formula in the following form:
J22 = ½ λ² τ ( −α² e^{−y²/(2x)} ( e^{(x+y)²/(2x)} erfc((x + y)/(√2 √x)) − 2 e^{(2x+y)²/(2x)} erfc((2x + y)/(√2 √x)) ) − erfc(y/(√2 √x)) + 2 )
For λ = λ01 and α = α01, we consider the domain −1 ≤ µ ≤ 1, −0.1 ≤ ω ≤ 0.1, 1.5 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25. For x and y we obtain: 0.8 · 1.5 = 1.2 ≤ x ≤ 20 = 1.25 · 16 and 0.1 · (−1) = −0.1 ≤ y ≤ 0.1 = 0.1 · 1. In the following we assume to remain within this domain.
1706.02515 | 125 | f(1.2,y)
# y
# Q
(x+y)2 2x
Q Figure A3: Left panel: Graphs of the main subfunction f(x,y) = ee erfe (#4) - (22+y)â ot . oo. . . . . 2eâ â erfe ( 2ery ) treated in Lenina The function is negative and monotonically increasing Vv2Va with x independent of y. Right panel: Graphs of the main subfunction at minimal x = 1.2. The graph shows that the function f (1.2, y) is strictly monotonically decreasing in y.
Lemma 15 (Main subfunction). For 1.2 <x < 20 andâ-0.1 < y < 0.1,
the function
x+y)? ety)? 2. eo enfe (F) â 26 erfe (4) (101)
is smaller than zero, is strictly monotonically increasing in x, and strictly monotonically decreasing in y for the minimal x = 12/10 = 1.2.
Proof. See proof 44. | 1706.02515#125 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 126 | Proof. See proof 44.
The graph of the subfunction in the specified domain is displayed in Figure[A3| Theorem 16 (Contraction v-mapping). The mapping of the variance (p,w,v,T,,@) given in Eq. is contracting for X = Aoi, & = agi and the domain Qt: -0.1< w<0.1 -01<w<0.1 15<v< 16, and0.8 < Tr < 1.25, that is,
<1. (102) | oun, w,V,T, Ao1, a1)
Proof. In this domain â¦+ we have the following three properties (see further below): â âν ˵ > 0, and â
Oo -| ae < <1 (103) ln avâ Oz 5.0. ln - hay
Ëξ < 1 in an even larger domain that fully contains â¦+. According to
⢠We ï¬rst proof that â âν Eq. (54), the derivative of the mapping Eq. (5) with respect to the variance ν is | 1706.02515#126 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 127 | ⢠We ï¬rst proof that â âν Eq. (54), the derivative of the mapping Eq. (5) with respect to the variance ν is
Ox Bp S(t Ws 4s T Aor 101) = (104) 1). 2 (pot YE [we bur sr («a ( e ) erfe âJa JaF = + + 2vt pu Qa? ePHot2U7 orfc (â*) â erfc ( ) + 2) . Jur V2 vt
30
For \ = Ani, a= a01, -l<uw<l,-01<w<0115<¢y < 16,and0.8 <7 < 1.25, we first show that the derivative is positive and then upper bound it.
According to Lemmal|I5] the expression (uwtur)? pis + UT on ane (Me)
(uwtur)? pis + UT 9 uuet2u7)2 exfe (â + =) (105) on ane (Me) - â Vie
is negative. This expression multiplied by positive factors is subtracted in the derivative Eq. (104), therefore, the whole term is positive. The remaining term
2 â erfc Î½Ï (106) | 1706.02515#127 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 129 | 1.5 2 cop UE ~ [pw + UT prot (<4 (-e +3 ) erfe Gas + (107) Qa 22? *2"7 erfe (â) â erfe (+) + 2) = shar (<8 (-- =) (se erfc (â*) - oe erfc (â*)) â erfc () + 2) < pian (a8 (8) (ee ee (MEO) - ys 9 (uw ave)? f (â 2) f ( pw ) 42 evr erfe | ââââ | } - erfc Jur Jur 1 : ratory? 1240.1 =1.2531 (<4 («! ina) erfe (5) â 2 v2Vv12 > 2- =+*)) ( | ( pu ) ) Qe\ v2vI2/ erfe ( âââ_â âe w= }| âerfe +2) < ( V2V1.2 V2 fur * 1 1.240.1)? 1.2+0.1 *1.95)2 (e008 (« Lets) erfc C3) _ 01 o1 V2V1.2 *12)) oe ha) « 1 120.1)? 1.2+0.1 =1.2531 (-e%08, (« ana) | 1706.02515#129 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 131 | We explain the chain of inequalities:
â First equality brings the expression into a shape where we can apply Lemma 15 for the the function Eq. (101).
â First inequality: The overall factor Ï is bounded by 1.25. â Second inequality: We apply Lemma 15. According to Lemma 15 the function Eq. (101) is negative. The largest contribution is to subtract the most negative value of the function Eq. (101), that is, the minimum of function Eq. (101). According to Lemma 15 the function Eq. (101) is strictly monotonically increasing in x and strictly monotonically decreasing in y for x = 1.2. Therefore the function Eq. (101) has its minimum at minimal x = Î½Ï = 1.5 · 0.8 = 1.2 and maximal y = ÂµÏ = 1.0 · 0.1 = 0.1. We insert these values into the expression.
31
(107)
â Third inequality: We use for the whole expression the maximal factor eâ µ2Ï2
2Î½Ï < 1 by setting this factor to 1. | 1706.02515#131 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 132 | 31
(107)
â Third inequality: We use for the whole expression the maximal factor eâ µ2Ï2
2Î½Ï < 1 by setting this factor to 1.
â Fourth inequality: erfc is strictly monotonically decreasing. Therefore we maximize its argument to obtain the least value which is subtracted. We use the minimal x = Î½Ï = 1.5 · 0.8 = 1.2 and the maximal y = ÂµÏ = 1.0 · 0.1 = 0.1.
# â Sixth inequality: evaluation of the terms.
⢠We now show that ˵ > 0. The expression ˵(µ, Ï, ν, Ï ) (Eq. (4)) is strictly monoton- ically increasing im ÂµÏ and Î½Ï . Therefore, the minimal value in â¦+ is obtained at ˵(0.01, 0.01, 1.5, 0.8) = 0.008293 > 0.
⢠Last we show that â âν ˵ > 0. The expression â can we reformulated as follows: âν ˵(µ, Ï, ν, Ï ) = J12(µ, Ï, ν, Ï ) (Eq. (54)) | 1706.02515#132 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 133 | (µÏ+Î½Ï )2 2Î½Ï Î»Ï eâ µ2Ï2 2(αâ1) â Î½Ï â erfc â 4 Ïαe 2Î½Ï J12(µ, Ï, ν, Ï, λ, α) = (108)
# ncaa (
â
â
is larger than is larger than zero when the term zero. This term obtains its minimal value at ÂµÏ = 0.01 and Î½Ï = 16 · 1.25, which can easily be shown using the Abramowitz bounds (Lemma 22) and evaluates to 0.16, therefore J12 > 0 in â¦+.
# A3.4.4 Lemmata for prooï¬ng Theorem 3: The variance is expanding | 1706.02515#133 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 134 | # A3.4.4 Lemmata for prooï¬ng Theorem 3: The variance is expanding
Main Sub-Function From Below. We consider functions in pw and v7, therefore we set x = pw and y = vr. For A = Xo1 and a = ao1, we consider the domain â0.1 < pw < 0.1, â0.1 < w < 0.1 0.00875 < vy < 0.7, and 0.8 < 7 < 1.25. For x and y we obtain: 0.8 - 0.00875 = 0.007 < x < 0.875 = 1.25-0.7 and 0.1-(â0.1) = â0.01 < y < 0.01 = 0.1 - 0.1. In the following we assume eto be within this domain.
In this domain, we consider the main sub-function of the derivate of second moment in the next layer, J22 (Eq. (54): O- 1 2
O- 1 urn , ; 2 . S¢ = =r (-crers erfc (<*) + 2076242" erfe () â erfc ( V2 /vT J2/vT Vir (109) | 1706.02515#134 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 135 | that depends on ÂµÏ and Î½Ï , therefore we set x = Î½Ï and y = µÏ. Algebraic reformulations provide the formula in the following form:
0: = 110 Ov (110) (8) ($8) a oe (8) a (gh) 2 V2/x Jr Lemma 17 (Main subfunction Below). For 0.007 < x < 0.875 and â0.01 < y < 0.01, the function
rw? â, (at+y (22+)? i) e 2 erfe | â-â ] â2e° =~ erfe | ââ 111 (5:2) - (St my
smaller than zero, is strictly monotonically increasing in x and strictly monotonically increasing in y for the minimal x = 0.007 = 0.00875 · 0.8, x = 0.56 = 0.7 · 0.8, x = 0.128 = 0.16 · 0.8, and x = 0.216 = 0.24 · 0.9 (lower bound of 0.9 on Ï ).
32 | 1706.02515#135 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 136 | 32
Proof. See proof|45] Lemma 18 (Monotone Derivative). For 4 = 01, @ = a1 and the domain â0.1 < w < 0.1, â0.1 <w < 0.1, 0.00875 < v < 0.7, and 0.8 < T < 1.25. We are interested of the derivative of
(484 2 [pwtur wot dur)? juw + 2vT 5 vat) re(! ) -2el Be) we(Sr)) . 112 r(e erfc Visor e erfc ViJur (112)
The derivative of the equation above with respect to
⢠ν is larger than zero;
e 7 is smaller than zero for maximal v = 0.7, v = 0.16, and v = 0.24 (with 0.9 < T);
⢠y = ÂµÏ is larger than zero for Î½Ï = 0.008750.8 = 0.007, Î½Ï = 0.70.8 = 0.56, Î½Ï = 0.160.8 = 0.128, and Î½Ï = 0.24 · 0.9 = 0.216.
Proof. See proof 46.
# A3.4.5 Computer-assisted proof details for main Lemma 12 in Section A3.4.1. | 1706.02515#136 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 137 | Proof. See proof 46.
# A3.4.5 Computer-assisted proof details for main Lemma 12 in Section A3.4.1.
Error Analysis. We investigate the error propagation for the singular value (Eq. (61) if the function arguments jy, w, 1,7 suffer from numerical imprecisions up to e. To this end, we first derive error propagation rules based on the mean value theorem and then we apply these rules to the formula for the singular value. Lemma 19 (Mean value theorem). For a real-valued function f which is differentiable in the closed interval a, b], there exists t ⬠(0, 1] with
f (a) â f (b) = âf (a + t(b â a)) · (a â b) . (113)
It follows that for computation with error âx, there exists a t â [0, 1] with
[fl@+ Aa) â f(x)| < ||Vf(@+tAx)|| Aa] . (114)
Therefore the increase of the norm of the error after applying function f is bounded by the norm of the gradient ||V f(a + tAa)|l.
We now compute for the functions, that we consider their gradient and its 2-norm:
addition: | 1706.02515#137 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 138 | We now compute for the functions, that we consider their gradient and its 2-norm:
addition:
addition: f(a) =x, + x and Vf (a) = (1,1), which gives ||V f(a)|| = V2. We further know that |f(@+ Aw) â f(x)| = |ar +a + Av, + Avg â 2 â 2] < |Axi| + |Axo| . (115)
Adding n terms gives:
n n n So ai + An; - Soa < So Aa: < n|Aril nas « (116) i=1 i=l i=l
subtraction:
â
f(x) = a1 â 2 and V f(x) = (1,1), which gives ||V f(x)|| = V2. We further know that |f(w + Aa) â f(x) = |x) â 22 + Ary â Arg â 2 4+ 22| < [Axi] + |Aro| - (117)
Subtracting n terms gives:
n n So =(#i + Axi) + Son < So Asi < n|Azilnax + (118) i=1 i=l i=1
33
multiplication: | 1706.02515#138 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 140 | e division: f(z) = 2 and Vf (x) = (4,-3). which gives ||V f(a)|| = tl. We further know that a+Ar, v4 (a1 + Axy)a â 21 (x2 + Ara) + A - If(w + Aw) â f(x)| ta+Arg 22 (xo + Axe)x2 (121) An wg â Arg: 21 Ax, _ Ara r , o(A2) ; x3 + Axo - x2 XQ x3 @ square root: f(a) = Vand f'(z) = Eee which gives | fâ(x)| = we © exponential function: f(x) = exp(x) and fâ(x) = exp(zx), which gives | fâ(x)| = exp(z). e error function: f(a) =erf(x) and fâ(x) = ae exp(â2?), which gives | fâ(2)| = ae exp(â2?). e complementary error function: f(x) = erfe(x) and fâ(x) = -z exp(â2â), which gives | fâ(x)| = Fa exp(â# 2). | 1706.02515#140 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 141 | Lemma 20. /f the values j1,w,v,7 have a precision of ¢, the singular value (Eq. (61p) evaluated with the formulas given in Eq. 4) and Eq. (61) has a precision better than 292¢.
This means for a machine with a typical precision of 2~°? = 2.220446 - 10-16, we have the rounding error ⬠© 10~1%, the evaluation of the singular value (Eq. (61)) with the formulas given in Eq. and Eq. (61) (61) has a precision better than 10-18 > 292e.
Proof. We have the numerical precision ⬠of the parameters f1,w,v,7, that we denote by Ap, Aw, Av, Ar together with our domain 2.
With the error propagation rules that we derived in Subsection A3.4.5, we can obtain bounds for the numerical errors on the following simple expressions: | 1706.02515#141 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 143 | â
34
(122)
A (v2) < A < x A (V2Ver) < V2A (Vor) + urd (V2) < V2-1875⬠+ 1.5-1.25- i < 3.5¢ (+) < (A (qs) V2V0F + |e A (v2vo7)) â+â, as < 0.2eV/2V0.64 + 0.01 - 3.5e) au < 0.25¢ A (â*) < (4 (ue + v7) V2V07 + |p + vT| A (v2ver)) wa < (3.2«v2V0.64 + 1.885 - 3.5¢) < 8¢. 2- 0.64
Using these bounds on the simple expressions, we can now calculate bounds on the numerical errors of compound expressions: | 1706.02515#143 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 144 | Using these bounds on the simple expressions, we can now calculate bounds on the numerical errors of compound expressions:
. plus 2 -(+4 yâ (4 pu )< A | erfe < ee \Vever] A (123) ( (=) Vr v2.07 2 yl d5e < 0.3¢ . (pw tr 2 ~(4g4z)â (ââ*) A fc < ee \Yev) A (| ââ] < 124 (ex â Ca J2/vT )) Vir J2vT (124) 2 var < 10e A (ehh) < (eM) A (MTT) < (125) 99479 De < 5.7⬠(126)
Subsequently, we can use the above results to get bounds for the numerical errors on the Jacobian entries (Eq. (54)), applying the rules from Subsection A3.4.5 again:
_ 1 wo (*) (4 pow )3 )) . A(Aiu) A (S (ce erfc Vive erfe jt 2 <6¢e, (127) | 1706.02515#144 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 145 | _ 1 wo (*) (4 pow )3 )) . A(Aiu) A (S (ce erfc Vive erfe jt 2 <6¢e, (127)
and we obtain A (Ji2) < 78¢, A (Jar) < 189¢, A (Jo2) < 405¢ and A (ji) < 52â¬. We also have bounds on the absolute values on Jj; and ji (see Lemma|6| Lemma}7| and Lemma)9), therefore we can propagate the error also through the function that calculates the singular value (Eq. (61).
A(S(u,w,V,7,A,0)) = (128) a(3 (Va + Jaz â 2ftFi2)? + (Joi â 2ftAir â Fiz)? + JV (Tu â Far + 2jt ia)? + (Tia + Tar = 2%iFu)?) ) < 292e. | 1706.02515#145 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 146 | Precision of Implementations. We will show that our computations are correct up to 3 ulps. For our implementation in GNU C library and the hardware architectures that we used, the precision of all mathematical functions that we used is at least one ulp. The term âulpâ (acronym for âunit in the last placeâ) was coined by W. Kahan in 1960. It is the highest precision (up to some factor smaller 1), which can be achieved for the given hardware and ï¬oating point representation.
Kahan deï¬ned ulp as [21]:
35
âUlp(x) is the gap between the two ï¬nite ï¬oating-point numbers nearest x, even if x is one of them. (But ulp(NaN) is NaN.)â
Harrison deï¬ned ulp as [15]:
âan ulp in x is the distance between the two closest straddling floating point numbers a and 8, i.e. those with a < x < band a Â¥ b assuming an unbounded exponent range.â
In the literature we ï¬nd also slightly different deï¬nitions [29].
According to [29] who refers to [11]: | 1706.02515#146 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 147 | In the literature we ï¬nd also slightly different deï¬nitions [29].
According to [29] who refers to [11]:
âTEEE-754 mandates four standard rounding modes:â âRound-to-nearest: r(x) is the floating-point value closest to x with the usual distance; if two floating-point value are equally close to x, then r(x) is the one whose least significant bit is equal to zero.â âTEEE-754 standardises 5 operations: addition (which we shall note © in order to distinguish it from the operation over the reals), subtraction (©), multiplication (®), division (@), and also square root.â âTEEE-754 specifies em exact rounding [Goldberg, 1991, §1.5]: the result of a floating-point operation is the same as if the operation were performed on the real numbers with the given inputs, then rounded according to the rules in the preceding section. Thus, x @ y is defined as r(x + y), with x and y taken as elements of RU {-00, +00}; the same applies for the other operators.â
Consequently, the IEEE-754 standard guarantees that addition, subtraction, multiplication, division, and squared root is precise up to one ulp. | 1706.02515#147 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 148 | Consequently, the IEEE-754 standard guarantees that addition, subtraction, multiplication, division, and squared root is precise up to one ulp.
We have to consider transcendental functions. The first is the exponential function, and then the complementary error function erfc(x), which can be computed via the error function erf(x).
Intel states [29]:
"With the Intel486 processor and Intel 387 math coprocessor, the worst-case, transcendental function error is typically 3 or 3.5 ulps, but is sometimes as large as 4.5 ulps."
According to https://www.mirbsd.org/htman/i386/man3/exp.htm and http://man.openbsd.org/OpenBSD-current/man3/exp.3:
"exp(x), log(x), expm1(x) and log1p(x) are accurate to within an ulp"
which is the same for FreeBSD (https://www.freebsd.org/cgi/man.cgi?query=exp&sektion=3&apropos=0&manpath=freebsd):
"The values of exp(0), expm1(0), exp2(integer), and pow(integer, integer) are exact provided that they are representable. Otherwise the error in these functions is generally below one ulp."
The same holds for "FDLIBM" (http://www.netlib.org/fdlibm/readme):
"FDLIBM is intended to provide a reasonably portable (see assumptions below), reference quality (below one ulp for major functions like sin, cos, exp, log) math library (libm.a)."
In http://www.gnu.org/software/libc/manual/html_node/Errors-in-Math-Functions.html we find that both exp and erf have an error of 1 ulp while erfc has an error up to 3 ulps depending on the architecture. For the most common architectures as used by us, however, the error of erfc is 1 ulp.
We implemented the function in the programming language C. We rely on the GNU C Library [26].
Figure A4: Graphs of the upper and lower bounds on erfc: the lower bound $\frac{2e^{-x^2}}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)}$ (red), the upper bound $\frac{2e^{-x^2}}{\sqrt{\pi}\left(\sqrt{x^2+\frac{4}{\pi}}+x\right)}$ (green), and the function erfc(x) (blue), as treated in Lemma 22.
According to the GNU C Library manual, which can be obtained from http://www.gnu.org/software/libc/manual/pdf/libc.pdf, the errors of the math functions exp, erf, and erfc are not larger than 3 ulps for all architectures [26, pp. 528]. For the architectures ix86, i386/i686/fpu, and m68k/fpmu68k/m680x0/fpu that we used, the error is at most one ulp [26, pp. 528].
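To exploit such documented error bounds, a computed function value can be widened by the corresponding number of ulps in both directions, which yields a safe enclosure of the exact value. The following C sketch illustrates this principle with nextafter(); it is an illustration only, not the authors' verification code.

```c
#include <math.h>
#include <stdio.h>

/* Widen a computed value by k ulps in both directions.  If the library
 * routine is documented to be accurate to within k ulps, the returned
 * interval [lo, hi] encloses the exact value. */
static void ulp_enclosure(double y, int k, double *lo, double *hi) {
    *lo = y; *hi = y;
    for (int i = 0; i < k; i++) {
        *lo = nextafter(*lo, -INFINITY);  /* one ulp towards -infinity */
        *hi = nextafter(*hi, +INFINITY);  /* one ulp towards +infinity */
    }
}

int main(void) {
    double x = 0.5606, lo, hi;
    /* erfc is accurate to 1 ulp on the architectures considered here,
     * so widening by 1 ulp gives a rigorous enclosure of erfc(x). */
    ulp_enclosure(erfc(x), 1, &lo, &hi);
    printf("erfc(%.4f) in [%.17g, %.17g]\n", x, lo, hi);
    return 0;
}
```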
Intermediate Lemmata and Proofs
Since we focus on the fixed point (µ, ν) = (0, 1), we assume for our whole analysis that α = α01 and λ = λ01. Furthermore, we restrict the range of the variables to µ ∈ [µmin, µmax] = [−0.1, 0.1], ω ∈ [ωmin, ωmax] = [−0.1, 0.1], ν ∈ [νmin, νmax] = [0.8, 1.5], and τ ∈ [τmin, τmax] = [0.8, 1.25].
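For reference, these domain restrictions and the fixed-point parameters can be collected in one place. In the C sketch below, the numerical values of α01 and λ01 are the standard SELU constants from the main text and are quoted here only for convenience.

```c
#include <stdio.h>

/* Domain of the analysis and the SELU fixed-point constants (for reference).
 * alpha01 and lambda01 are the standard SELU constants from the main text. */
static const double MU_MIN  = -0.1, MU_MAX  = 0.1;
static const double OM_MIN  = -0.1, OM_MAX  = 0.1;
static const double NU_MIN  =  0.8, NU_MAX  = 1.5;
static const double TAU_MIN =  0.8, TAU_MAX = 1.25;
static const double ALPHA01  = 1.6732632423543772;
static const double LAMBDA01 = 1.0507009873554805;

int main(void) {
    /* Extreme values of the products that appear throughout the lemmata. */
    printf("mu*omega in [%.4f, %.4f]\n", MU_MAX * OM_MIN, MU_MAX * OM_MAX);
    printf("nu*tau   in [%.4f, %.4f]\n", NU_MIN * TAU_MIN, NU_MAX * TAU_MAX);
    printf("alpha01 = %.16f, lambda01 = %.16f\n", ALPHA01, LAMBDA01);
    return 0;
}
```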
For bounding different partial derivatives we need properties of different functions. We will bound the absolute value of a function by computing an upper bound on its maximum and a lower bound on its minimum. These bounds are computed by upper or lower bounding terms. The bounds get tighter if we can combine terms into a more complex function and bound this function. The following lemmata give some properties of functions that we will use in bounding complex functions.
Throughout this work, we use the error function $\operatorname{erf}(x) := \frac{1}{\sqrt{\pi}}\int_{-x}^{x} e^{-t^2}\,dt$ and the complementary error function $\operatorname{erfc}(x) = 1 - \operatorname{erf}(x)$.

Lemma 21 (Basic functions). exp(x) is strictly monotonically increasing from 0 at $-\infty$ to $\infty$ at $\infty$ and has positive curvature.

According to its definition, erfc(x) is strictly monotonically decreasing from 2 at $-\infty$ to 0 at $\infty$.
Next we introduce a bound on erfc:

Lemma 22 (Erfc bound from Abramowitz).
$$\frac{2e^{-x^2}}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} \;\le\; \operatorname{erfc}(x) \;\le\; \frac{2e^{-x^2}}{\sqrt{\pi}\left(\sqrt{x^2+\frac{4}{\pi}}+x\right)} \qquad (129)$$
for x > 0.
Proof. The statement follows immediately from [1] (page 298, formula 7.1.13).
These bounds are displayed in Figure A4.
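The bounds of Lemma 22 are easy to check numerically. The following C sketch (a spot-check, not a proof) evaluates both bounds of Eq. (129) and erfc(x) at a few points x > 0.

```c
#include <math.h>
#include <stdio.h>

/* Numerical spot-check of the bounds in Eq. (129):
 *   2 e^{-x^2} / (sqrt(pi) (sqrt(x^2+2)    + x)) <= erfc(x)
 *   erfc(x) <= 2 e^{-x^2} / (sqrt(pi) (sqrt(x^2+4/pi) + x))   for x > 0. */
int main(void) {
    const double pi = 3.14159265358979323846;
    const double xs[] = {0.1, 0.5, 1.0, 2.0, 5.0};
    for (int i = 0; i < 5; i++) {
        double x  = xs[i];
        double lo = 2.0 * exp(-x * x) / (sqrt(pi) * (sqrt(x * x + 2.0) + x));
        double hi = 2.0 * exp(-x * x) / (sqrt(pi) * (sqrt(x * x + 4.0 / pi) + x));
        printf("x=%.1f  lower=%.6e  erfc=%.6e  upper=%.6e\n", x, lo, erfc(x), hi);
    }
    return 0;
}
```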
Figure A5: Graphs of the functions $e^{x^2}\operatorname{erfc}(x)$ (left) and $x\,e^{x^2}\operatorname{erfc}(x)$ (right) treated in Lemma 23 and Lemma 24, respectively.
Lemma 23 (Function $e^{x^2}\operatorname{erfc}(x)$). The function $e^{x^2}\operatorname{erfc}(x)$ is strictly monotonically decreasing for $x>0$ and has positive curvature (positive 2nd order derivative), that is, the decrease slows down.
A graph of the function is displayed in Figure A5.
Proof. The derivative of $e^{x^2}\operatorname{erfc}(x)$ is
$$\frac{\partial\, e^{x^2}\operatorname{erfc}(x)}{\partial x} \;=\; 2e^{x^2}x\operatorname{erfc}(x) - \frac{2}{\sqrt{\pi}} \,. \qquad (130)$$
Using Lemma 22, we get
$$\frac{\partial\, e^{x^2}\operatorname{erfc}(x)}{\partial x} \;=\; 2e^{x^2}x\operatorname{erfc}(x) - \frac{2}{\sqrt{\pi}} \;\le\; \frac{4x}{\sqrt{\pi}\left(\sqrt{x^2+\frac{4}{\pi}}+x\right)} - \frac{2}{\sqrt{\pi}} \;<\; 0 \,. \qquad (131)$$
Thus $e^{x^2}\operatorname{erfc}(x)$ is strictly monotonically decreasing for $x > 0$.

The second order derivative of $e^{x^2}\operatorname{erfc}(x)$ is
$$\frac{\partial^2 e^{x^2}\operatorname{erfc}(x)}{\partial x^2} \;=\; 4e^{x^2}x^2\operatorname{erfc}(x) + 2e^{x^2}\operatorname{erfc}(x) - \frac{4x}{\sqrt{\pi}} \,. \qquad (132)$$
Again using Lemma 22 (first inequality), we get
$$4e^{x^2}x^2\operatorname{erfc}(x) + 2e^{x^2}\operatorname{erfc}(x) - \frac{4x}{\sqrt{\pi}} \;=\; 2\left(2x^2+1\right)e^{x^2}\operatorname{erfc}(x) - \frac{4x}{\sqrt{\pi}} \;\ge\; \frac{4\left(2x^2+1\right)}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} - \frac{4x}{\sqrt{\pi}} \;=\; \qquad (133)$$
$$\frac{4\left(x^2 - x\sqrt{x^2+2} + 1\right)}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} \;=\; \frac{4\left(x^2 - \sqrt{x^4+2x^2} + 1\right)}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} \;>\; \frac{4\left(x^2 - \sqrt{x^4+2x^2+1} + 1\right)}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} \;=\; 0 \,.$$
For the last inequality we added 1 in the numerator in the square root which is subtracted, that is, making a larger negative term in the numerator.
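Both statements of Lemma 23 can also be spot-checked numerically with central finite differences, as in the following C sketch (an illustration, not a proof).

```c
#include <math.h>
#include <stdio.h>

/* Finite-difference spot-check of Lemma 23:
 * f(x) = exp(x^2) * erfc(x) should be decreasing and convex for x > 0. */
static double f(double x) { return exp(x * x) * erfc(x); }

int main(void) {
    const double h = 1e-4;
    for (double x = 0.2; x <= 2.0; x += 0.3) {
        double d1 = (f(x + h) - f(x - h)) / (2.0 * h);            /* ~ f'(x)  */
        double d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h); /* ~ f''(x) */
        printf("x=%.1f  f=%.6f  f'~%+.6f  f''~%+.6f\n", x, f(x), d1, d2);
    }
    return 0;
}
```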
Lemma 24 (Properties of $x\,e^{x^2}\operatorname{erfc}(x)$). The function $x\,e^{x^2}\operatorname{erfc}(x)$ has the sign of $x$ and is monotonically increasing to $\frac{1}{\sqrt{\pi}}$.

Proof. The derivative of $x\,e^{x^2}\operatorname{erfc}(x)$ is
$$2e^{x^2}x^2\operatorname{erfc}(x) + e^{x^2}\operatorname{erfc}(x) - \frac{2x}{\sqrt{\pi}} \,. \qquad (134)$$
This derivative is positive since
$$2e^{x^2}x^2\operatorname{erfc}(x) + e^{x^2}\operatorname{erfc}(x) - \frac{2x}{\sqrt{\pi}} \;=\; \left(2x^2+1\right)e^{x^2}\operatorname{erfc}(x) - \frac{2x}{\sqrt{\pi}} \;\ge\; \frac{2\left(2x^2+1\right)}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} - \frac{2x}{\sqrt{\pi}} \;=\; \frac{2\left(\left(2x^2+1\right) - x\left(\sqrt{x^2+2}+x\right)\right)}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} \;=\; \frac{2\left(x^2 - x\sqrt{x^2+2} + 1\right)}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} \;=\; \frac{2\left(x^2 - \sqrt{x^4+2x^2} + 1\right)}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} \;>\; \frac{2\left(x^2 - \sqrt{x^4+2x^2+1} + 1\right)}{\sqrt{\pi}\left(\sqrt{x^2+2}+x\right)} \;=\; 0 \,. \qquad (135)$$
We apply Lemma 22 to $x\operatorname{erfc}(x)e^{x^2}$ and divide the terms of the lemma by $x$, which gives
$$\frac{2}{\sqrt{\pi}\left(\sqrt{\frac{2}{x^2}+1}+1\right)} \;\le\; x\operatorname{erfc}(x)e^{x^2} \;\le\; \frac{2}{\sqrt{\pi}\left(\sqrt{\frac{4}{\pi x^2}+1}+1\right)} \,. \qquad (136)$$
For $\lim_{x\to\infty}$ both the upper and the lower bound go to $\frac{1}{\sqrt{\pi}}$.
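The following C sketch (a numerical illustration, not a proof) evaluates $x e^{x^2}\operatorname{erfc}(x)$ together with the bounds of Eq. (136) and shows the convergence towards $1/\sqrt{\pi} \approx 0.5642$. The naive product overflows for very large x, so the check stays at x ≤ 16.

```c
#include <math.h>
#include <stdio.h>

/* Spot-check of the bounds in Eq. (136) and of the limit 1/sqrt(pi)
 * for g(x) = x * exp(x^2) * erfc(x). */
int main(void) {
    const double pi = 3.14159265358979323846;
    for (double x = 0.5; x <= 16.0; x *= 2.0) {
        double g  = x * exp(x * x) * erfc(x);   /* overflows for much larger x */
        double lo = 2.0 / (sqrt(pi) * (sqrt(2.0 / (x * x) + 1.0) + 1.0));
        double hi = 2.0 / (sqrt(pi) * (sqrt(4.0 / (pi * x * x) + 1.0) + 1.0));
        printf("x=%5.1f  lower=%.6f  g=%.6f  upper=%.6f\n", x, lo, g, hi);
    }
    printf("1/sqrt(pi) = %.6f\n", 1.0 / sqrt(pi));
    return 0;
}
```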
Lemma 25 (Function $\mu\omega$). $h_{11}(\mu,\omega) = \mu\omega$ is monotonically increasing in $\mu\omega$. It has minimal value $t_{11} = -0.01$ and maximal value $T_{11} = 0.01$.

Proof. Obvious.
Lemma 26 (Function $\nu\tau$). $h_{22}(\nu,\tau) = \nu\tau$ is monotonically increasing in $\nu\tau$ and is positive. It has minimal value $t_{22} = 0.64$ and maximal value $T_{22} = 1.875$.
Proof. Obvious.

Lemma 27 (Function $\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}$). The function $\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}$ is monotonically increasing in both $\nu\tau$ and $\mu\omega$. It has minimal value $t_1 = 0.5568$ and maximal value $T_1 = 0.9734$.

Proof. The derivative of the function $\frac{\mu\omega+x}{\sqrt{2}\sqrt{x}}$ with respect to $x$ is
$$\frac{1}{\sqrt{2}\sqrt{x}} - \frac{\mu\omega+x}{2\sqrt{2}\,x^{3/2}} \;=\; \frac{2x - (\mu\omega+x)}{2\sqrt{2}\,x^{3/2}} \;=\; \frac{x - \mu\omega}{2\sqrt{2}\,x^{3/2}} \;>\; 0 \,, \qquad (137)$$
since $x > 0.8\cdot 0.8$ and $\mu\omega < 0.1\cdot 0.1$.

Lemma 28 (Function $\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}$). The function $\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}$ is monotonically increasing in both $\nu\tau$ and $\mu\omega$. It has minimal value $t_2 = 1.1225$ and maximal value $T_2 = 1.9417$.
Proof. The derivative of the function $\frac{\mu\omega+2x}{\sqrt{2}\sqrt{x}}$ with respect to $x$ is
$$\frac{2}{\sqrt{2}\sqrt{x}} - \frac{\mu\omega+2x}{2\sqrt{2}\,x^{3/2}} \;=\; \frac{4x - (\mu\omega+2x)}{2\sqrt{2}\,x^{3/2}} \;=\; \frac{2x - \mu\omega}{2\sqrt{2}\,x^{3/2}} \;>\; 0 \,. \qquad (138)$$
Lemma 29 (Function $\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}$). The function $\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}$ is monotonically decreasing in $\nu\tau$ and monotonically increasing in $\mu\omega$. It has minimal value $t_3 = -0.0088388$ and maximal value $T_3 = 0.0088388$.
Proof. Obvious.
Lemma 30 (Function $\frac{\mu^2\omega^2}{2\nu\tau}$). The function $\frac{\mu^2\omega^2}{2\nu\tau}$ has a minimum at 0 for $\mu = 0$ or $\omega = 0$ and has a maximum for the smallest $\nu\tau$ and largest $|\mu\omega|$, and is larger or equal to zero. It has minimal value $t_4 = 0$ and maximal value $T_4 = 0.000078126$.
Proof. Obvious.
Lemma 31 (Function $\sqrt{\frac{2}{\pi}}\frac{\alpha-1}{\sqrt{\nu\tau}}$). $\sqrt{\frac{2}{\pi}}\frac{\alpha-1}{\sqrt{\nu\tau}} > 0$ and decreasing in $\nu\tau$.
Proof. Statements follow directly from elementary functions square root and division.
Lemma 32 (Function $2 - \operatorname{erfc}\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right)$). $2 - \operatorname{erfc}\left(\frac{\mu\omega}{\sqrt{2}\sqrt{\nu\tau}}\right) > 0$ and decreasing in $\nu\tau$ and increasing in $\mu\omega$.

Proof. Statements follow directly from Lemma 21 and erfc.

Lemma 33 (Function $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha-1)\mu\omega}{(\nu\tau)^{3/2}} - \frac{\alpha}{\sqrt{\nu\tau}}\right)$). For $\lambda = \lambda_{01}$ and $\alpha = \alpha_{01}$, $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha-1)\mu\omega}{(\nu\tau)^{3/2}} - \frac{\alpha}{\sqrt{\nu\tau}}\right) < 0$ and increasing in both $\nu\tau$ and $\mu\omega$.

Proof. We consider the function $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha-1)\mu\omega}{x^{3/2}} - \frac{\alpha}{\sqrt{x}}\right)$, which has the derivative with respect to $x$:
$$\sqrt{\frac{2}{\pi}}\left(\frac{\alpha}{2x^{3/2}} - \frac{3(\alpha-1)\mu\omega}{2x^{5/2}}\right) \,. \qquad (139)$$
This derivative is larger than zero, since
$$\sqrt{\frac{2}{\pi}}\left(\frac{\alpha}{2(\nu\tau)^{3/2}} - \frac{3(\alpha-1)\mu\omega}{2(\nu\tau)^{5/2}}\right) \;=\; \sqrt{\frac{2}{\pi}}\;\frac{\alpha - \frac{3(\alpha-1)\mu\omega}{\nu\tau}}{2(\nu\tau)^{3/2}} \;>\; 0 \,. \qquad (140)$$
The last inequality follows from $\alpha - \frac{3\cdot 0.1\cdot 0.1\,(\alpha-1)}{0.8\cdot 0.8} > 0$ for $\alpha = \alpha_{01}$.

We next consider the function $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha-1)x}{(\nu\tau)^{3/2}} - \frac{\alpha}{\sqrt{\nu\tau}}\right)$, which has the derivative with respect to $x$:
$$\sqrt{\frac{2}{\pi}}\,\frac{\alpha-1}{(\nu\tau)^{3/2}} \;>\; 0 \,. \qquad (141)$$

Lemma 34 (Function $\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha-1)\mu^2\omega^2}{(\nu\tau)^{3/2}} + \frac{-\alpha+\alpha\mu\omega+1}{\sqrt{\nu\tau}} - \alpha\sqrt{\nu\tau}\right)$). The function $\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha-1)\mu^2\omega^2}{(\nu\tau)^{3/2}} + \frac{-\alpha+\alpha\mu\omega+1}{\sqrt{\nu\tau}} - \alpha\sqrt{\nu\tau}\right) < 0$ is decreasing in $\nu\tau$ and increasing in $\mu\omega$.

Proof. We define the function
$$\sqrt{\frac{2}{\pi}}\left(\frac{(-1)(\alpha-1)\mu^2\omega^2}{(\nu\tau)^{3/2}} + \frac{-\alpha+\alpha\mu\omega+1}{\sqrt{\nu\tau}} - \alpha\sqrt{\nu\tau}\right) \,. \qquad (142)$$
The derivative of this function with respect to $x = \nu\tau$ is
$$\frac{1}{\sqrt{2\pi}\,x^{5/2}}\left(3(\alpha-1)\mu^2\omega^2 - x(-\alpha+\alpha\mu\omega+1) - \alpha x^2\right) \,. \qquad (143)$$
The derivative of the term $3(\alpha-1)\mu^2\omega^2 - x(-\alpha+\alpha\mu\omega+1) - \alpha x^2$ with respect to $x$ is $-1+\alpha-\mu\omega\alpha-2\alpha x < 0$, since $2\alpha x > 1.6\alpha$. Therefore the term is maximized with the smallest value for $x$, which is $x = \nu\tau = 0.8\cdot 0.8$. For $\mu\omega$ we use for each term the value which gives maximal contribution. We obtain an upper bound for the term:
$$3(-0.1\cdot 0.1)^2(\alpha_{01}-1) - (0.8\cdot 0.8)^2\alpha_{01} - 0.8\cdot 0.8\left((-0.1\cdot 0.1)\alpha_{01} - \alpha_{01} + 1\right) \;=\; -0.243569 \,. \qquad (144)$$
Therefore the derivative with respect to $x = \nu\tau$ is smaller than zero and the original function is decreasing in $\nu\tau$.

We now consider the derivative with respect to $x = \mu\omega$. The derivative with respect to $x$ of the function
$$\sqrt{\frac{2}{\pi}}\left(\frac{-(\alpha-1)x^2}{(\nu\tau)^{3/2}} + \frac{-\alpha+\alpha x+1}{\sqrt{\nu\tau}} - \alpha\sqrt{\nu\tau}\right) \qquad (145)$$
is
$$\sqrt{\frac{2}{\pi}}\,\frac{\alpha\nu\tau - 2(\alpha-1)x}{(\nu\tau)^{3/2}} \,. \qquad (146)$$
Since $-2x(-1+\alpha) + \nu\tau\alpha > -2\cdot 0.01\cdot(-1+\alpha_{01}) + 0.8\cdot 0.8\,\alpha_{01} > 1.0574 > 0$, the derivative is larger than zero. Consequently, the original function is increasing in $\mu\omega$.

The maximal value is obtained with the minimal $\nu\tau = 0.8\cdot 0.8$ and the maximal $\mu\omega = 0.1\cdot 0.1$. The maximal value is
$$\sqrt{\frac{2}{\pi}}\left(\frac{0.1\cdot 0.1\,\alpha_{01} - \alpha_{01} + 1}{\sqrt{0.8\cdot 0.8}} + \frac{0.1^2\,0.1^2\,(-1)(\alpha_{01}-1)}{(0.8\cdot 0.8)^{3/2}} - \sqrt{0.8\cdot 0.8}\,\alpha_{01}\right) \;=\; -1.72296 \,. \qquad (147)$$
Therefore the original function is smaller than zero.
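The two numerical values used in this proof are easy to reproduce. The following C sketch evaluates the upper bound (144) and the maximal value (147); α01 is the SELU constant from the main text, quoted here for convenience.

```c
#include <math.h>
#include <stdio.h>

/* Spot-check of the two numerical values used in the proof of Lemma 34. */
int main(void) {
    const double pi = 3.14159265358979323846;
    const double a  = 1.6732632423543772;   /* alpha01 */
    const double x  = 0.8 * 0.8;            /* minimal nu*tau */

    /* Upper bound (144) on the numerator of the derivative (143),
     * with mu*omega = -0.01 in the term that is linear in mu*omega. */
    double mw = -0.01;
    double term = 3.0 * mw * mw * (a - 1.0) - x * x * a - x * (mw * a - a + 1.0);
    printf("Eq. (144): %.6f   (expected -0.243569)\n", term);

    /* Maximal value (147) of the function of Lemma 34, at mu*omega = 0.01. */
    double mw2 = 0.1 * 0.1;
    double val = sqrt(2.0 / pi) * ((mw2 * a - a + 1.0) / sqrt(x)
                 - (a - 1.0) * mw2 * mw2 / pow(x, 1.5) - a * sqrt(x));
    printf("Eq. (147): %.5f   (expected -1.72296)\n", val);
    return 0;
}
```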
Lemma 35 (Function $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)\mu\omega}{(\nu\tau)^{3/2}} - \frac{3\alpha^2}{\sqrt{\nu\tau}}\right)$). For $\lambda = \lambda_{01}$ and $\alpha = \alpha_{01}$, $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)\mu\omega}{(\nu\tau)^{3/2}} - \frac{3\alpha^2}{\sqrt{\nu\tau}}\right) < 0$ and increasing in both $\nu\tau$ and $\mu\omega$.

Proof. The derivative of the function
$$\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)\mu\omega}{x^{3/2}} - \frac{3\alpha^2}{\sqrt{x}}\right) \qquad (148)$$
with respect to $x$ is
$$\sqrt{\frac{2}{\pi}}\left(\frac{3\alpha^2}{2x^{3/2}} - \frac{3(\alpha^2-1)\mu\omega}{2x^{5/2}}\right) \;=\; \frac{3\left(\alpha^2 x - (\alpha^2-1)\mu\omega\right)}{\sqrt{2\pi}\,x^{5/2}} \;>\; 0 \,, \qquad (149)$$
since $\alpha^2 x - \mu\omega(-1+\alpha^2) > \alpha_{01}^2\,0.8\cdot 0.8 - 0.1\cdot 0.1\cdot(-1+\alpha_{01}^2) > 1.77387$.

The derivative of the function
$$\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)x}{(\nu\tau)^{3/2}} - \frac{3\alpha^2}{\sqrt{\nu\tau}}\right) \qquad (150)$$
with respect to $x$ is
$$\sqrt{\frac{2}{\pi}}\,\frac{\alpha^2-1}{(\nu\tau)^{3/2}} \;>\; 0 \,. \qquad (151)$$
The maximal function value is obtained by the maximal $\nu\tau = 1.5\cdot 1.25$ and the maximal $\mu\omega = 0.1\cdot 0.1$. The maximal value is $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha_{01}^2-1)\,0.1\cdot 0.1}{(1.5\cdot 1.25)^{3/2}} - \frac{3\alpha_{01}^2}{\sqrt{1.5\cdot 1.25}}\right) = -4.88869$. Therefore the function is negative.

Lemma 36 (Function $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)\mu\omega}{\sqrt{\nu\tau}} - 3\alpha^2\sqrt{\nu\tau}\right)$). The function $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)\mu\omega}{\sqrt{\nu\tau}} - 3\alpha^2\sqrt{\nu\tau}\right) < 0$ is decreasing in $\nu\tau$ and increasing in $\mu\omega$.

Proof. The derivative of the function
$$\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)\mu\omega}{\sqrt{x}} - 3\alpha^2\sqrt{x}\right) \qquad (152)$$
with respect to $x$ is
$$\sqrt{\frac{2}{\pi}}\left(-\frac{(\alpha^2-1)\mu\omega}{2x^{3/2}} - \frac{3\alpha^2}{2\sqrt{x}}\right) \;=\; \frac{-(\alpha^2-1)\mu\omega - 3\alpha^2 x}{\sqrt{2\pi}\,x^{3/2}} \;<\; 0 \,, \qquad (153)$$
since $-3\alpha^2 x - \mu\omega(-1+\alpha^2) < -3\alpha_{01}^2\,0.8\cdot 0.8 + 0.1\cdot 0.1\,(-1+\alpha_{01}^2) < -5.35764$.

The derivative of the function
$$\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha^2-1)x}{\sqrt{\nu\tau}} - 3\alpha^2\sqrt{\nu\tau}\right) \qquad (154)$$
with respect to $x$ is
$$\sqrt{\frac{2}{\pi}}\,\frac{\alpha^2-1}{\sqrt{\nu\tau}} \;>\; 0 \,. \qquad (155)$$
The maximal function value is obtained for the minimal $\nu\tau = 0.8\cdot 0.8$ and the maximal $\mu\omega = 0.1\cdot 0.1$. The value is $\sqrt{\frac{2}{\pi}}\left(\frac{(\alpha_{01}^2-1)\,0.1\cdot 0.1}{\sqrt{0.8\cdot 0.8}} - 3\sqrt{0.8\cdot 0.8}\,\alpha_{01}^2\right) = -5.34347$. Thus, the function is negative.

Lemma 37 (Function $\nu\tau\, e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)$). The function $\nu\tau\, e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) > 0$ is increasing in $\nu\tau$ and decreasing in $\mu\omega$.

Proof. The derivative of the function
$$x\, e^{\frac{(\mu\omega+x)^2}{2x}}\operatorname{erfc}\left(\frac{\mu\omega+x}{\sqrt{2}\sqrt{x}}\right) \qquad (156)$$
with respect to $x$ is
$$\frac{e^{\frac{(\mu\omega+x)^2}{2x}}\left(x(x+2)-\mu^2\omega^2\right)\operatorname{erfc}\left(\frac{\mu\omega+x}{\sqrt{2}\sqrt{x}}\right)}{2x} + \frac{\mu\omega - x}{\sqrt{2\pi}\sqrt{x}} \,. \qquad (157)$$
This derivative is larger than zero, since
$$\frac{e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\left(\nu\tau(\nu\tau+2)-\mu^2\omega^2\right)\operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)}{2\nu\tau} + \frac{\mu\omega-\nu\tau}{\sqrt{2\pi}\sqrt{\nu\tau}} \;\ge\; \qquad (158)$$
$$\frac{0.4349\left(\nu\tau(\nu\tau+2)-\mu^2\omega^2\right)}{2\nu\tau} + \frac{\mu\omega-\nu\tau}{\sqrt{2\pi}\sqrt{\nu\tau}} \;>\; \frac{0.5\left(\nu\tau(\nu\tau+2)-\mu^2\omega^2\right)}{\sqrt{2\pi}\,\nu\tau} + \frac{\sqrt{\nu\tau}\left(\mu\omega-\nu\tau\right)}{\sqrt{2\pi}\,\nu\tau} \;=\;$$
$$\frac{-0.5\mu^2\omega^2 + \mu\omega\sqrt{\nu\tau} + 0.5(\nu\tau)^2 - \nu\tau\sqrt{\nu\tau} + \nu\tau}{\sqrt{2\pi}\,\nu\tau} \;=\; \frac{-0.5\mu^2\omega^2 + \mu\omega\sqrt{\nu\tau} + 0.25(\nu\tau)^2 + \left(0.5\,\nu\tau - \sqrt{\nu\tau}\right)^2}{\sqrt{2\pi}\,\nu\tau} \;>\; 0 \,.$$

We explain this chain of inequalities:
• The first inequality follows by applying Lemma 23, which says that $e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)$ is strictly monotonically decreasing. The minimal value that is larger than 0.4349 is taken on at the maximal values $\nu\tau = 1.5\cdot 1.25$ and $\mu\omega = 0.1\cdot 0.1$.

• The second inequality uses $\frac{1}{2}\sqrt{2\pi}\cdot 0.4349 = 0.545066 > 0.5$.

• The equalities are just algebraic reformulations.

• The last inequality follows from $-0.5\mu^2\omega^2 + \mu\omega\sqrt{\nu\tau} + 0.25(\nu\tau)^2 > 0.25(0.8\cdot 0.8)^2 - 0.5\cdot(0.1)^2(0.1)^2 - 0.1\cdot 0.1\cdot 0.8\cdot 0.8 = 0.09435 > 0$.

Therefore the function is increasing in $\nu\tau$. Decreasing in $\mu\omega$ follows from the decrease of $e^{x^2}\operatorname{erfc}(x)$. Positivity follows from the fact that erfc and the exponential function are positive and that $\nu\tau > 0$.
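Lemma 37 can also be spot-checked numerically over the restricted domain. The following C sketch (an illustration, not a proof) evaluates the function and central finite-difference approximations of its partial derivatives, which should be positive in ντ and negative in µω.

```c
#include <math.h>
#include <stdio.h>

/* Spot-check of Lemma 37:
 * f(mw, nt) = nt * exp((mw+nt)^2/(2*nt)) * erfc((mw+nt)/(sqrt(2)*sqrt(nt)))
 * should be positive, increasing in nt and decreasing in mw. */
static double f(double mw, double nt) {
    double u = (mw + nt) / (sqrt(2.0) * sqrt(nt));  /* u^2 = (mw+nt)^2/(2 nt) */
    return nt * exp(u * u) * erfc(u);
}

int main(void) {
    const double h = 1e-5;
    for (double mw = -0.01; mw <= 0.0100001; mw += 0.005)
        for (double nt = 0.64; nt <= 1.875; nt += 0.3)
            printf("mw=%+.3f nt=%.3f  f=%.6f  d/dnt~%+.5f  d/dmw~%+.5f\n",
                   mw, nt, f(mw, nt),
                   (f(mw, nt + h) - f(mw, nt - h)) / (2.0 * h),
                   (f(mw + h, nt) - f(mw - h, nt)) / (2.0 * h));
    return 0;
}
```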
1706.02515 | 168 | Positivity follows
Lemma 38 (Function ντ e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2 √(ντ)))). The function ντ e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2 √(ντ))) > 0 is increasing in ντ and decreasing in µω.

Proof. The derivative of the function

x e^{(µω+2x)²/(2x)} erfc((µω+2x)/(√2 √x))   (159)

with respect to x is

(1/(2 √π x)) ( √π e^{(µω+2x)²/(2x)} (2x(2x+1) − µ²ω²) erfc((µω+2x)/(√2 √x)) + √2 √x (µω − 2x) ) .   (160)

We only have to determine the sign of √π e^{(µω+2x)²/(2x)} (2x(2x+1) − µ²ω²) erfc((µω+2x)/(√2 √x)) + √2 √x (µω − 2x), since all other factors are obviously larger than zero.
This derivative is larger than zero, since
√π e^{(µω+2ντ)²/(2ντ)} (2ντ(2ντ+1) − µ²ω²) erfc((µω+2ντ)/(√2 √(ντ))) + √(ντ)(µω − 2ντ)   (161)
> 0.463979 (2ντ(2ντ+1) − µ²ω²) + √(ντ)(µω − 2ντ)
= −0.463979 µ²ω² + µω √(ντ) + 1.85592 (ντ)² + 0.927958 ντ − 2ντ √(ντ)
= µω (√(ντ) − 0.463979 µω) + 0.85592 (ντ)² + (ντ − √(ντ))² − 0.0720421 ντ > 0 .
We explain this chain of inequalities:

• The first inequality follows by applying Lemma 23, which says that e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2 √(ντ))) is strictly monotonically decreasing. The minimal value, which is larger than 0.261772, is taken on at the maximal values ντ = 1.5 · 1.25 and µω = 0.1 · 0.1, and 0.261772 · √π > 0.463979.

• The equalities are just algebraic reformulations.
• The last inequality follows from µω (√(ντ) − 0.463979 µω) + 0.85592 (ντ)² − 0.0720421 ντ > 0.85592 · (0.8 · 0.8)² − 0.1 · 0.1 · (√(1.5 · 1.25) + 0.1 · 0.1 · 0.463979) − 0.0720421 · 1.5 · 1.25 > 0.201766.

Therefore the function is increasing in ντ. Decreasing in µω follows from the decrease of e^{x²} erfc(x). Positivity follows from the fact that erfc and the exponential function are positive and that ντ > 0.
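Since the function of Lemma 38 is given in closed form, its monotonicity and positivity claims can also be checked numerically on the domain used here (µω ∈ [−0.01, 0.01], ντ ∈ [0.64, 1.875]). A minimal sketch, assuming SciPy is available (erfcx(z) = e^{z²} erfc(z)); the grid check is our own cross-check, not part of the proof.

```python
# Sketch: numerically check the monotonicity claims of Lemma 38.
import numpy as np
from scipy.special import erfcx

def f(mw, nt):
    # nu*tau * exp((mw+2nt)^2/(2nt)) * erfc((mw+2nt)/(sqrt(2)sqrt(nt)))
    z = (mw + 2.0 * nt) / (np.sqrt(2.0) * np.sqrt(nt))
    return nt * erfcx(z)          # erfcx(z) = exp(z**2)*erfc(z), z**2 = (mw+2nt)^2/(2nt)

mw = np.linspace(-0.01, 0.01, 101)
nt = np.linspace(0.64, 1.875, 101)
MW, NT = np.meshgrid(mw, nt, indexing="ij")
F = f(MW, NT)
assert np.all(np.diff(F, axis=1) > 0)   # increasing in nu*tau
assert np.all(np.diff(F, axis=0) < 0)   # decreasing in mu*omega
assert np.all(F > 0)                    # positivity
```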
Lemma 39 (Bounds on the Derivatives). The following bounds on the absolute values of the derivatives of the Jacobian entries J11(µ, ω, ν, τ, λ, α), J12(µ, ω, ν, τ, λ, α), J21(µ, ω, ν, τ, λ, α), and J22(µ, ω, ν, τ, λ, α) with respect to µ, ω, ν, and τ hold:
|∂J11/∂µ| ≤ 0.0031049101995398316 ,   |∂J11/∂ω| ≤ 1.055872374194189 ,   (162)
|∂J11/∂ν| ≤ 0.031242911235461816 ,   |∂J11/∂τ| ≤ 0.03749149348255419 ,
|∂J12/∂µ| ≤ 0.031242911235461816 ,   |∂J12/∂ω| ≤ 0.031242911235461816 ,
|∂J12/∂ν| ≤ 0.21232788238624354 ,    |∂J12/∂τ| ≤ 0.2124377655377270 ,
|∂J21/∂µ| ≤ 0.02220441024325437 ,    |∂J21/∂ω| ≤ 1.146955401845684 ,
|∂J21/∂ν| ≤ 0.14983446469110305 ,    |∂J21/∂τ| ≤ 0.17980135762932363 ,
|∂J22/∂µ| ≤ 0.14983446469110305 ,    |∂J22/∂ω| ≤ 0.14983446469110305 ,
|∂J22/∂ν| ≤ 1.805740052651535 ,      |∂J22/∂τ| ≤ 2.396685907216327 .
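For later numerical cross-checks it can be convenient to collect the sixteen constants of Eq. (162) in code. The dictionary layout and names below are our own; the values are copied from Eq. (162).

```python
# The derivative bounds of Lemma 39 (Eq. 162), keyed by (Jacobian entry, variable).
DERIVATIVE_BOUNDS = {
    ("J11", "mu"): 0.0031049101995398316, ("J11", "omega"): 1.055872374194189,
    ("J11", "nu"): 0.031242911235461816,  ("J11", "tau"): 0.03749149348255419,
    ("J12", "mu"): 0.031242911235461816,  ("J12", "omega"): 0.031242911235461816,
    ("J12", "nu"): 0.21232788238624354,   ("J12", "tau"): 0.2124377655377270,
    ("J21", "mu"): 0.02220441024325437,   ("J21", "omega"): 1.146955401845684,
    ("J21", "nu"): 0.14983446469110305,   ("J21", "tau"): 0.17980135762932363,
    ("J22", "mu"): 0.14983446469110305,   ("J22", "omega"): 0.14983446469110305,
    ("J22", "nu"): 1.805740052651535,     ("J22", "tau"): 2.396685907216327,
}
print(max(DERIVATIVE_BOUNDS.values()))   # 2.396685907216327
```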
Proof. For each derivative we compute a lower and an upper bound and take the maximum of the absolute value. A lower bound is determined by minimizing the single terms of the functions that represent the derivative. An upper bound is determined by maximizing the single terms of the functions that represent the derivative. Terms can be combined into larger terms for which the maximum and the minimum must be known. We apply many previous lemmata which state properties of functions representing single or combined terms. The more terms are combined, the tighter the bounds can be made.
Next we go through all the derivatives, where we use Lemma 25, Lemma 26, Lemma 27, Lemma 28, Lemma 29, Lemma 30, Lemma 21, and Lemma 23 without citing them. Furthermore, we use the bounds on the simple expressions t11, t22, ..., and T4 as defined in the aforementioned lemmata:
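All of the bounds below repeatedly evaluate terms of the form e^{x²} erfc(x). Evaluated naively this loses precision or overflows for larger arguments, so a numerical cross-check should use a scaled complementary error function. A minimal sketch, assuming SciPy is available (scipy.special.erfcx computes exactly e^{x²} erfc(x)); this is our own practical note, not part of the proof.

```python
# Stable evaluation of e^{x^2} * erfc(x): erfcx(x) == exp(x**2) * erfc(x).
import numpy as np
from scipy.special import erfc, erfcx

x = np.array([0.5, 1.0, 5.0, 10.0])
naive = np.exp(x**2) * erfc(x)   # loses accuracy (and eventually overflows) as x grows
stable = erfcx(x)                # remains accurate; erfcx(x) ~ 1/(x*sqrt(pi)) for large x
print(naive)
print(stable)
```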
∂J11/∂µ:

We use Lemma 31 and consider the expression α e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2 √(ντ))) − √(2/π) (α − 1)/√(ντ) in brackets. An upper bound on the maximum is

α01 e^{t1²} erfc(t1) − √(2/π) (α01 − 1)/√(T22) = 0.591017 .   (163)
A lower bound on the minimum is
α01 e^{T1²} erfc(T1) − √(2/π) (α01 − 1)/√(t22) = 0.056318 .   (164)
Thus, an upper bound on the maximal absolute value is
(1/2) λ01 ω_max² (α01 e^{t1²} erfc(t1) − √(2/π) (α01 − 1)/√(T22)) = 0.0031049101995398316 .   (165)
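The three numbers above can be cross-checked numerically. We assume here, based on how they are used in this proof, that t1 and T1 denote the minimal and maximal value of (µω+ντ)/(√2 √(ντ)) over the domain µω ∈ [−0.1·0.1, 0.1·0.1], ντ ∈ [0.8·0.8, 1.5·1.25], and that t22, T22 denote the minimal and maximal ντ; λ01 ≈ 1.0507 and α01 ≈ 1.6733 are the SELU parameters.

```python
import numpy as np
from scipy.special import erfcx   # erfcx(x) = exp(x**2) * erfc(x)

alpha01, lam01 = 1.6732632423543772, 1.0507009873554805
t22, T22 = 0.8 * 0.8, 1.5 * 1.25                    # assumed extreme values of nu*tau
t1 = (-0.01 + t22) / (np.sqrt(2) * np.sqrt(t22))    # assumed minimal (mu*omega+nu*tau)/(sqrt(2)sqrt(nu*tau))
T1 = ( 0.01 + T22) / (np.sqrt(2) * np.sqrt(T22))    # assumed maximal value

upper = alpha01 * erfcx(t1) - np.sqrt(2/np.pi) * (alpha01 - 1) / np.sqrt(T22)
lower = alpha01 * erfcx(T1) - np.sqrt(2/np.pi) * (alpha01 - 1) / np.sqrt(t22)
bound = 0.5 * lam01 * 0.1**2 * max(abs(upper), abs(lower))
print(upper, lower, bound)   # approx. 0.591, 0.056, 0.0031 (cf. Eqs. 163-165)
```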
∂J11/∂ω:

We use the corresponding lemma and consider the expression √(2/π) (α − 1) µω/√(ντ) − α (µω + 1) e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2 √(ντ))) in brackets.

An upper bound on the maximum is

√(2/π) (α01 − 1) T11/√(t22) − α01 (t11 + 1) e^{T1²} erfc(T1) = −0.713808 .   (166)
A lower bound on the minimum is

√(2/π) (α01 − 1) t11/√(t22) − α01 (T11 + 1) e^{t1²} erfc(t1) = −0.99987 .   (167)
This term is subtracted, and 2 − erfc(x) > 0, therefore we have to use the minimum and the maximum for the argument of erfc.
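This is the usual interval-arithmetic step: for a difference A − B with A ∈ [a_lo, a_hi] and B ∈ [b_lo, b_hi], the extremes pair the maximum of one term with the minimum of the other. A generic sketch follows; the helper name and the example interval [1.9, 2.0] are hypothetical, only the bracket interval [−0.99987, −0.713808] is taken from the text above.

```python
def difference_bounds(a_lo, a_hi, b_lo, b_hi):
    """Interval bounds for A - B given A in [a_lo, a_hi] and B in [b_lo, b_hi]."""
    return a_lo - b_hi, a_hi - b_lo   # (lower bound, upper bound)

# Example: subtracting the bracket above (in [-0.99987, -0.713808]) from a term
# assumed to lie in the hypothetical interval [1.9, 2.0]:
lo, hi = difference_bounds(1.9, 2.0, -0.99987, -0.713808)
print(lo, hi)                         # 2.613808 2.99987
print(max(abs(lo), abs(hi)))          # bound on the maximal absolute value
```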
Thus, an upper bound on the maximal absolute value is
1.055872374194189 .   (168)

∂J11/∂ν:
We consider the term in brackets
α e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2 √(ντ))) + √(2/π) ((α − 1) µω/(ντ)^{3/2} − α/√(ντ)) .   (169)

We apply Lemma 33 for the first sub-term. An upper bound on the maximum is

α01 e^{t1²} erfc(t1) + √(2/π) ((α01 − 1) T11/T22^{3/2} − α01/√(T22)) = 0.0104167 .   (170)

A lower bound on the minimum is

α01 e^{T1²} erfc(T1) + √(2/π) ((α01 − 1) t11/t22^{3/2} − α01/√(t22)) = −0.95153 .   (171)
Thus, an upper bound on the maximal absolute value is
(1/4) λ01 ω_max τ_max |α01 e^{T1²} erfc(T1) + √(2/π) ((α01 − 1) t11/t22^{3/2} − α01/√(t22))| = 0.031242911235461816 .   (172)
∂J11/∂τ:

We use the results of item ∂J11/∂ν, where the brackets are only differently scaled. Thus, an upper bound on the maximal absolute value is

(1/4) λ01 ω_max ν_max |α01 e^{T1²} erfc(T1) + √(2/π) ((α01 − 1) t11/t22^{3/2} − α01/√(t22))| = 0.03749149348255419 .   (173)
∂J12/∂µ:

Since ∂J12/∂µ = ∂J11/∂ν, an upper bound on the maximal absolute value is

(1/4) λ01 ω_max τ_max |α01 e^{T1²} erfc(T1) + √(2/π) ((α01 − 1) t11/t22^{3/2} − α01/√(t22))| = 0.031242911235461816 .   (174)
∂J12/∂ω:

We use the results of item ∂J11/∂ν, where the brackets are only differently scaled. Thus, an upper bound on the maximal absolute value is
(1/4) λ01 µ_max τ_max |α01 e^{T1²} erfc(T1) + √(2/π) ((α01 − 1) t11/t22^{3/2} − α01/√(t22))| = 0.031242911235461816 .   (175)
∂J12/∂ν:

For the second term in brackets, we see that min e^{T1²} erfc(T1) = 0.465793 and max e^{t1²} erfc(t1) = 1.53644. We now check different values for the remaining expression in Eq. (176), where we maximize or minimize all single terms.
A lower bound on the minimum of this expression is
−1.83112 .   (177)

An upper bound on the maximum of this expression is 0.0802158 .   (178)

An upper bound on the maximum is 0.212328 .   (179)
A lower bound on the minimum is
−0.179318 .   (180)
Thus, an upper bound on the maximal absolute value is 0.21232788238624354 .   (181)
∂J12/∂τ:
We use Lemma 34 to obtain an upper bound on the maximum of the expression of the lemma:

√(2/π) ( 0.1²·0.1²·(−1)(α01 − 1)/(0.8·0.8)^{3/2} − √(0.8·0.8) α01 + (0.1·0.1·α01 − α01 + 1)/√(0.8·0.8) ) = −1.72296 .   (182)
We use Lemma 34 to obtain a lower bound on the minimum of the expression of the lemma:

√(2/π) ( 0.1²·0.1²·(−1)(α01 − 1)/(1.5·1.25)^{3/2} − √(1.5·1.25) α01 + ((−0.1·0.1)·α01 − α01 + 1)/√(1.5·1.25) ) = −2.2302 .   (183)
Next we apply the corresponding lemma to the expression ντ α e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2 √(ντ))). An upper bound on the maximum of this expression is

1.5·1.25 · e^{(1.5·1.25−0.1·0.1)²/(2·1.5·1.25)} α01 erfc((1.5·1.25−0.1·0.1)/(√2 √(1.5·1.25))) = 1.37381 .   (184)
We use Lemma 37 to obtain a lower bound on the minimum of this expression:

0.8·0.8 · e^{(0.8·0.8+0.1·0.1)²/(2·0.8·0.8)} α01 erfc((0.8·0.8+0.1·0.1)/(√2 √(0.8·0.8))) = 0.620462 .   (185)
Next we apply Lemma 23 for 2 α e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2 √(ντ))). An upper bound on this expression is

2 e^{(0.8·0.8−0.1·0.1)²/(2·0.8·0.8)} α01 erfc((0.8·0.8−0.1·0.1)/(√2 √(0.8·0.8))) = 1.96664 .   (186)
A lower bound on this expression is

2 e^{(1.5·1.25+0.1·0.1)²/(2·1.5·1.25)} α01 erfc((1.5·1.25+0.1·0.1)/(√2 √(1.5·1.25))) = 1.4556 .   (187)
The sum of the minimal values of the terms is −2.23019 + 0.62046 + 1.45560 = −0.154133. The sum of the maximal values of the terms is −1.72295 + 1.37380 + 1.96664 = 1.61749.
Thus, an upper bound on the maximal absolute value is 0.2124377655377270 .   (188)
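The corner evaluations in Eqs. (182)–(187) and the two sums above can be reproduced numerically; a sketch follows. The function names are ours, α01 ≈ 1.6733 is the SELU parameter, and the formulas follow Eqs. (182)–(187) as printed above.

```python
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z)

a = 1.6732632423543772            # alpha01

def g(mw, nt):                    # exp((mw+nt)^2/(2 nt)) * erfc((mw+nt)/(sqrt(2)sqrt(nt)))
    return erfcx((mw + nt) / (np.sqrt(2) * np.sqrt(nt)))

def lemma34(mw, nt):              # expression of Eqs. (182)/(183) as printed above
    return np.sqrt(2/np.pi) * (mw**2 * (-(a - 1)) / nt**1.5
                               - np.sqrt(nt) * a + (mw * a - a + 1) / np.sqrt(nt))

e182 = lemma34( 0.01, 0.64)          # ~ -1.72296
e183 = lemma34(-0.01, 1.875)         # ~ -2.2302
e184 = 1.875 * a * g(-0.01, 1.875)   # ~ 1.37381
e185 = 0.64  * a * g( 0.01, 0.64)    # ~ 0.620462
e186 = 2.0   * a * g(-0.01, 0.64)    # ~ 1.96664
e187 = 2.0   * a * g( 0.01, 1.875)   # ~ 1.4556
print(e183 + e185 + e187)            # ~ -0.154  (sum of minimal values)
print(e182 + e184 + e186)            # ~  1.617  (sum of maximal values)
```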
∂J21/∂µ:
An upper bound on the maximum is
0.0222044 .   (189)

An upper bound on the absolute minimum is
0.00894889 .   (190)
Thus, an upper bound on the maximal absolute value is
0.02220441024325437 .   (191)
∂J21/∂ω:
An upper bound on the maximum is
1.14696 .   (192)
A lower bound on the minimum is
−0.359403 .   (193)
Thus, an upper bound on the maximal absolute value is 1.146955401845684 .   (194)
∂J21/∂ν:
An upper bound on the maximum is
0.149834 .   (195)
A lower bound on the minimum is
−0.0351035 .   (196)
Thus, an upper bound on the maximal absolute value is 0.14983446469110305 .   (197)
∂J21/∂τ:
An upper bound on the maximum is
0.179801 .   (198)
A lower bound on the minimum is
−0.0421242 .   (199)
Thus, an upper bound on the maximal absolute value is
0.17980135762932363 .   (200)
∂J22/∂µ:

We use the fact that ∂J22/∂µ = ∂J21/∂ν. Thus, an upper bound on the maximal absolute value is 0.14983446469110305 .   (201)
∂J22/∂ω:
An upper bound on the maximum is
0.149834 .   (202)
A lower bound on the minimum is
−0.0351035 .   (203)
Thus, an upper bound on the maximal absolute value is
0.14983446469110305 .   (204)
∂J22/∂ν:
21) ww We apply Lemma}35]to the expression 2 (Sts - 3). Using Lemna an upper bound on the maximum is
upper bound on the maximum is 1 DrorTinax® 2 ((a1-1Tu y2 (6 2) us TEL? | 1706.02515#187 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
Using Lemma 35, an upper bound on the maximum is 1.19441. Using Lemma 35, a lower bound on the minimum is −1.80574. Thus, an upper bound on the maximal absolute value is 1.805740052651535.
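The final figure is just the larger of the two one-sided magnitudes; spelled out with the one-sided bounds rounded as above:

max(|1.19441|, |−1.80574|) = 1.80574 , whose unrounded value is 1.805740052651535 .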
∂J22/∂τ:

We apply Lemma 36, Lemma 37, and Lemma 38 to the corresponding sub-expressions of ∂J22/∂τ. Combining the results of these lemmata yields an upper bound on the maximum of 2.39669 and, in the same way, a lower bound on the minimum.
Thus, an upper bound on the maximal absolute value is 2.396685907216327.
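The three bounds can be spot-checked numerically. The sketch below is a sanity check, not a proof: it assumes the closed form of ξ̃ given earlier (re-derived, not quoted), approximates J22 = ∂ξ̃/∂ν and its derivatives by central differences, and scans a coarse grid of the domain; the reported maxima should stay below roughly 0.1498, 1.8057, and 2.3967.

# Numerical sanity check (not the paper's analytic argument): estimate
# J22 = d xi~ / d nu by central differences and bound |dJ22/d omega|,
# |dJ22/d nu|, |dJ22/d tau| on a coarse grid of the domain
# mu, omega in [-0.1, 0.1], nu in [0.8, 1.5], tau in [0.8, 1.25].
# The closed form of xi~ used here is re-derived from the SELU
# definition (an assumption, not copied from the paper).
import itertools
import numpy as np
from scipy.special import erfc

LAM = 1.0507009873554805    # lambda_01
ALPHA = 1.6732632423543772  # alpha_01

def xi_tilde(mu, omega, nu, tau, lam=LAM, alpha=ALPHA):
    m, v = mu * omega, nu * tau  # mean and variance of the pre-activation
    s = np.sqrt(2.0 * v)
    return 0.5 * lam**2 * (
        (m**2 + v) * (2.0 - erfc(m / s))
        + m * np.sqrt(2.0 * v / np.pi) * np.exp(-m**2 / (2.0 * v))
        + alpha**2 * (np.exp(2.0 * m + 2.0 * v) * erfc((m + 2.0 * v) / s)
                      - 2.0 * np.exp(m + 0.5 * v) * erfc((m + v) / s)
                      + erfc(m / s))
    )

def J22(mu, omega, nu, tau, h=1e-5):
    # central difference of xi~ in nu
    return (xi_tilde(mu, omega, nu + h, tau) - xi_tilde(mu, omega, nu - h, tau)) / (2 * h)

def dJ22(mu, omega, nu, tau, which, h=1e-4):
    # central difference of J22 in omega / nu / tau
    args = {"mu": mu, "omega": omega, "nu": nu, "tau": tau}
    lo, hi = dict(args), dict(args)
    lo[which] -= h
    hi[which] += h
    return (J22(**hi) - J22(**lo)) / (2 * h)

grid = itertools.product(np.linspace(-0.1, 0.1, 5),
                         np.linspace(-0.1, 0.1, 5),
                         np.linspace(0.8, 1.5, 5),
                         np.linspace(0.8, 1.25, 5))
worst = {"omega": 0.0, "nu": 0.0, "tau": 0.0}
for mu, omega, nu, tau in grid:
    for which in worst:
        worst[which] = max(worst[which], abs(dJ22(mu, omega, nu, tau, which)))

print(worst)  # expected to stay below ~0.1498, ~1.8057, ~2.3967 respectively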
Lemma 40 (Derivatives of the Mapping). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25].

The derivative ∂µ̃(µ, ω, ν, τ, λ, α)/∂µ has the sign of ω.
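A sketch of why the sign claim holds, using a closed form of ∂µ̃/∂µ that is re-derived here from the definition of µ̃ (so treat it as an assumption rather than a quotation):

∂µ̃/∂µ = (λω/2) ( 2 − erfc(µω/(√2 √(ντ))) + α e^{µω+ντ/2} erfc((µω+ντ)/(√2 √(ντ))) ) .

Since 0 < erfc(x) < 2 for every finite x and the remaining term is positive, the factor in parentheses is strictly positive; with λ > 0, the sign of ∂µ̃/∂µ is therefore the sign of ω.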