doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1706.02515 | 320 | e (1.37713 erfc(0.883883y + 0.565685) â 1.37349¢~0-78125(440.60") <0 = (1.37713 erfe(0.883883y + 0.565685) â 1.37349¢~0-78129(0-0.68)") <0 (. 37713 erfe(0.883883y + 0.565685) â 1.37349" 0-78129(040.68)") < (. 37713 erfc(0.883883 - 0.01 + 0.565685) â 1.37349e~0-78125(0.01+40.6 oa) = 0.5935272325870631 â 0.987354705867739 < 0.
Therefore, the values SY = 0.64 and « = â0.01 give a global maximum of the function f(x, y) in the domain â0.01 < y < 0.01 and 0.64 < x < 1.875 and the values y = 1.875 and a = 0.01 give the global minimum.
# A4 Additional information on experiments
In this section, we report the hyperparameters that were considered for each method and data set and give details on the processing of the data sets.
84
(320) | 1706.02515#320 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 321 | In this section, we report the hyperparameters that were considered for each method and data set and give details on the processing of the data sets.
84
(320)
# 121 UCI Machine Learning Repository data sets: Hyperparameters
For the UCI data sets, the best hyperparameter setting was determined by a grid-search over all hyperparameter combinations using 15% of the training data as validation set. The early stopping parameter was determined on the smoothed learning curves of 100 epochs of the validation set. Smoothing was done using moving averages of 10 consecutive values. We tested ârectangularâ and âconicâ layers â rectangular layers have constant number of hidden units in each layer, conic layers start with the given number of hidden units in the ï¬rst layer and then decrease the number of hidden units to the size of the output layer according to the geometric progession. If multiple hyperparameters provided identical performance on the validation set, we preferred settings with a higher number of layers, lower learning rates and higher dropout rates. All methods had the chance to adjust their hyperparameters to the data set at hand.
Table A4: Hyperparameters considered for self-normalizing networks in the UCI data sets. | 1706.02515#321 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 322 | Table A4: Hyperparameters considered for self-normalizing networks in the UCI data sets.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Dropout rate Layer form {1024, 512, 256} {2, 3, 4, 8, 16, 32} {0.01, 0.1, 1} {0.05, 0} {rectangular, conic}
Table A5: Hyperparameters considered for ReLU networks with MS initialization in the UCI data sets.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Dropout rate Layer form {1024, 512, 256} {2,3,4,8,16,32} {0.01, 0.1, 1} {0.5, 0} {rectangular, conic}
Table A6: Hyperparameters considered for batch normalized networks in the UCI data sets.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Normalization Layer form {1024, 512, 256} {2, 3, 4, 8, 16, 32} {0.01, 0.1, 1} {Batchnorm} {rectangular, conic}
85
Table A7: Hyperparameters considered for weight normalized networks in the UCI data sets. | 1706.02515#322 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 323 | 85
Table A7: Hyperparameters considered for weight normalized networks in the UCI data sets.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Normalization Layer form {1024, 512, 256} {2, 3, 4, 8, 16, 32} {0.01, 0.1, 1} {Weightnorm} {rectangular, conic}
Table A8: Hyperparameters considered for layer normalized networks in the UCI data sets.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Normalization Layer form {1024, 512, 256} {2, 3, 4, 8, 16, 32} {0.01, 0.1, 1} {Layernorm} {rectangular, conic}
Table A9: Hyperparameters considered for Highway networks in the UCI data sets.
Hyperparameter Considered values Number of hidden layers Learning rate Dropout rate {2, 3, 4, 8, 16, 32} {0.01, 0.1, 1} {0, 0.5}
Table A10: Hyperparameters considered for Residual networks in the UCI data sets. | 1706.02515#323 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 325 | Methods compared. We used data sets and preprocessing scripts by Fernández-Delgado et al. [10] for data preparation and deï¬ning training and test sets. With several ï¬aws in the method comparison[37] that we avoided, the authors compared 179 machine learning methods of 17 groups in their experiments. The method groups were deï¬ned by Fernández-Delgado et al. [10] as follows: Support Vector Machines, RandomForest, Multivariate adaptive regression splines (MARS), Boosting, Rule-based, logistic and multinomial regression, Discriminant Analysis (DA), Bagging, Nearest Neighbour, DecisionTree, other Ensembles, Neural Networks, Bayesian, Other Methods, generalized linear models (GLM), Partial least squares and principal component regression (PLSR), and Stacking. However, many of methods assigned to those groups were merely different implementations of the same method. Therefore, we selected one representative of each of the 17 groups for method compar- ison. The representative method was chosen as the groupâs method with the median performance across all tasks. Finally, we included 17 other machine learning methods of Fernández-Delgado et | 1706.02515#325 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 327 | Results of FNN methods for all 121 data sets. The results of the compared FNN methods can be found in Table A11.
Small and large data sets. We assigned each of the 121 UCI data sets into the group âlarge datasetsâ or âsmall datasetsâ if the had more than 1,000 data points or less, respectively. We expected that Deep Learning methods require large data sets to competitive to other machine learning methods. This resulted in 75 small and 46 large data sets.
Results. The results of the method comparison are given in Tables A12 and A13 for small and large data sets, respectively. On small data sets, SVMs performed best followed by RandomForest and SNNs. On large data sets, SNNs are the best method followed by SVMs and Random Forest.
87 | 1706.02515#327 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 328 | 87
Table A11: Comparison of FNN methods on all 121 UCI data sets.. The table reports the accuracy of FNN methods at each individual task of the 121 UCI data sets. The ï¬rst column gives the name of the data set, the second the number of training data points N , the third the number of features M and the consecutive columns the accuracy values of self-normalizing networks (SNNs), ReLU networks without normalization and with MSRA initialization (MS), Highway networks (HW), Residual Networks (ResNet), networks with batch normalization (BN), weight normalization (WN), and layer normalization (LN). | 1706.02515#328 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 329 | dataset N M SNN MS HW ResNet BN abalone acute-inï¬ammation acute-nephritis adult annealing arrhythmia audiology-std balance-scale balloons bank blood breast-cancer breast-cancer-wisc breast-cancer-wisc-diag breast-cancer-wisc-prog breast-tissue car cardiotocography-10clases cardiotocography-3clases chess-krvk chess-krvkp congressional-voting conn-bench-sonar-mines-rocks conn-bench-vowel-deterding connect-4 contrac credit-approval cylinder-bands dermatology echocardiogram ecoli energy-y1 energy-y2 fertility ï¬ags glass haberman-survival hayes-roth heart-cleveland heart-hungarian heart-switzerland heart-va hepatitis hill-valley horse-colic ilpd-indian-liver 4177 120 120 48842 898 452 196 625 16 4521 748 286 699 569 198 106 1728 2126 2126 28056 3196 435 208 990 67557 1473 690 512 366 131 336 768 768 100 194 214 306 160 303 294 123 200 155 1212 368 583 9 7 7 15 32 263 60 5 5 17 5 10 10 31 34 10 | 1706.02515#329 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 330 | 1473 690 512 366 131 336 768 768 100 194 214 306 160 303 294 123 200 155 1212 368 583 9 7 7 15 32 263 60 5 5 17 5 10 10 31 34 10 7 22 22 7 37 17 61 12 43 10 16 36 35 11 8 9 9 10 29 10 4 4 14 13 13 13 20 101 26 10 0.6657 1.0000 1.0000 0.8476 0.7600 0.6549 0.8000 0.9231 1.0000 0.8903 0.7701 0.7183 0.9714 0.9789 0.6735 0.7308 0.9838 0.8399 0.9153 0.8805 0.9912 0.6147 0.7885 0.9957 0.8807 0.5190 0.8430 0.7266 0.9231 0.8182 0.8929 0.9583 0.9063 0.9200 0.4583 0.7358 0.7368 0.6786 0.6184 0.7945 0.3548 0.3600 0.7692 0.5248 0.8088 0.6986 0.6284 1.0000 1.0000 0.8487 0.7300 0.6372 0.6800 0.9231 0.5000 0.8876 0.7754 0.6901 0.9714 | 1706.02515#330 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 331 | 0.8487 0.7300 0.6372 0.6800 0.9231 0.5000 0.8876 0.7754 0.6901 0.9714 0.9718 0.7347 0.4615 0.9861 0.8418 0.8964 0.8606 0.9900 0.6055 0.8269 0.9935 0.8831 0.5136 0.8430 0.7656 0.9121 0.8485 0.8333 0.9583 0.8958 0.8800 0.4583 0.6038 0.7237 0.4643 0.6053 0.8356 0.3871 0.2600 0.7692 0.5116 0.8529 0.6644 0.6427 1.0000 1.0000 0.8453 0.3600 0.6283 0.7200 0.9103 0.2500 0.8885 0.7968 0.7465 0.9771 0.9789 0.8367 0.6154 0.9560 0.8456 0.9171 0.5255 0.9900 0.5872 0.8462 0.9784 0.8599 0.5054 0.8547 0.7969 0.9780 0.6061 0.8690 0.8802 | 1706.02515#331 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 332 | 0.8462 0.9784 0.8599 0.5054 0.8547 0.7969 0.9780 0.6061 0.8690 0.8802 0.9010 0.8800 0.4375 0.6415 0.6447 0.7857 0.6316 0.7945 0.5806 0.4000 0.6667 0.5000 0.7794 0.6781 0.6466 1.0000 1.0000 0.8484 0.2600 0.6460 0.8000 0.9167 1.0000 0.8796 0.8021 0.7465 0.9714 0.9507 0.8163 0.4231 0.9282 0.8173 0.9021 0.8543 0.9912 0.5963 0.8077 0.9935 0.8716 0.5136 0.8430 0.7734 0.9231 0.8485 0.8214 0.8177 0.8750 0.8400 0.3750 0.6415 0.6842 0.7143 0.5658 0.8082 0.3226 0.2600 0.7692 0.5396 0.8088 0.6712 0.6303 1.0000 1.0000 0.8499 0.1200 0.5929 0.6400 | 1706.02515#332 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 333 | 0.5396 0.8088 0.6712 0.6303 1.0000 1.0000 0.8499 0.1200 0.5929 0.6400 0.9231 1.0000 0.8823 0.7647 0.7324 0.9829 0.9789 0.7755 0.4615 0.9606 0.7910 0.9096 0.8781 0.9862 0.5872 0.7115 0.9610 0.8729 0.4538 0.8721 0.7500 0.9341 0.8485 0.8214 0.8646 0.8750 0.6800 0.4167 0.5849 0.7368 0.7500 0.5789 0.8493 0.3871 0.2800 0.8718 0.5050 0.8529 0.5959 WN 0.6351 1.0000 1.0000 0.8453 0.6500 0.6018 0.7200 0.9551 0.0000 0.8850 0.7594 0.6197 0.9657 0.9718 0.8367 0.5385 0.9769 0.8606 0.8945 0.7673 0.9912 0.5872 0.8269 0.9524 0.8833 0.4755 0.9070 | 1706.02515#333 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 336 | image-segmentation ionosphere iris led-display lenses letter libras low-res-spect lung-cancer lymphography magic mammographic miniboone molec-biol-promoter molec-biol-splice monks-1 monks-2 monks-3 mushroom musk-1 musk-2 nursery oocytes_merluccius_nucleus_4d oocytes_merluccius_states_2f oocytes_trisopterus_nucleus_2f oocytes_trisopterus_states_5b optical ozone page-blocks parkinsons pendigits pima pittsburg-bridges-MATERIAL pittsburg-bridges-REL-L pittsburg-bridges-SPAN pittsburg-bridges-T-OR-D pittsburg-bridges-TYPE planning plant-margin plant-shape plant-texture post-operative primary-tumor ringnorm seeds semeion soybean spambase spect spectf statlog-australian-credit statlog-german-credit 2310 351 150 1000 24 20000 360 531 32 148 19020 961 130064 106 3190 556 601 554 8124 476 6598 12960 1022 1022 912 912 5620 2536 5473 195 10992 768 106 | 1706.02515#336 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 337 | 148 19020 961 130064 106 3190 556 601 554 8124 476 6598 12960 1022 1022 912 912 5620 2536 5473 195 10992 768 106 103 92 102 105 182 1600 1600 1599 90 330 7400 210 1593 683 4601 265 267 690 1000 19 34 5 8 5 17 91 101 57 19 11 6 51 58 61 7 7 7 22 167 167 9 42 26 26 33 63 73 11 23 17 9 8 8 8 8 8 13 65 65 65 9 18 21 8 257 36 58 23 45 15 25 0.9114 0.8864 0.9730 0.7640 0.6667 0.9726 0.7889 0.8571 0.6250 0.9189 0.8692 0.8250 0.9307 0.8462 0.9009 0.7523 0.5926 0.6042 1.0000 0.8739 0.9891 0.9978 0.8235 0.9529 0.7982 0.9342 0.9711 0.9700 0.9583 0.8980 0.9706 0.7552 0.8846 0.6923 0.6957 0.8400 0.6538 0.6889 0.8125 0.7275 0.8125 0.7273 0.5244 0.9751 0.8846 0.9196 | 1706.02515#337 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 338 | 0.6538 0.6889 0.8125 0.7275 0.8125 0.7273 0.5244 0.9751 0.8846 0.9196 0.8511 0.9409 0.6398 0.4973 0.5988 0.7560 0.9090 0.9091 0.9189 0.7200 1.0000 0.9712 0.8667 0.8496 0.3750 0.7297 0.8629 0.8083 0.9250 0.7692 0.8482 0.6551 0.6343 0.7454 1.0000 0.8655 0.9945 0.9988 0.8196 0.9490 0.8728 0.9430 0.9666 0.9732 0.9708 0.9184 0.9714 0.7656 0.8462 0.7692 0.5217 0.8800 0.6538 0.6667 0.8125 0.6350 0.7900 0.7273 0.5000 0.9843 0.8654 0.9296 0.8723 0.9461 0.6183 0.6043 0.6802 0.7280 0.9024 0.9432 0.8378 0.7040 1.0000 0.8984 0.8222 | 1706.02515#338 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 339 | 0.6043 0.6802 0.7280 0.9024 0.9432 0.8378 0.7040 1.0000 0.8984 0.8222 0.9023 0.1250 0.7297 0.8673 0.7917 0.9270 0.6923 0.8833 0.5833 0.6389 0.5880 1.0000 0.8992 0.9915 1.0000 0.7176 0.9490 0.8289 0.9342 0.9644 0.9716 0.9656 0.8367 0.9671 0.7188 0.9231 0.6923 0.5652 0.8800 0.5385 0.6000 0.8375 0.6325 0.7900 0.5909 0.4512 0.9692 0.9423 0.9447 0.8617 0.9435 0.6022 0.8930 0.6802 0.7760 0.8919 0.9545 0.9730 0.7160 0.6667 0.9762 0.7111 0.8647 0.2500 0.6757 0.8723 0.7833 0.9254 0.7692 0.8557 0.7546 0.6273 0.5833 1.0000 0.8739 0.9964 | 1706.02515#339 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 340 | 0.7833 0.9254 0.7692 0.8557 0.7546 0.6273 0.5833 1.0000 0.8739 0.9964 0.9994 0.8000 0.9373 0.7719 0.8947 0.9627 0.9669 0.9605 0.9184 0.9708 0.7135 0.9231 0.8462 0.5652 0.8800 0.6538 0.7111 0.7975 0.5150 0.8000 0.7273 0.3902 0.9811 0.8654 0.9146 0.8670 0.9461 0.6667 0.7005 0.6395 0.7720 0.8481 0.9432 0.9189 0.6280 0.8333 0.9796 0.7444 0.8571 0.5000 0.7568 0.8713 0.8167 0.9262 0.7692 0.8519 0.9074 0.3287 0.5278 0.9990 0.8235 0.9982 0.9994 0.8078 0.9333 0.7456 0.8947 0.9716 0.9669 0.9613 0.8571 0.9734 0.7188 0.8846 0.7692 | 1706.02515#340 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 341 | 0.7456 0.8947 0.9716 0.9669 0.9613 0.8571 0.9734 0.7188 0.8846 0.7692 0.5652 0.8800 0.1154 0.6222 0.7600 0.2850 0.8200 0.5909 0.5122 0.9843 0.8654 0.9372 0.8883 0.9426 0.6344 0.2299 0.6802 0.7520 0.8938 0.9318 1.0000 0.6920 0.8333 0.9580 0.8000 0.8872 0.5000 0.7568 0.8690 0.8292 0.9272 0.6923 0.8494 0.5000 0.6644 0.5231 0.9995 0.8992 0.9927 0.9966 0.8078 0.9020 0.7939 0.9254 0.9638 0.9748 0.9730 0.8163 0.9620 0.6979 0.8077 0.6538 0.6522 0.8800 0.4615 0.6444 0.8175 0.6575 0.8175 0.5455 0.5000 0.9719 0.8846 0.9322 0.8537 0.9504 | 1706.02515#341 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 344 | statlog-heart statlog-image statlog-landsat statlog-shuttle statlog-vehicle steel-plates synthetic-control teaching thyroid tic-tac-toe titanic trains twonorm vertebral-column-2clases vertebral-column-3clases wall-following waveform waveform-noise wine wine-quality-red wine-quality-white yeast zoo 270 2310 6435 58000 846 1941 600 151 7200 958 2201 10 7400 310 310 5456 5000 5000 178 1599 4898 1484 101 14 19 37 10 19 28 61 6 22 10 4 0.9254 0.9549 0.9100 0.9990 0.8009 0.7835 0.9867 0.5000 0.9816 0.9665 0.7836 30 NA 21 7 7 25 22 41 14 12 12 9 17 0.9805 0.8312 0.8312 0.9098 0.8480 0.8608 0.9773 0.6300 0.6373 0.6307 0.9200 0.8358 0.9757 0.9075 0.9983 0.8294 0.7567 0.9800 0.6053 0.9770 0.9833 0.7909 NA 0.9778 0.8701 0.8052 | 1706.02515#344 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 345 | 0.7567 0.9800 0.6053 0.9770 0.9833 0.7909 NA 0.9778 0.8701 0.8052 0.9076 0.8312 0.8328 0.9318 0.6250 0.6479 0.6173 1.0000 0.7761 0.9584 0.9110 0.9977 0.7962 0.7608 0.9867 0.5263 0.9708 0.9749 0.7927 NA 0.9708 0.8571 0.7922 0.9230 0.8320 0.8696 0.9091 0.5625 0.5564 0.6065 0.8800 0.8657 0.9584 0.9055 0.9992 0.7583 0.7629 0.9600 0.5526 0.9799 0.9623 0.7727 NA 0.9735 0.8312 0.7532 0.9223 0.8360 0.8584 0.9773 0.6150 0.6307 0.5499 1.0000 0.7910 0.9671 0.9040 0.9988 0.7583 0.7031 0.9733 0.5000 0.9778 0.9833 0.7800 0.5000 0.9757 | 1706.02515#345 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 346 | 0.9988 0.7583 0.7031 0.9733 0.5000 0.9778 0.9833 0.7800 0.5000 0.9757 0.8312 0.7792 0.9333 0.8360 0.8480 0.9773 0.5450 0.5335 0.4906 0.7200 0.8657 0.9515 0.8925 0.9988 0.8009 0.7856 0.9867 0.3158 0.9807 0.9707 0.7818 0.5000 0.9730 0.6623 0.7403 0.9274 0.8376 0.8640 0.9773 0.5575 0.5482 0.5876 0.9600 0.7910 0.9757 0.9040 0.9987 0.7915 0.7588 0.9733 0.6316 0.9752 0.9791 0.7891 1.0000 0.9724 0.8442 0.8312 0.9128 0.8448 0.8504 0.9773 0.6100 0.6544 0.6092 0.9600 | 1706.02515#346 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 347 | Table A12: UCI comparison reporting the average rank of a method on 75 classiï¬cation task of the UCI machine learning repository with less than 1000 data points. For each dataset, the 24 compared methods, were ranked by their accuracy and the ranks were averaged across the tasks. The ï¬rst column gives the method group, the second the method, the third the average rank , and the last the p-value of a paired Wilcoxon test whether the difference to the best performing method is signiï¬cant. SNNs are ranked third having been outperformed by Random Forests and SVMs. | 1706.02515#347 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 348 | methodGroup method avg. rank p-value SVM RandomForest SNN LMR NeuralNetworks MARS MSRAinit LayerNorm Highway DiscriminantAnalysis mda_R Boosting Bagging ResNet BatchNorm Rule-based WeightNorm DecisionTree OtherEnsembles Nearest Neighbour OtherMethods PLSR Bayesian GLM Stacking LibSVM_weka RRFglobal_caret SNN SimpleLogistic_weka lvq_caret gcvEarth_caret MSRAinit LayerNorm Highway LogitBoost_weka ctreeBag_R ResNet BatchNorm JRip_caret WeightNorm rpart2_caret Dagging_weka NNge_weka pam_caret simpls_R NaiveBayes_weka bayesglm_caret Stacking_weka 9.3 9.6 9.6 9.9 10.1 10.7 11.0 11.3 11.5 11.8 11.9 12.1 12.3 12.6 12.9 13.0 13.6 13.9 14.0 14.2 14.3 14.6 15.0 20.9 2.5e-01 3.8e-01 1.5e-01 1.0e-01 3.6e-02 4.0e-02 7.2e-02 | 1706.02515#348 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 350 | 90
Table A13: UCI comparison reporting the average rank of a method on 46 classiï¬cation task of the UCI machine learning repository with more than 1000 data points. For each dataset, the 24 compared methods, were ranked by their accuracy and the ranks were averaged across the tasks. The ï¬rst column gives the method group, the second the method, the third the average rank , and the last the p-value of a paired Wilcoxon test whether the difference to the best performing method is signiï¬cant. SNNs are ranked ï¬rst having outperformed diverse machine learning methods and other FNNs. | 1706.02515#350 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 351 | methodGroup method avg. rank p-value SNN SVM RandomForest MSRAinit LayerNorm Highway ResNet WeightNorm BatchNorm MARS Boosting LMR Rule-based Bagging DiscriminantAnalysis mda_R Nearest Neighbour DecisionTree OtherEnsembles NeuralNetworks Bayesian OtherMethods GLM PLSR Stacking SNN LibSVM_weka RRFglobal_caret MSRAinit LayerNorm Highway ResNet WeightNorm BatchNorm gcvEarth_caret LogitBoost_weka SimpleLogistic_weka JRip_caret ctreeBag_R NNge_weka rpart2_caret Dagging_weka lvq_caret NaiveBayes_weka pam_caret bayesglm_caret simpls_R Stacking_weka 5.8 6.1 6.6 7.1 7.2 7.9 8.4 8.7 9.7 9.9 12.1 12.4 12.4 13.5 13.9 14.1 15.5 16.1 16.3 17.9 18.3 18.7 19.0 22.5 5.8e-01 2.1e-01 4.5e-03 7.1e-02 1.7e-03 1.7e-04 5.5e-04 | 1706.02515#351 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 353 | 91
# A4.3 Tox21 challenge data set: Hyperparameters
For the Tox21 data set, the best hyperparameter setting was determined by a grid-search over all hyperparameter combinations using the validation set deï¬ned by the challenge winners [28]. The hyperparameter space was chosen to be similar to the hyperparameters that were tested by Mayr et al. [28]. The early stopping parameter was determined on the smoothed learning curves of 100 epochs of the validation set. Smoothing was done using moving averages of 10 consecutive values. We tested ârectangularâ and âconicâ layers â rectangular layers have constant number of hidden units in each layer, conic layers start with the given number of hidden units in the ï¬rst layer and then decrease the number of hidden units to the size of the output layer according to the geometric progession. All methods had the chance to adjust their hyperparameters to the data set at hand.
Table A14: Hyperparameters considered for self-normalizing networks in the Tox21 data set. | 1706.02515#353 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 354 | Table A14: Hyperparameters considered for self-normalizing networks in the Tox21 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Dropout rate Layer form L2 regularization parameter {1024, 2048} {2,3,4,6,8,16,32} {0.01, 0.05, 0.1} {0.05, 0.10} {rectangular, conic} {0.001,0.0001,0.00001}
Table A15: Hyperparameters considered for ReLU networks with MS initialization in the Tox21 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Dropout rate Layer form L2 regularization parameter {1024, 2048} {2,3,4,6,8,16,32} {0.01, 0.05, 0.1} {0.5, 0} {rectangular, conic} {0.001,0.0001,0.00001}
Table A16: Hyperparameters considered for batch normalized networks in the Tox21 data set. | 1706.02515#354 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 355 | Table A16: Hyperparameters considered for batch normalized networks in the Tox21 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Normalization Layer form L2 regularization parameter {1024, 2048} {2, 3, 4, 6, 8, 16, 32} {0.01, 0.05, 0.1} {Batchnorm} {rectangular, conic} {0.001,0.0001,0.00001}
92
Table A17: Hyperparameters considered for weight normalized networks in the Tox21 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Normalization Dropout rate Layer form L2 regularization parameter {1024, 2048} {2, 3, 4, 6, 8, 16, 32} {0.01, 0.05, 0.1} {Weightnorm} {0, 0.5} {rectangular, conic} {0.001,0.0001,0.00001}
Table A18: Hyperparameters considered for layer normalized networks in the Tox21 data set. | 1706.02515#355 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 356 | Table A18: Hyperparameters considered for layer normalized networks in the Tox21 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Normalization Dropout rate Layer form L2 regularization parameter {1024, 2048} {2, 3, 4, 6, 8, 16, 32} {0.01, 0.05, 0.1} {Layernorm} {0, 0.5} {rectangular, conic} {0.001,0.0001,0.00001}
Table A19: Hyperparameters considered for Highway networks in the Tox21 data set.
Hyperparameter Considered values Number of hidden layers Learning rate Dropout rate L2 regularization parameter {2, 3, 4, 6, 8, 16, 32} {0.01, 0.05, 0.1} {0, 0.5} {0.001,0.0001,0.00001}
Table A20: Hyperparameters considered for Residual networks in the Tox21 data set. | 1706.02515#356 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 357 | Table A20: Hyperparameters considered for Residual networks in the Tox21 data set.
Hyperparameter Considered values Number of blocks Number of neurons per blocks Block form Bottleneck Learning rate L2 regularization parameter {2, 3, 4, 6, 8, 16} {1024, 2048} {rectangular, diavolo} {25%, 50%} {0.01, 0.05, 0.1} {0.001,0.0001,0.00001}
93
density network inputs network inputs
# density
Figure A8: Distribution of network inputs of an SNN for the Tox21 data set. The plots show the distribution of network inputs z of the second layer of a typical Tox21 network. The red curves display a kernel density estimator of the network inputs and the black curve is the density of a standard normal distribution. Left panel: At initialization time before learning. The distribution of network inputs is close to a standard normal distribution. Right panel: After 40 epochs of learning. The distributions of network inputs is close to a normal distribution.
Distribution of network inputs. We empirically checked the assumption that the distribution of network inputs can well be approximated by a normal distribution. To this end, we investigated the density of the network inputs before and during learning and found that these density are close to normal distributions (see Figure A8).
94 | 1706.02515#357 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 358 | 94
# A4.4 HTRU2 data set: Hyperparameters
For the HTRU2 data set, the best hyperparameter setting was determined by a grid-search over all hyperparameter combinations using one of the 9 non-testing folds as validation fold in a nested cross-validation procedure. Concretely, if M was the testing fold, we used M â 1 as validation fold, and for M = 1 we used fold 10 for validation. The early stopping parameter was determined on the smoothed learning curves of 100 epochs of the validation set. Smoothing was done using moving averages of 10 consecutive values. We tested ârectangularâ and âconicâ layers â rectangular layers have constant number of hidden units in each layer, conic layers start with the given number of hidden units in the ï¬rst layer and then decrease the number of hidden units to the size of the output layer according to the geometric progession. All methods had the chance to adjust their hyperparameters to the data set at hand.
Table A21: Hyperparameters considered for self-normalizing networks on the HTRU2 data set. | 1706.02515#358 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 359 | Table A21: Hyperparameters considered for self-normalizing networks on the HTRU2 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Dropout rate Layer form {256, 512, 1024} {2, 4, 8, 16, 32} {0.1, 0.01, 1} { 0, 0.05} {rectangular, conic}
Table A22: Hyperparameters considered for ReLU networks with Microsoft initialization on the HTRU2 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Dropout rate Layer form {256, 512, 1024} {2, 4, 8, 16, 32} {0.1, 0.01, 1} {0, 0.5} {rectangular, conic}
Table A23: Hyperparameters considered for BatchNorm networks on the HTRU2 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Normalization Layer form {256, 512, 1024} {2, 4, 8, 16, 32} {0.1, 0.01, 1} {Batchnorm} {rectangular, conic}
95
Table A24: Hyperparameters considered for WeightNorm networks on the HTRU2 data set. | 1706.02515#359 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 360 | 95
Table A24: Hyperparameters considered for WeightNorm networks on the HTRU2 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Normalization Layer form {256, 512, 1024} {2, 4, 8, 16, 32} {0.1, 0.01, 1} {Weightnorm} {rectangular, conic}
Table A25: Hyperparameters considered for LayerNorm networks on the HTRU2 data set.
Hyperparameter Considered values Number of hidden units Number of hidden layers Learning rate Normalization Layer form {256, 512, 1024} {2, 4, 8, 16, 32} {0.1, 0.01, 1} {Layernorm} {rectangular, conic}
Table A26: Hyperparameters considered for Highway networks on the HTRU2 data set.
Hyperparameter Considered values Number of hidden layers Learning rate Dropout rate {2, 4, 8, 16, 32} {0.1, 0.01, 1} {0, 0.5}
Table A27: Hyperparameters considered for Residual networks on the HTRU2 data set. | 1706.02515#360 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 361 | Table A27: Hyperparameters considered for Residual networks on the HTRU2 data set.
Hyperparameter Considered values Number of hidden units Number of residual blocks Learning rate Block form Bottleneck {256, 512, 1024} {2, 3, 4, 8, 16} {0.1, 0.01, 1} {rectangular, diavolo} {0.25, 0.5}
96
# A5 Other ï¬xed points
A similar analysis with corresponding function domains can be performed for other ï¬xed points, for example for µ = ˵ = 0 and ν = Ëν = 2, which leads to a SELU activation function with parameters α02 = 1.97126 and λ02 = 1.06071.
# A6 Bounds determined by numerical methods
In this section we report bounds on previously discussed expressions as determined by numerical methods (min and max have been computed).
0(µ=0.06,Ï=0,ν=1.35,Ï =1.12) < âJ11 âµ < .00182415(µ=â0.1,Ï=0.1,ν=1.47845,Ï =0.883374) | 1706.02515#361 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 362 | 0.905413(µ=0.1,Ï=â0.1,ν=1.5,Ï =1.25) < â0.0151177(µ=â0.1,Ï=0.1,ν=0.8,Ï =1.25) < â0.015194(µ=â0.1,Ï=0.1,ν=0.8,Ï =1.25) < â0.0151177(µ=â0.1,Ï=0.1,ν=0.8,Ï =1.25) < â0.0151177(µ=0.1,Ï=â0.1,ν=0.8,Ï =1.25) < â0.00785613(µ=0.1,Ï=â0.1,ν=1.5,Ï =1.25) < 0.0799824(µ=0.1,Ï=â0.1,ν=1.5,Ï =1.25) < 0(µ=0.06,Ï=0,ν=1.35,Ï =1.12) < 0.0849308(µ=0.1,Ï=â0.1,ν=0.8,Ï =0.8) < | 1706.02515#362 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 363 | =1.12) < 0.0849308(µ=0.1,Ï=â0.1,ν=0.8,Ï =0.8) < â0.0600823(µ=0.1,Ï=â0.1,ν=0.8,Ï =1.25) < â0.0673083(µ=0.1,Ï=â0.1,ν=1.5,Ï =0.8) < â0.0600823(µ=0.1,Ï=â0.1,ν=0.8,Ï =1.25) < â0.0600823(µ=0.1,Ï=â0.1,ν=0.8,Ï =1.25) < â0.276862(µ=â0.01,Ï=â0.01,ν=0.8,Ï =1.25) < âJ11 âÏ âJ11 âν âJ11 âÏ âJ12 âµ âJ12 âÏ âJ12 âν âJ12 âÏ âJ21 âµ âJ21 âÏ âJ21 âν âJ21 âÏ âJ22 âµ | 1706.02515#363 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 364 | âÏ âJ21 âµ âJ21 âÏ âJ21 âν âJ21 âÏ âJ22 âµ âJ22 âÏ âJ22 âν âJ22 âÏ < 1.04143(µ=0.1,Ï=0.1,ν=0.8,Ï =0.8) < 0.0151177(µ=0.1,Ï=â0.1,ν=0.8,Ï =1.25) < 0.015194(µ=0.1,Ï=â0.1,ν=0.8,Ï =1.25) < 0.0151177(µ=0.1,Ï=â0.1,ν=0.8,Ï =1.25) < 0.0151177(µ=0.1,Ï=â0.1,ν=0.8,Ï =1.25) < 0.0315805(µ=0.1,Ï=0.1,ν=0.8,Ï =0.8) < 0.110267(µ=â0.1,Ï=0.1,ν=0.8,Ï =0.8) < | 1706.02515#364 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 368 | 97
OFu Or OP2 Ou Oia Ow OP2 Ov Oia Or Oa Ou Oa Ow Oa Ov Oa Or OFe2 Ou OFe2 Ow OFe2 Ov OFe2 Or <_ 0.015194(0.03749149348255419) <_ 0.0151177(0.031242911235461816) <_ 0.0151177(0.031242911235461816) <_ 0.0315805(0.21232788238624354) an <_ 0.110267(0.2124377655377270) <_0.0174802(0.02220441024325437) <_ 0.695766(1.146955401845684) <_ 0.0600823(0.14983446469110305) <_ 0.0673083(0.17980135762932363) <_ 0.0600823(0.14983446469110305) <_ 0.0600823(0.14983446469110305) <_ 0.562302(1.805740052651535) < 0.664051 (2.396685907216327)
# A7 References | 1706.02515#368 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 369 | # A7 References
[1] Abramowitz, M. and Stegun, I. (1964). Handbook of Mathematical Functions, volume 55 of Applied Mathematics Series. National Bureau of Standards, 10th edition.
[2] Ba, J. L., Kiros, J. R., and Hinton, G. (2016). Layer normalization. arXiv preprint arXiv:1607.06450.
[3] Bengio, Y. (2013). Deep learning of representations: Looking forward. In Proceedings of the First International Conference on Statistical Language and Speech Processing, pages 1â37, Berlin, Heidelberg.
[4] Blinn, J. (1996). Consider the lowly 2Ã2 matrix. IEEE Computer Graphics and Applications, pages 82â88.
[5] Bradley, R. C. (1981). Central limit theorems under weak dependence. Journal of Multivariate Analysis, 11(1):1â16.
[6] Cire¸san, D. and Meier, U. (2015). Multi-column deep neural networks for ofï¬ine handwritten chinese character classiï¬cation. In 2015 International Joint Conference on Neural Networks (IJCNN), pages 1â6. IEEE. | 1706.02515#369 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 370 | [7] Clevert, D.-A., Unterthiner, T., and Hochreiter, S. (2015). Fast and accurate deep network learning by exponential linear units (ELUs). 5th International Conference on Learning Representations, arXiv:1511.07289.
98
[8] Dugan, P., Clark, C., LeCun, Y., and Van Parijs, S. (2016). Phase 4: Dcl system using deep learning approaches for land-based or ship-based real-time recognition and localization of marine mammals-distributed processing and big data applications. arXiv preprint arXiv:1605.00982.
[9] Esteva, A., Kuprel, B., Novoa, R., Ko, J., Swetter, S., Blau, H., and Thrun, S. (2017). Nature, Dermatologist-level classiï¬cation of skin cancer with deep neural networks. 542(7639):115â118.
[10] Fernández-Delgado, M., Cernadas, E., Barro, S., and Amorim, D. (2014). Do we need hundreds of classiï¬ers to solve real world classiï¬cation problems. Journal of Machine Learning Research, 15(1):3133â3181. | 1706.02515#370 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 371 | [11] Goldberg, D. (1991). What every computer scientist should know about ï¬oating-point arithmetic. ACM Comput. Surv., 223(1):5â48.
[12] Graves, A., Mohamed, A., and Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In IEEE International conference on acoustics, speech and signal processing (ICASSP), pages 6645â6649.
[13] Graves, A. and Schmidhuber, J. (2009). Ofï¬ine handwriting recognition with multidimensional recurrent neural networks. In Advances in neural information processing systems, pages 545â552.
[14] Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J., et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 316(22):2402â2410. | 1706.02515#371 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 372 | [15] Harrison, J. (1999). A machine-checked theory of ï¬oating point arithmetic. In Bertot, Y., Dowek, G., Hirschowitz, A., Paulin, C., and Théry, L., editors, Theorem Proving in Higher Order Logics: 12th International Conference, TPHOLsâ99, volume 1690 of Lecture Notes in Computer Science, pages 113â130. Springer-Verlag.
[16] He, K., Zhang, X., Ren, S., and Sun, J. (2015a). Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] He, K., Zhang, X., Ren, S., and Sun, J. (2015b). Delving deep into rectiï¬ers: Surpassing human-level performance on imagenet classiï¬cation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1026â1034.
[18] Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735â1780. | 1706.02515#372 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 373 | [19] Huval, B., Wang, T., Tandon, S., et al. (2015). An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716.
[20] Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448â456.
[21] Kahan, W. (2004). A logarithm too clever by half. Technical report, University of California, Berkeley.
[22] Korolev, V. and Shevtsova, I. (2012). An improvement of the BerryâEsseen inequality with applications to Poisson and mixed Poisson random sums. Scandinavian Actuarial Journal, 2012(2):81â105.
[23] Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). Imagenet classiï¬cation with deep convolu- tional neural networks. In Advances in Neural Information Processing Systems, pages 1097â1105.
[24] LeCun, Y. and Bengio, Y. (1995). Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995. | 1706.02515#373 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 374 | [25] LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436â444.
99
[26] Loosemore, S., Stallman, R. M., McGrath, R., Oram, A., and Drepper, U. (2016). The GNU C Library: Application Fundamentals. GNU Press, Free Software Foundation, 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA, 2.24 edition.
[27] Lyon, R., Stappers, B., Cooper, S., Brooke, J., and Knowles, J. (2016). Fifty years of pulsar candidate selection: From simple ï¬lters to a new principled real-time classiï¬cation approach. Monthly Notices of the Royal Astronomical Society, 459(1):1104â1123.
[28] Mayr, A., Klambauer, G., Unterthiner, T., and Hochreiter, S. (2016). DeepTox: Toxicity prediction using deep learning. Frontiers in Environmental Science, 3:80. | 1706.02515#374 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 375 | [29] Muller, J.-M. (2005). On the deï¬nition of ulp(x). Technical Report Research report RR2005-09, Laboratoire de lâInformatique du Parallélisme.
[30] Ren, C. and MacKenzie, A. R. (2007). Closed-form approximations to the error and comple- mentary error functions and their applications in atmospheric science. Atmos. Sci. Let., pages 70â73.
[31] Sak, H., Senior, A., Rao, K., and Beaufays, F. (2015). Fast and accurate recurrent neural network acoustic models for speech recognition. arXiv preprint arXiv:1507.06947.
[32] Salimans, T. and Kingma, D. P. (2016). Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pages 901â909.
[33] Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61:85â117.
[34] Silver, D., Huang, A., Maddison, C., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484â489. | 1706.02515#375 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 376 | [35] Srivastava, R. K., Greff, K., and Schmidhuber, J. (2015). Training very deep networks. In Advances in Neural Information Processing Systems, pages 2377â2385.
[36] Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104â3112.
[37] Wainberg, M., Alipanahi, B., and Frey, B. J. (2016). Are random forests truly the best classiï¬ers? Journal of Machine Learning Research, 17(110):1â5.
# List of Figures | 1706.02515#376 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 377 | # List of Figures
1 FNN and SNN trainin error curves . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Visualization of the mapping g . . . . . . . . . . . . . . . . . . . . . . . . . . . . A3 Graph of the main subfunction of the derivative of the second moment . . . . . . . erfc(x). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 5 30 37 38 56 57
# List of Tables
Comparison of seven FNNs on 121 UCI tasks . . . . . . . . . . . . . . . . . . . .
Comparison of FNNs at the Tox21 challenge dataset . . . . . . . . . . . . . . . . .
100
8 | 1706.02515#377 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 378 | 3 Comparison of FNNs and reference methods at HTRU2 . . . . . . . . . . . . . . . A4 Hyperparameters considered for self-normalizing networks in the UCI data sets. . . A5 Hyperparameters considered for ReLU networks in the UCI data sets. . . . . . . . A6 Hyperparameters considered for batch normalized networks in the UCI data sets. . A7 Hyperparameters considered for weight normalized networks in the UCI data sets. . A8 Hyperparameters considered for layer normalized networks in the UCI data sets. . . A9 Hyperparameters considered for Highway networks in the UCI data sets. . . . . . . A10 Hyperparameters considered for Residual networks in the UCI data sets. . . . . . . A11 Comparison of FNN methods on all 121 UCI data sets. . . . . . . . . . . . . . . . A12 Method comparison on small UCI data sets . . . . . . . . . . . . . . . . . . . . . A13 Method comparison on large UCI data sets . . . . . . . . . . . . . . . . . . . . . . . 91 A14 | 1706.02515#378 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 379 | . . A13 Method comparison on large UCI data sets . . . . . . . . . . . . . . . . . . . . . . . 91 A14 Hyperparameters considered for self-normalizing networks in the Tox21 data set. . A15 Hyperparameters considered for ReLU networks in the Tox21 data set. . . . . . . . A16 Hyperparameters considered for batch normalized networks in the Tox21 data set. . A17 Hyperparameters considered for weight normalized networks in the Tox21 data set. A18 Hyperparameters considered for layer normalized networks in the Tox21 data set. . A19 Hyperparameters considered for Highway networks in the Tox21 data set. . . . . . A20 Hyperparameters considered for Residual networks in the Tox21 data set. . . . . . A21 Hyperparameters considered for self-normalizing networks on the HTRU2 data set. A22 Hyperparameters considered for ReLU networks on the HTRU2 data set. . . . . . . A23 Hyperparameters considered for BatchNorm networks on the HTRU2 data set. . . . A24 Hyperparameters considered for WeightNorm networks on the HTRU2 data | 1706.02515#379 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02515 | 381 | A27 Hyperparameters considered for Residual networks on the HTRU2 data set. . . . . 96
101
# Brief index
Abramowitz bounds, 37
Banach Fixed Point Theorem, 13 bounds derivatives of Jacobian entries, 21 Jacobian entries, 23 mean and variance, 24 singular value, 25, 27
central limit theorem, 6 complementary error function bounds, 37 deï¬nition, 37 computer-assisted proof, 33 contracting variance, 29
deï¬nitions, 2 domain singular value, 19 Theorem 1, 12 Theorem 2, 12 Theorem 3, 13 dropout, 6
erf, 37 erfc, 37 error function bounds, 37 deï¬nition, 37 properties, 39 expanding variance, 32 experiments, 7, 85 astronomy, 8 HTRU2, 8, 95 hyperparameters, 95 methods compared, 7 Tox21, 7, 92 hyperparameters, 8, 92 UCI, 7, 85 details, 85 hyperparameters, 85 results, 86
initialization, 6
Jacobian, 20 bounds, 23 deï¬nition, 20 derivatives, 21 entries, 20, 23 singular value, 21 singular value bound, 25
lemmata, 19 Jacobian bound, 19
mapping g, 2, 4
deï¬nition, 11 mapping in domain, 29
# ee | 1706.02515#381 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1705.10528 | 0 | 7 1 0 2
y a M 0 3 ] G L . s c [
1 v 8 2 5 0 1 . 5 0 7 1 : v i X r a
# Constrained Policy Optimization
# Joshua Achiam 1 David Held 1 Aviv Tamar 1 Pieter Abbeel 1 2
# Abstract
For many applications of reinforcement learn- ing it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function. For example, systems that physically interact with or around humans should satisfy safety constraints. Recent advances in policy search algorithms (Mnih et al., 2016; Schulman et al., 2015; Lillicrap et al., 2016; Levine et al., 2016) have enabled new capabilities in high- dimensional control, but do not consider the con- strained setting. | 1705.10528#0 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 0 | # Low Impact Artiï¬cial Intelligences
Stuart Armstrongâ1,2 and Benjamin Levinsteinâ 1
7 1 0 2
1The Future of Humanity Institute, Faculty of Philosophy, University of Oxford, Suite 1, Littlegate House, 16/17 St Ebbes Street, Oxford OX1 1PT UK 2Machine Intelligence Research Institute, 2030 Addison Street #300, Berkeley, CA 94704
y a M 0 3 ] I A . s c [
2015
# Abstract
There are many goals for an AI that could become dangerous if the AI becomes superintelligent or otherwise powerful. Much work on the AI control problem has been focused on constructing AI goals that are safe even for such AIs. This paper looks at an alternative approach: deï¬ning a general concept of âlow impactâ. The aim is to ensure that a powerful AI which implements low impact will not modify the world extensively, even if it is given a simple or dangerous goal. The paper proposes various ways of deï¬ning and grounding low impact, and discusses methods for ensuring that the AI can still be allowed to have a (desired) impact despite the restriction. The end of the paper addresses known issues with this approach and avenues for future research.
1 v 0 2 7 0 1 . 5 0 7 1 : v i X r a
Keywords: low impact, AI, motivation, value, control
1
# 1 Introduction | 1705.10720#0 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 1 | In reinforcement learning (RL), agents learn to act by trial and error, gradually improving their performance at the task as learning progresses. Recent work in deep RL as- sumes that agents are free to explore any behavior during learning, so long as it leads to performance improvement. In many realistic domains, however, it may be unacceptable to give an agent complete freedom. Consider, for example, an industrial robot arm learning to assemble a new product in a factory. Some behaviors could cause it to damage it- self or the plant around itâor worse, take actions that are harmful to people working nearby. In domains like this, safe exploration for RL agents is important (Moldovan & Abbeel, 2012; Amodei et al., 2016). A natural way to in- corporate safety is via constraints. | 1705.10528#1 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 1 | 1 v 0 2 7 0 1 . 5 0 7 1 : v i X r a
Keywords: low impact, AI, motivation, value, control
1
# 1 Introduction
Imagine an artiï¬cial intelligence that has been given a goal such as âmake pa- perclipsâ, âï¬lter spam in this accountâ, or âcure this personâs cancerâ. If this AI is not very powerful, it is likely to attempt to achieve its goals in the ways we intend: improving industrial production, analysing and selectively ï¬ltering in- coming messages, or looking for compounds able to diï¬erentially attack cancer cells.
If the AI becomes very powerful, however, these goals all become problematic [Bos14]. The goal âmake paperclipsâ is perfectly compatible with a world in which the AI expands across the Earth, taking control of its resources to start an intense mass production of paperclips, while starting to launch colonisation projects for the other planets to use their resources for the same purposes, and so on. In fact, a naive version of the goal âmake paperclipsâ mandates such actions. Similarly, âï¬lter spamâ is compatible with shutting down the internet entirely, | 1705.10720#1 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 2 | We propose Constrained Policy Optimization (CPO), the ï¬rst general-purpose policy search al- gorithm for constrained reinforcement learning with guarantees for near-constraint satisfaction at each iteration. Our method allows us to train neu- ral network policies for high-dimensional control while making guarantees about policy behavior all throughout training. Our guarantees are based on a new theoretical result, which is of indepen- dent interest: we prove a bound relating the ex- pected returns of two policies to an average diver- gence between them. We demonstrate the effec- tiveness of our approach on simulated robot lo- comotion tasks where the agent must satisfy con- straints motivated by safety.
# 1. Introduction
A standard and well-studied formulation for reinforcement learning with constraints is the constrained Markov Deci- sion Process (CMDP) framework (Altman, 1999), where agents must satisfy constraints on expectations of auxil- liary costs. Although optimal policies for ï¬nite CMDPs with known models can be obtained by linear program- ming, methods for high-dimensional control are lacking. | 1705.10528#2 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 2 | âEmail: [email protected]; Corresponding author â Email: [email protected]
1
and âcure this personâs cancerâ is compatible with killing her and destroying all the cells in her body.
There are several proposed approaches to combat this issue. The most stan- dard is to add something to the goal, ï¬eshing it out so that it includes safety components (âand donât kill anyone, or inadvertently cause their deaths, or...â). As the AIâs power increases, its potential inï¬uence over the world increases as well, and the safety components need to be ï¬eshed out ever more (â...and donât imprison people, or cause a loss of happiness or perceived liberty or free will, or...â). The âFriendly AIâ approach aims roughly to specify these safety com- ponents in as much speciï¬c detail as possible [Yud08]. Other approaches aim to instil the these components via implicit or explicit learning and feedback [Dew11, Arm15]. | 1705.10720#2 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 3 | Currently, policy search algorithms enjoy state-of-the- art performance on high-dimensional control tasks (Mnih et al., 2016; Duan et al., 2016). Heuristic algorithms for policy search in CMDPs have been proposed (Uchibe & Doya, 2007), and approaches based on primal-dual meth- ods can be shown to converge to constraint-satisfying poli- cies (Chow et al., 2015), but there is currently no approach for policy search in continuous CMDPs that guarantees ev- ery policy during learning will satisfy constraints. In this work, we propose the ï¬rst such algorithm, allowing appli- cations to constrained deep RL.
Recently, deep reinforcement learning has enabled neural network policies to achieve state-of-the-art performance on many high-dimensional control tasks, including Atari games (using pixels as inputs) (Mnih et al., 2015; 2016), robot locomotion and manipulation (Schulman et al., 2015; Levine et al., 2016; Lillicrap et al., 2016), and even Go at the human grandmaster level (Silver et al., 2016).
1UC Berkeley 2OpenAI. Correspondence to: Joshua Achiam <[email protected]>.
Proceedings of the 34 th International Conference on Machine Learning, Sydney, Australia, 2017. JMLR: W&CP. Copyright 2017 by the author(s). | 1705.10528#3 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 3 | This paper takes a diï¬erent tack. Instead of specifying the safety compo- nents, it aims to ensure AI has a low impact on the world. Given this low impact, many otherwise unsafe goals become safe even with a very powerful AI. Such an AI would manufacture a few more paperclips/ï¬lter a few more messages/kill a few cancer cells, but would otherwise not take any disruptive action.
The ï¬rst challenge is, of course, to actually deï¬ne low impact. Any action (or inaction) has repercussions that percolate through the future light-cone, changing things subtly but irreversibly. It is hard to capture the intuitive human idea of âa small changeâ. | 1705.10720#3 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 4 | Proceedings of the 34 th International Conference on Machine Learning, Sydney, Australia, 2017. JMLR: W&CP. Copyright 2017 by the author(s).
Driving our approach is a new theoretical result that bounds the difference between the rewards or costs of two differ- ent policies. This result, which is of independent interest, tightens known bounds for policy search using trust regions (Kakade & Langford, 2002; Pirotta et al., 2013; Schulman et al., 2015), and provides a tighter connection between the theory and practice of policy search for deep RL. Here, we use this result to derive a policy improvement step that guarantees both an increase in reward and satisfaction of constraints on other costs. This step forms the basis for our algorithm, Constrained Policy Optimization (CPO), which
Constrained Policy Optimization
computes an approximation to the theoretically-justiï¬ed update.
In our experiments, we show that CPO can train neural network policies with thousands of parameters on high- dimensional simulated robot locomotion tasks to maximize rewards while successfully enforcing constraints. | 1705.10528#4 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 4 | There are a few intuitive ways in which an action can have a low impact, though, which we examine in some depth in Section 3. For example, if we can describe the universe in terms of a huge number of disparate but well-chosen variables and the action has little impact on their values, then it was not of high impact. We can also assess whether knowing the action is particularly âimportantâ in terms of predicting the future, or whether we can see if the actions are likely to be detectable at a later date. If the action is such that any diï¬erence to the universe is lost in entropy or absorbed into a chaotic and unpredictable process, it certainly has a low impact. Finally, we can also abstractly compare the features of probability distributions of future worlds given the action or not. The second challenge, tackled in Section 4, is to ï¬gure out how to ensure that the AIâs impact is not too low â that we can still get useful work out of the AI, without risking a larger or negative impact. Although low impact seems to preclude any action of signiï¬cance on the part of the AI, there are a number of ways around this limitation. Unlike the bad AI impacts that we are trying | 1705.10720#4 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 5 | In our experiments, we show that CPO can train neural network policies with thousands of parameters on high- dimensional simulated robot locomotion tasks to maximize rewards while successfully enforcing constraints.
sâ given that the previous state was s and the agent took action a in s), and 4 : S -+ [0,1] is the starting state distribution. A stationary policy 7 : S â P(A) is a map from states to probability distributions over actions, with m(a|s) denoting the probability of selecting action a in state s. We denote the set of all stationary policies by II.
# 2. Related Work
Safety has long been a topic of interest in RL research, and a comprehensive overview of safety in RL was given by (Garc´ıa & Fern´andez, 2015). | 1705.10528#5 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10528 | 6 | Safety has long been a topic of interest in RL research, and a comprehensive overview of safety in RL was given by (Garc´ıa & Fern´andez, 2015).
Safe policy search methods have been proposed in prior work. Uchibe and Doya (2007) gave a policy gradient al- gorithm that uses gradient projection to enforce active con- straints, but this approach suffers from an inability to pre- vent a policy from becoming unsafe in the ï¬rst place. Bou Ammar et al. (2015) propose a theoretically-motivated pol- icy gradient method for lifelong learning with safety con- straints, but their method involves an expensive inner loop optimization of a semi-deï¬nite program, making it unsuited for the deep RL setting. Their method also assumes that safety constraints are linear in policy parameters, which is limiting. Chow et al. (2015) propose a primal-dual sub- gradient method for risk-constrained reinforcement learn- ing which takes policy gradient steps on an objective that trades off return with risk, while simultaneously learning the trade-off coefï¬cients (dual variables). | 1705.10528#6 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 6 | The ï¬nal, brief, section looks at some of the problems and unresolved issues with the current setup â and hence the directions for future research.
# 2 The General Framework
# 2.1 The penalty function
Although determining what exactly counts as âimpactâ will be a thorny issue, we can nonetheless characterise the approach abstractly. The basic idea is that
2
the AI has some active goal, such as cure cancer or ï¬lter spam, but it wants to pursue this goal without changing the world in any important way. We can then describe its utility function as follows:
U = u â µR. (1)
The function u is a standard utility function that gives the AI its active goal. The function R is the penalty function, penalising the AI for having a large impact. The number µ is some scaling factor, setting the importance of low impact relative to the AIâs active goal u.
In order to prevent the AI accepting a large R penalty in exchange for a large u gain, we will want to deï¬ne a bounded u, such that performance close to the maximum bound is not too diï¬cult to obtain. There is no such bound on R, of course: the more impact the AI has, the more it gets penalised.1
# 2.2 Deï¬ning the alternatives | 1705.10720#6 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 7 | In reinforcement learning, we aim to select a policy 7 which maximizes a performance measure, J(7), which is typically taken to be the infinite horizon discounted to- tal return, J(r) = E [oo 7'R(se,@e, 8r41)]. Here T.T 7 ⬠[0,1) is the discount factor, 7 denotes a trajectory (7 = (80,40, 81,-.-)), and 7 ~ 7 is shorthand for indi- cating that the distribution over trajectories depends on 7: 80 © Hs a ~ T(-|S84), Sega ~ P(-|S2, a2).
Letting R(Ï ) denote the discounted return of a trajec- . tory, we express the on-policy value function as V Ï(s) = EÏ â¼Ï[R(Ï )|s0 = s] and the on-policy action-value func- . tion as QÏ(s, a) = EÏ â¼Ï[R(Ï )|s0 = s, a0 = a]. The advantage function is AÏ(s, a)
Also of interest is the discounted future state distribution, dâ¢, defined by dâ¢(s) = (1-7) Deg 7â P(s: = 8|7). Ital- lows us to compactly express the difference in performance between two policies 7â, 7 as | 1705.10528#7 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 7 | # 2.2 Deï¬ning the alternatives
To deï¬ne low impact, we ï¬rst will need a baseline for comparison. What is a low impact, as opposed to a non-low one? The most natural alternative, and the one weâll use, is the world in which the AI is never successfully turned on in the ï¬rst place, or, to be more precise: some prior probability distribution P over the set of worlds W conditional on the AI not having been turned on. An AI that was never turned on is assumed to have very low impact; the behaviour of the active AI is compared with this baseline.
For such a distribution to make sense, weâll assume the turning on of the AI does not occur with probability 1. For instance, we can make it depend on a signal going through a speciï¬c wire, which goes through some noisy environment â say, an unstable gas â that has a tiny chance of interrupting the signal. If the signal is interrupted, the AI is not turned on. | 1705.10720#7 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 8 | 1 "(s,a (m7) = men [A(s,a)], qd) ann J(n')-âJ
Some approaches speciï¬cally focus on application to the deep RL setting. Held et al. (2017) study the problem for robotic manipulation, but the assumptions they make re- strict the applicability of their methods. Lipton et al. (2017) use an âintrinsic fearâ heuristic, as opposed to constraints, to motivate agents to avoid rare but catastrophic events. Shalev-Shwartz et al. (2016) avoid the problem of enforc- ing constraints on parametrized policies by decomposing âdesiresâ from trajectory planning; the neural network pol- icy learns desires for behavior, while the trajectory plan- ning algorithm (which is not learned) selects ï¬nal behavior and enforces safety constraints.
In contrast to prior work, our method is the ï¬rst policy search algorithm for CMDPs that both 1) guarantees con- straint satisfaction throughout training, and 2) works for arbitrary policy classes (including neural networks).
where by a ~ 7â, we mean a ~ z/(-|s), with explicit notation dropped to reduce clutter. For proof of (1), see (Kakade & Langford, 2002) or Section 10 in the supple- mentary material. | 1705.10528#8 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 8 | Designate the event of the signal passing through the wire as X (which then turns on the AI), and the signal being blocked as ¬X (which then leaves the AI oï¬). We then want the AI to have a low impact on the world, with a low impact being informally deï¬ned as the world, conditional on X, being ânot too dissimilarâ from the world, conditional on ¬X. The purpose of this paper is to make this deï¬nition rigorous2.
Since conditional probability does not depend on the probability of the event being conditioned on, we would want the probability of X to be almost 1 â the AI is almost certain to be turned on. This is consistent with the fact that, having built an AI, the designers would certainly want to make use of it3.
1Note that equation (1), if it works as intended, makes the AI relatively transparent to humans. The active goal u can be simple, while the intent of low impact is clear. | 1705.10720#8 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 9 | # 4. Constrained Markov Decision Processes
A constrained Markov decision process (CMDP) is an MDP augmented with constraints that restrict the set of al- lowable policies for that MDP. Specifically, we augment the MDP with a set C of auxiliary cost functions, C1, ...,Cm (with each one a function C; : S x A x S â R map- ping transition tuples to costs, like the usual reward), and limits dj,...,dm. Let Jo,(m) denote the expected dis- counted return of policy 7 with respect to cost function Ci: Jom) = am Doro Y'Ci(se, a2, $141)]- The set of feasible stationary policies for a CMDP is then
# Ie
. = {Ï â Î : âi, JCi(Ï) ⤠di} ,
# 3. Preliminaries
and the reinforcement learning problem in a CMDP is
A Markov decision process (MDP) is a_ tuple, (S,A,R,P,), where S' is the set of states, A is the set of actions, R: S x A x S â R is the reward function, P:SxAxS â [0,1] is the transition probability function (where P(sâ|s, a) is the probability of transitioning to state | 1705.10528#9 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 9 | 1Note that equation (1), if it works as intended, makes the AI relatively transparent to humans. The active goal u can be simple, while the intent of low impact is clear.
2Instead of comparing the world conditional on X with the world conditional on ¬X, it may be desirable to consider various kinds of subjunctive or counterfactual suppositions instead if weâre especially interested in, say, Xâs expected causal consequences and not in its evidential impact. For the sake of simplicity, however, weâll restrict attention to the standard notion of conditional probability instead in this report.
3There are many subtleties here, the main ones of being that if ¬X happens, humans are likely to question their probability estimates, and will likely just try turning it on again. Humans concluding that something is wrong can be dealt with via methods similar to http://lesswrong.com/r/discussion/lw/lxl/humans_get_different_counterfactuals/. Preventing the AI from taking into account future human attempts to turn it on again could be addressed by adapting the methods proposed in http://lesswrong.com/r/discussion/lw/
3
# 2.3 Boxing the AI | 1705.10720#9 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 10 | Ïâ = arg max ÏâÎ C J(Ï).
The choice of optimizing only over stationary policies is justiï¬ed: it has been shown that the set of all optimal poli- cies for a CMDP includes stationary policies, under mild
Constrained Policy Optimization
technical conditions. For a thorough review of CMDPs and CMDP theory, we refer the reader to (Altman, 1999).
case performance and worst-case constraint violation with values that depend on a hyperparameter of the algorithm.
We refer to JCi as a constraint return, or Ci-return for short. Lastly, we deï¬ne on-policy value functions, action- value functions, and advantage functions for the auxiliary costs in analogy to V Ï, QÏ, and AÏ, with Ci replacing R: respectively, we denote these by V Ï Ci
# 5. Constrained Policy Optimization | 1705.10528#10 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 10 | 3
# 2.3 Boxing the AI
One useful precaution is to âboxâ the AI, conï¬ning it to a sealed environment somewhat isolated from the outside world. Unlike other boxing suggestions [ASB12, Bos14, Yam12], the aim here is not to prevent the AI from having an impact by conï¬ning it. Rather, it is to minimise the diï¬erence between worlds with X and ¬X. The box is not to prevent the AI from acting (the deï¬nition of low impact will ensure that), but rather to prevent us from seeing that the AI is active. Thus any visible impact of the AI will come only from via the AIâs decisions, rather than from itâs mere existence.
# 3 Diï¬erent measures of low impact
# 3.1 The fundamental challenge | 1705.10720#10 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 11 | # 5. Constrained Policy Optimization
For large or continuous MDPs, solving for the exact opti- mal policy is intractable due to the curse of dimensionality (Sutton & Barto, 1998). Policy search algorithms approach this problem by searching for the optimal policy within a set Πθ â Î of parametrized policies with parameters θ (for example, neural networks of a ï¬xed architecture). In local policy search (Peters & Schaal, 2008), the policy is iteratively updated by maximizing J(Ï) over a local neigh- borhood of the most recent iterate Ïk:
Ïk+1 = arg max ÏâΠθ J(Ï) s.t. D(Ï, Ïk) ⤠δ, (2)
where D is some distance measure, and 5 > 0 is a step size. When the objective is estimated by linearizing around Tr as J(m,) + g7(0 â O;), g is the policy gradient, and the standard policy gradient update is obtained by choosing D(x, 7%) = ||@ â 9% ||2 (Schulman et al., 2015). | 1705.10528#11 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 11 | # 3 Diï¬erent measures of low impact
# 3.1 The fundamental challenge
The fundamental challenge of developing the low impact approach is that, from a physical point of view, everything done or not done has an impact (a prob- abilistic one, at least), across all possible futures. Therefore the world, given X, must be fundamentally diï¬erent from the world, given ¬X, simply because of the diï¬erence between X and ¬X. Particles will be arranged diï¬erently in these diï¬erent scenarios, and this eï¬ect will compound the further we get from the event.
What weâre really interested in âno important impactâ. It matters little if the atoms that deï¬ne the US president are a little to the left of where they would be otherwise. It matters a lot that the AI doesnât escape from its box, eliminate all humans and optimise the universe in the service of goal u. | 1705.10720#11 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 12 | In local policy search for CMDPs, we additionally require policy iterates to be feasible for the CMDP, so instead of optimizing over Πθ, we optimize over Πθ ⩠ΠC:
Ïk+1 = arg max ÏâΠθ J(Ï) s.t. JCi (Ï) ⤠di i = 1, ..., m (3) D(Ï, Ïk) ⤠δ.
This update is difï¬cult to implement in practice because it requires evaluation of the constraint functions to deter- mine whether a proposed point Ï is feasible. When using sampling to compute policy updates, as is typically done in high-dimensional control (Duan et al., 2016), this requires off-policy evaluation, which is known to be challenging (Jiang & Li, 2015). In this work, we take a different ap- proach, motivated by recent methods for trust region opti- mization (Schulman et al., 2015). | 1705.10528#12 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 12 | A second component of this challenge is that the AI may end up creating an importantly diï¬erent world, simply because some processes such as the weather are chaotic and X versus ¬X change the initial conditions. Yet in these cases, its impact should nonetheless count as low. Suppose, for instance, the upcoming election would be very close. The AI makes breakfast on election day for a capricious and unpredictable voter who ends up voting for Alice instead of Bob in part because he had apricots instead of biscuits to eat. As a result, Alice is elected instead of Bob. Although the eï¬ect of the breakfast decision is large, it ought not be considered âhigh impactâ, since if an election was this close, it could be swung by all sorts of minor eï¬ects. Weâll therefore investigate probabilistic approaches: what sorts of changes can be predicted ex ante if X is true?
# 3.2 Coarse graining: Twenty billion questions | 1705.10720#12 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 13 | To prove the performance guarantees associated with our surrogates, we ï¬rst prove new bounds on the difference in returns (or constraint returns) between two arbitrary stochastic policies in terms of an average divergence be- tween them. We then show how our bounds permit a new analysis of trust region methods in general: speciï¬cally, we prove a worst-case performance degradation at each up- date. We conclude by motivating, presenting, and proving gurantees on our algorithm, Constrained Policy Optimiza- tion (CPO), a trust region method for CMDPs.
# 5.1. Policy Performance Bounds
In this section, we present the theoretical foundation for our approachâa new bound on the difference in returns between two arbitrary policies. This result, which is of in- dependent interest, extends the works of (Kakade & Lang- ford, 2002), (Pirotta et al., 2013), and (Schulman et al., 2015), providing tighter bounds. As we show later, it also relates the theoretical bounds for trust region policy im- provement with the actual trust region algorithms that have been demonstrated to be successful in practice (Duan et al., 2016). In the context of constrained policy search, we later use our results to propose policy updates that both improve the expected return and satisfy constraints. | 1705.10528#13 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 13 | # 3.2 Coarse graining: Twenty billion questions
One way to solve the fundamental challenge is ï¬rst to ï¬nd a way of âcoarse- grainingâ the set of worlds. That is, we partition the set of worlds into small cells, and any two elements of the same cell count as equivalent for our purposes. Generally these cells will be determined by the values of certain variables or characteristics. We can then measure impact in terms of the AIâs expected eï¬ects over this partition. Thus if A is any element of this partition, we typically
luy/acaucal_trade_barriers/. But a detailed discussion of this is beyond the scope of this current report.
4
expect the probabilities P (A|X) and P (A|¬X) to be close.4 | 1705.10720#13 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 14 | The following theorem connects the difference in returns (or constraint returns) between two arbitrary policies to an average divergence between them. Theorem 1. For any function f : S > R and any policies w and nm, define 57(s,a, 8â) = R(s,a, sâ) + yf(s') â f(s), os = max |Eavnâ,s'~Pl5¢ (8, a, 8â)]| ,
os = max |Eavnâ,s'~Pl5¢ (8, a, 8â)]| ,
Ly p(tâ) = Er (aS - 1) 5y(s,a, â| _ and s'nP
Las) 5295 1-7 ~ (9) where Dry (x'||7)[s] = (1/2) 0, |7â(als) â r(als)| is the total variational divergence between action distribu- tions at s. The following bounds hold: Dz ,(x') = EB. [Drv(a'llnisl]; | 1705.10528#14 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 14 | 4
expect the probabilities P (A|X) and P (A|¬X) to be close.4
The purpose of coarse graining is to deï¬ne the world in suï¬cient detail that the AI cannot have a large impact without disrupting most of those details. The characteristics used must be as broad and as diverse as possible, making it impossible for the AI to grain great power without disrupting some of them.5 For instance, we could use the air pressure in Dhaka, the average night-time lu- minosity at the South Pole, the rotational speed of Io, and the closing numbers of the Shanghai stock exchange. To be conï¬dent that we can suï¬ciently con- strain a super-intelligent AI, weâll need millions if not billions of these variables, separating the universes into uncountable numbers of diï¬erent cells6. | 1705.10720#14 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 15 | We develop a principled approximation to (3) with a par- ticular choice of D, where we replace the objective and constraints with surrogate functions. The surrogates we choose are easy to estimate from samples collected on Ïk, and are good local approximations for the objective and constraints. Our theoretical analysis shows that for our choices of surrogates, we can bound our updateâs worstDy y(n") > I(a") â I(m) > Dz y(n"). (4)
Furthermore, the bounds are tight (when r' = 7, all three expressions are identically zero).
Before proceeding, we connect this result to prior work. By bounding the expectation E,.a* [Drv (xâ||7)[s]] with max, Dry (nâ||7)[s], picking f = Vâ, and bounding ef.
Constrained Policy Optimization
to get a second factor of max, Dry (xâ||7)[s], we recover (up to assumption-dependent factors) the bounds given by Pirotta et al. (2013) as Corollary 3.6, and by Schulman et al. (2015) as Theorem la.
Corollary 3. In bounds (4), (5), and (6), make the substi- tution | 1705.10528#15 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 15 | Of course, we have to take care in deciding which characteristics to use. This procedure doesnât work if some variables are too directly connected with X or ¬X. The electricity consumption of the AI, for example, cannot be a variable, but the electricity consumption of the entire county could be if the AIâs usage is likely to be lost in the noise. But we can hope that for âlarge scaleâ issues, that the universe is âroughlyâ unchanged given X and ¬X.
For any world w, we can deï¬ne a world vector Vw which is the values taken by the billions of chosen variables7. We can then make use of vectors to coarse grain the space of worlds, deï¬ning an equivalence relation:
# v â¼= w iï¬ Vv = Vw
The cells are the equivalence classes of this relation. This allows us to make statements like âthe probability of w is equal whether or not the AI was turned onâ. Without the course graining, P (w|X) = 0 or P (w|¬X) = 0, depending on whether or not w contained the AI. But with the coarse graining, the statement becomes:
P (Vw|X) = P (Vw|¬X) | 1705.10720#15 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 16 | Corollary 3. In bounds (4), (5), and (6), make the substi- tution
1 B [Drv (aly) + EB, (acl snd*⢠2 swdâ¢
The choice of f = Vâ allows a useful form of the lower bound, so we give it as a corollary. Corollary 1. For any policies nâ,7, with oa maxs |Ea.xâ[A7(s,a)]|,
. =
The resulting bounds hold.
# 5.2. Trust Region Methods
â
# Hn!) I(x)
1 2 5 > ae B.A ee 3 1c âDrv(@'lnyfsl] - D ant
Trust region algorithms for reinforcement learning (Schul- man et al., 2015; 2016) have policy updates of the form
Ïk+1 = arg max ÏâΠθ E sâ¼dÏk aâ¼Ï [AÏk (s, a)] (8)
s.t. ¯DKL(Ï||Ïk) ⤠δ, | 1705.10528#16 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 16 | P (Vw|X) = P (Vw|¬X)
And, as long as those variable values are possible given X and given ¬X, the above formula makes sense.
Then we can formulate low impact as some sort of measure of the diï¬erence between the expected worlds given X and ¬X. The lâ norm, for instance, could work:
R = max w |P (Vw|X) â P (Vw|¬X)| (2)
The âboxâ deï¬ned by this norm is illustrated in ï¬gure 1. âSofterâ versions of this maximum norm could work as well.
4Note that the AIâs limitations make this approach easier. If the AI has diï¬culty predicting A precisely (maybe because A is chaotic or because of the fundamental entropy limitations on information), then those two probabilities are likely to be closer than otherwise. This helps to distinguish small inadvertent impacts the AI might have from the larger deliberate ones we want to avoid. | 1705.10720#16 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 17 | s.t. ¯DKL(Ï||Ïk) ⤠δ,
The bound (5) should be compared with equation (1). The term (1 â y)~'Eswa*,axn/[A"(s, )] in (5) is an approxi- mation to J(zâ) â J(7), using the state distribution dâ in- stead of dâ¢â, which is known to equal .J(â) â J(z) to first order in the parameters of 7â on a neighborhood around 7 (Kakade & Langford, 2002). The bound can therefore be viewed as describing the worst-case approximation error, and it justifies using the approximation as a surrogate for I(x") â Jn).
Equivalent expressions for the auxiliary costs, based on the upper bound, also follow immediately; we will later use them to make guarantees for the safety of CPO. Corollary 2. For any policies x',x, and any cost func- tion Cj, with &, = maxs |Eavxâ[AZ, (s, @)]|, ing bound holds: | 1705.10528#17 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 17 | 5Itâs important to note that an extremely powerful AI could potentially gain control of the world and set the characteristics to whatever values it wants. Therefore, the purpose of the coarse graining approach is to catch the AI before it initiates sweeping changes to the world. 6Though not uncountable in the sense of an uncountable inï¬nity, the number of cells is exponential in the number of characteristics/variables, and is thus easily larger than the number of particles in the reachable universe. Therefore, as we add variables, the diï¬culty for the AI to outsmart us and ï¬nd an unintended high-impact way of maximising u should grow fairly rapidly, since all constraints must be met simultaneously.
7Continuous variables will need a sensible binning.
5
(HIGH vim â seace) aX: No AI / SAFE ZONE Pam a
Figure 1: The many variables deï¬ne a âboxâ in high dimensional space, which includes the ¬X point where there is no AI. The AI must maneuver within this box, which hopefully should reside wholly within the safe zone. | 1705.10720#17 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 18 | where ¯DKL(Ï||Ïk) = Esâ¼Ïk [DKL(Ï||Ïk)[s]], and δ > 0 is the step size. The set {Ïθ â Πθ : ¯DKL(Ï||Ïk) ⤠δ} is called the trust region.
The primary motivation for this update is that it is an ap- proximation to optimizing the lower bound on policy per- formance given in (5), which would guarantee monotonic performance improvements. This is important for opti- mizing neural network policies, which are known to suffer from performance collapse after bad updates (Duan et al., 2016). Despite the approximation, trust region steps usu- ally give monotonic improvements (Schulman et al., 2015; Duan et al., 2016) and have shown state-of-the-art perfor- mance in the deep RL setting (Duan et al., 2016; Gu et al., 2017), making the approach appealing for developing pol- icy search methods for CMDPs.
To.(a') â Jo,(m) 1 Qe i, Spay Be Aes) + Ty Prv (a'llnyls] ant (6) | 1705.10528#18 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 18 | An important feature of the approach is that we have no need to measure these billion variables precisely. They are simply used to deï¬ne the coarse graining, with the AI then trying to get its best estimate of R. Nor do they need to perfectly deï¬ne the variables: the closing numbers of the Shanghai stock exchange could be deï¬ned as a speciï¬c output along a speciï¬c wire. In worlds where the stock exchange didnât exist, that input would correspond to something else entirely â maybe blank signals, maybe noise. But this shouldnât matter, as it still coarse grains âworlds with stock exchanges emitting along this wire, given Xâ together in the same cell with âworlds with stock exchanges emitting along this wire, given ¬Xâ and similarly for worlds without8.
# 3.3 The importance of knowing X | 1705.10720#18 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 19 | To.(a') â Jo,(m) 1 Qe i, Spay Be Aes) + Ty Prv (a'llnyls] ant (6)
The bounds we have given so far are in terms of the TV-divergence between policies, but trust region methods constrain the KL-divergence between policies, so bounds that connect performance to the KL-divergence are de- sirable. We make the connection through Pinskerâs in- equality (Csiszar & K6rner, 1981): for arbitrary distribu- tions p, q, the TV-divergence and KL-divergence are related by Drv (pllq) < WDxx(p||q)/2. Combining this with Jensenâs inequality, we obtain
E, [Drv(e'lln)isl] <_B, ; Dt < 5,8, [Pxeta'llaisl]
From (7) we immediately obtain the following.
(7) | 1705.10528#19 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 19 | # 3.3 The importance of knowing X
A related option is to measure speciï¬cally the informational value of the AI on things we might care about. The intuitive idea is that if the AI is high impact, then knowing whether or not the AI was turned on tells you a lot more about the future of the world than another generic piece of information. We can then set the penalty R to measure the importance of X/¬X with respect to other pieces of evidence. If the AI has truly low impact, then if presented with a large number of facts about the world, X will not stand out as being more informative than other facts.
How would we measure importance? One approach is to look at a large set U of diï¬erent utility functions, and check that knowing X makes little expected diï¬erence for any of them9.
8Note that this is in contrast with more traditional AI approaches, where the terms have to be correctly deï¬ned [Yud08] or grounded [Har90]. The purpose is not to have the meaning of each variable deï¬ned precisely, but to capture so many variables that they limit the AIâs ability to have large impacts.
9One method for choosing U is to have the AI or some other superintelligence select a class of utility functions that it thinks is likely to contain the set of actual or potential human
6 | 1705.10720#19 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 20 | From (7) we immediately obtain the following.
(7)
Until now, the particular choice of trust region for (8) was heuristically motivated; with (5) and Corollary 3, we are able to show that it is principled and comes with a worst- case performance degradation guarantee that depends on δ. Proposition 1 (Trust Region Update Performance). Sup- pose Ïk, Ïk+1 are related by (8), and that Ïk â Πθ. A lower bound on the policy performance difference between Ïk and Ïk+1 is
â
âV25ye7H1 _ \j> Yee J) 2 Ga I(Te41) (9)
2 Ga |Egvn,,, [Aâ¢(s,a)]].# where eâ¢*+1 = maxs
Proof. Ïk is a feasible point of (8) with objective value 0, so Esâ¼dÏk ,aâ¼Ïk+1 [AÏk (s, a)] ⥠0. The rest follows by (5) and Corollary 3, noting that (8) bounds the average KL- divergence by δ. | 1705.10528#20 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 20 | 9One method for choosing U is to have the AI or some other superintelligence select a class of utility functions that it thinks is likely to contain the set of actual or potential human
6
When measuring importance, we could either check the expected diï¬erence of knowing X, or the expected importance of other facts, conditional on knowing X. More formally, let F be a large set of facts compatible with both X and ¬X, and deï¬ne:
R= max _|E(w'|SUX) - E(w'|S U7X)| ul CU,SCF
# 3.4 Undetectable means unimpactful?
This sections looks at another way of deï¬ning low impact: undetectability. If, during the 19th century, there was an inhabitant of London, and there now remains no record whatsoever of their existence, it is likely that they had a very low impact. Presume that we are given a partial description of them, and asked, from the point of view of today and all the evidence we have, whether they existed or not. If we really cannot tell either way, this seems a deï¬nitional example of low impact: their potential impact is literally lost in the noise of the world10. | 1705.10720#20 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 21 | This result is useful for two reasons: 1) it is of independent interest, as it helps tighten the connection between theory and practice for deep RL, and 2) the choice to develop CPO as a trust region method means that CPO inherits this per- formance guarantee.
Constrained Policy Optimization
# 5.3. Trust Region Optimization for Constrained MDPs
# 6. Practical Implementation
Constrained policy optimization (CPO), which we present and justify in this section, is a policy search algorithm for CMDPs with updates that approximately solve (3) with a particular choice of D. First, we describe a policy search update for CMDPs that alleviates the issue of off-policy evaluation, and comes with guarantees of monotonic per- formance improvement and constraint satisfaction. Then, because the theoretically guaranteed update will take too- small steps in practice, we propose CPO as a practical ap- proximation based on trust region methods. | 1705.10528#21 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 21 | How could this be formalised? Suppose that the AI uses a probability esti- mator Pâ. This estimator is bounded by practical concer: s any real-world agent can only have bounded rationality. Its unbounded equivalent is P. And these estimators have some level of reflection [CYHBT3], so Pâ can express some estimates over the outcome of Pâ (self-reflection) and P (modelling of a superior rational agent). Let b designate background information about the world (prior to X), and let G designate the set of events in a slice S' of event X/>Xâs future light-cone â maybe the slice 50 (time-like) years in the future.
Then what we would want is that P(g|X,b) = P(g|7X,b) for g ⬠G â that the likely future is identical whether or not the AI is turned on or not (as assessed by the unbounded version of the agent, with access to all of $). If we designate the expectation estimator derived from Pâ as Eâ, this reduces to wanting:
R=E'(\P(g|X,b) â P(g|>X,)|) = 0. | 1705.10720#21 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 22 | In this section, we show how to implement an approxima- tion to the update (10) that can be efï¬ciently computed, even when optimizing policies with thousands of parame- ters. To address the issue of approximation and sampling errors that arise in practice, as well as the potential viola- tions described by Proposition 2, we also propose to tighten the constraints by constraining upper bounds of the auxil- liary costs, instead of the auxilliary costs themselves.
# 6.1. Approximately Solving the CPO Update
By corollaries 1, 2, and 3, for appropriate coefï¬cients αk, βi k the update
[Aâ¢*(s,a)] â ony/ Deca (allm) Thay =argmax E + ott swvdâ¢k ann AGS, @) 1-9 + Biy/Dici(allm) <d; st. Jo,(t,) + E snd ann | 1705.10528#22 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 22 | R=E'(\P(g|X,b) â P(g|>X,)|) = 0.
This expectation is both over P (the AIâs estimate of its unbounded estimator) and over the likely g â G (the likely future; this also depends on the AIâs policy choices).
This works if the impact of the AI is really lost through entropy, such that even P cannot detect it. However, an unbounded agent with full access to the slice G, might be able to always detect the presence of an AI, no matter how low impact it attempts to have.
utility functions, or some near enough approximation. That is, we want U more or less to contain representations of everything humans may really care about. If the AI does not, in expectation, aï¬ect the elements of U that much, then it counts as low impact in the desired sense of the term. Just as in the coarse graining case, weâd want to add a variety of other utilities to U , in case there are vulnerabilities that have not occurred to us. We of course will need to exclude utility functions that care directly about whether X for this approach to work. | 1705.10720#22 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 23 | is guaranteed to produce policies with monotonically non- decreasing returns that satisfy the original constraints. (Ob- serve that the constraint here is on an upper bound for JCi(Ï) by (6).) The off-policy evaluation issue is allevi- ated, because both the objective and constraints involve ex- pectations over state distributions dÏk , which we presume to have samples from. Because the bounds are tight, the problem is always feasible (as long as Ï0 is feasible). How- ever, the penalties on policy divergence are quite steep for discount factors close to 1, so steps taken with this update might be small.
Inspired by trust region methods, we propose CPO, which uses a trust region instead of penalties on policy divergence to enable larger step sizes:
=e 7 © Tk Tei =argmax E [A"(s,a)] on 1 st. Jo, (tr s.t. Jo, (7) + Toyednn avn [Ati(s,a)] <di Vi Der(a| K) <6. (10) | 1705.10528#23 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 23 | 10Of course, some care must still be taken when we decide how to describe this person. For instance, we canât tell now whether there really was somebody called âJack the Ripperâ who was a serial killer and had exactly 100,000 hairs on his head at midnight on 1 January 1888. The more formal development of this approach below will eliminate this issue.
7
In that situation, we can consider the change in P as itâs fed information from G, one bit at a time. When P has all of G, it will detect the presence of the AI, but before that point, it will have doubts. An AI that optimises the world is likely to leave many traces all over G; one that changes little will leave corresponding few choices. For 0 < p < 1, designate by g? a random sampling of G where we have randomly chosen a proportion p of all the data in G. We can designate a new measure of R: R=min {i (FOR) > 10, ore (FEE) = 10h)
Here R is a measure of how much information P is likely to need before detecting the diï¬erence between X and ¬X (âdiï¬erenceâ being somewhat ar- bitrarily deï¬ned as the expected ratio of probabilities, or inverse ratios, being higher than 10).
# 3.5 Comparing distributions abstractly | 1705.10720#23 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 24 | (10) Because this is a trust region method, it inherits the perfor- mance guarantee of Proposition 1. Furthermore, by corol- laries 2 and 3, we have a performance guarantee for ap- proximate satisfaction of constraints: Proposition 2 (CPO Update Worst-Case Constraint Viola- tion). Suppose Ïk, Ïk+1 are related by (10), and that Πθ in (10) is any set of policies with Ïk â Πθ. An upper bound on the Ci-return of Ïk+1 is
For policies with high-dimensional parameter spaces like to solve di- neural networks, (10) can be impractical rectly because of the computational cost. However, for small step sizes δ, the objective and cost constraints are well-approximated by linearizing around Ïk, and the KL- divergence constraint is well-approximated by second or- der expansion (at Ïk = Ï, the KL-divergence and its gra- dient are both zero). Denoting the gradient of the objective as g, the gradient of constraint i as bi, the Hessian of the . KL-divergence as H, and deï¬ning ci = JCi(Ïk) â di, the approximation to (10) is: | 1705.10528#24 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 24 | # 3.5 Comparing distributions abstractly
To avoid some of the issues and choices needed for the options discussed so far, we can attempt to compare the two conditional probability distributions P (â|X) and P (â|¬X) abstractly. The intuition behind this is that if we have a low impact AI or no AI at all, the probability distribution over future worlds is likely spread far and wide across many diï¬erent options, but if there is a powerful unconstrained AI, the future is likely to be highly optimised causing P (â|X) to take on a diï¬erent shape. We can therefore compare the conditional distributions by looking at various descriptors they might have â means, vari- ances, peaks, and so on. Such descriptors depend on the underlying metric on the set of worlds W, and so ideally these descriptors will be very similar for a wide variety of natural metrics we could choose.
The weakness of this approach is that we have an extremely reduced un- derstanding of what is going on in the comparison, so we donât know if the AI can produce a distribution P (W|X) that is formally similar to P (W|¬X), but in practice is a very concentrated, u-optimised world. However, perhaps with a suï¬ciently robust statistical characterisation of the relevant distributions, this wonât be an issue. | 1705.10720#24 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 25 | θk+1 = arg max θ s.t. gT (θ â θk) ci + bT 1 2 i (θ â θk) ⤠0 i = 1, ..., m (θ â θk)T H(θ â θk) ⤠δ.
(11) Because the Fisher information matrix (FIM) H is al- ways positive semi-deï¬nite (and we will assume it to be positive-deï¬nite in what follows), this optimization prob- lem is convex and, when feasible, can be solved efï¬ciently using duality. (We reserve the case where it is not feasi- . ble for the next subsection.) With B = [b1, ..., bm] and c
oy wt (pT 77-1 T,.,To,\ 1,17, r9 max oy (9 Hog arty +7 Sv) +7 y? v=0
(12) . = BT H â1B. This is a convex where r program in m+1 variables; when the number of constraints is small by comparison to the dimension of θ, this is much easier to solve than (11). If λâ, νâ are a solution to the dual, the solution to the primal is | 1705.10528#25 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.