doi (string) | chunk-id (int64) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1706.02677 | 28 | The halving/doubling algorithm consists of a reduce-scatter collective followed by an allgather. In the first step of reduce-scatter, servers communicate in pairs (rank 0 with 1, 2 with 3, etc.), sending and receiving for different halves of their input buffers. For example, rank 0 sends the second half of its buffer to 1 and receives the first half of the buffer from 1. A reduction over the received data is performed before proceeding to the next step, where the distance to the destination rank is doubled while the data sent and received is halved. After the reduce-scatter phase is finished, each server has a portion of the final reduced vector. | 1706.02677#28 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
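The chunk above (1706.02677#28) describes the halving/doubling allreduce as a reduce-scatter followed by an allgather. The sketch below simulates that communication pattern in a single process, assuming a power-of-two number of ranks; it illustrates the schedule only and is not the Gloo implementation referenced later in the paper.

```python
import numpy as np

def halving_doubling_allreduce(buffers):
    """Single-process simulation of the halving/doubling allreduce: a
    reduce-scatter (distance doubles, data halves each step) followed by an
    allgather that retraces the pattern in reverse. `buffers` is one
    equal-length 1-D array per rank; returns the summed vector every rank ends
    up holding."""
    p, n = len(buffers), len(buffers[0])
    assert p & (p - 1) == 0, "power-of-two ranks only (binary blocks handles the rest)"
    data = [np.asarray(b, dtype=float).copy() for b in buffers]
    own = [(0, n) for _ in range(p)]        # index range each rank is responsible for

    step = 1                                # reduce-scatter phase
    while step < p:
        new_data, new_own = [d.copy() for d in data], list(own)
        for r in range(p):
            partner = r ^ step
            lo, hi = own[r]                 # both ranks in a pair own the same range here
            mid = (lo + hi) // 2
            keep = (lo, mid) if r < partner else (mid, hi)
            # receive the partner's copy of the half we keep and reduce it
            new_data[r][keep[0]:keep[1]] = data[r][keep[0]:keep[1]] + data[partner][keep[0]:keep[1]]
            new_own[r] = keep
        data, own, step = new_data, new_own, step * 2

    step = p // 2                           # allgather phase: reverse the pattern
    while step >= 1:
        new_data, new_own = [d.copy() for d in data], list(own)
        for r in range(p):
            partner = r ^ step
            plo, phi = own[partner]
            new_data[r][plo:phi] = data[partner][plo:phi]   # concatenate partner's piece
            new_own[r] = (min(own[r][0], plo), max(own[r][1], phi))
        data, own, step = new_data, new_own, step // 2
    return data[0]                          # identical on every rank

bufs = [np.arange(8.0) * (r + 1) for r in range(4)]
assert np.allclose(halving_doubling_allreduce(bufs), sum(bufs))
```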
1706.02515 | 29 | at keeping mean and variance to their original values after "alpha dropout", in order to ensure the self-normalizing property even for "alpha dropout". The affine transformation a(xd + α′(1 − d)) + b allows to determine parameters a and b such that mean and variance are kept to their values: E(a(xd + α′(1 − d)) + b) = μ and Var(a(xd + α′(1 − d)) + b) = ν. In contrast to dropout, a and b will depend on μ and ν, however our SNNs converge to activations with zero mean and unit variance. With μ = 0 and ν = 1, we obtain a = (q + α′²q(1 − q))^(−1/2) and b = −(q + α′²q(1 − q))^(−1/2)((1 − q)α′). The parameters a and b only depend on the dropout rate 1 − q and the most negative activation α′. Empirically, we found that dropout rates 1 − q = 0.05 or 0.10 lead to models with good performance. "Alpha dropout" fits well to scaled exponential linear units by randomly setting activations to the negative saturation value. | 1706.02515#29 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
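A minimal numpy sketch of the alpha dropout transformation reconstructed above, assuming the standard SELU constants λ01 ≈ 1.0507 and α01 ≈ 1.6733 (so α′ = −λα ≈ −1.7581). It only checks that zero mean and unit variance are preserved; it is not the authors' reference implementation.

```python
import numpy as np

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772   # lambda_01, alpha_01
ALPHA_P = -LAM * ALPHA                                 # negative saturation value alpha'

def alpha_dropout(x, rate=0.05, rng=None):
    """Set units to the saturation value alpha' with probability `rate`, then
    apply the affine map a*x + b from the chunk above so that zero-mean,
    unit-variance inputs keep zero mean and unit variance."""
    rng = rng or np.random.default_rng(0)
    q = 1.0 - rate                                     # keep probability
    a = (q + ALPHA_P ** 2 * q * (1 - q)) ** -0.5
    b = -a * (1 - q) * ALPHA_P
    d = rng.binomial(1, q, size=x.shape)               # 1 = keep, 0 = drop
    return a * (x * d + ALPHA_P * (1 - d)) + b

x = np.random.default_rng(1).standard_normal(200_000)
y = alpha_dropout(x, rate=0.10)
print(round(y.mean(), 2), round(y.var(), 2))           # both stay near 0 and 1
```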
1706.02633 | 29 | # 1https://eicu-crd.mit.edu/
AUROC AUPRC real TSTR real TSTR random SpO2 < 95 0.9587 ± 0.0004 0.88 ± 0.01 0.9059 ± 0.0005 0.66 ± 0.02 0.16 HR < 70 0.9908 ± 0.0005 0.96 ± 0.01 0.9855 ± 0.0002 0.90 ± 0.02 0.26 HR > 100 0.9919 ± 0.0002 0.95 ± 0.01 0.9778 ± 0.0002 0.84 ± 0.03 0.18 AUROC AUPRC real TSTR real TSTR random RR < 13 0.9735 ± 0.0001 0.86 ± 0.01 0.9557 ± 0.0002 0.73 ± 0.02 0.26 RR > 20 0.963 ± 0.001 0.84 ± 0.02 0.891 ± 0.001 0.50 ± 0.06 0.1 MAP < 70 0.9717 ± 0.0001 0.875 ± 0.007 0.9653 ± 0.0001 0.82 ± 0.02 0.39 MAP > 110 0.960 ± 0.001 0.87 ± 0.04 0.8629 ± 0.0007 0.42 ± 0.07 0.05 | 1706.02633#29 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 29 | This is followed by the allgather phase, which retraces the communication pattern from the reduce-scatter in reverse, this time simply concatenating portions of the final reduced vector. At each server, the portion of the buffer that was being sent in the reduce-scatter is received in the allgather, and the portion that was being received is now sent. To support a non-power-of-two number of servers, we used the binary blocks algorithm [30]. This is a generalized version of the halving/doubling algorithm where servers are partitioned into power-of-two blocks and two additional communication steps are used, one immediately after the intrablock reduce-scatter and one before the intrablock allgather. Non-power-of-two cases have some degree of load imbalance compared to power-of-two, though in our runs we did not see significant performance degradation.
# 4.2. Software
The allreduce algorithms described are implemented in Gloo4, a library for collective communication. It supports
# 4https://github.com/facebookincubator/gloo
multiple communication contexts, which means no additional synchronization is needed to execute multiple allreduce instances in parallel. Local reduction and broadcast (described as phases (1) and (3)) are pipelined with inter-server allreduce where possible. | 1706.02677#29 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 30 | 6
Applicability of the central limit theorem and independence assumption. In the derivative of the mapping, we used the central limit theorem (CLT) to approximate the network inputs z = Σᵢ wᵢxᵢ with a normal distribution. We justified normality because network inputs represent a weighted sum of the inputs xᵢ, where for Deep Learning n is typically large. The Berry-Esseen theorem states that the convergence rate to normality is n^(−1/2) [22]. In the classical version of the CLT, the random variables have to be independent and identically distributed, which typically does not hold for neural networks. However, the Lyapunov CLT does not require the variables to be identically distributed anymore. Furthermore, even under weak dependence, sums of random variables converge in distribution to a Gaussian distribution [5].
# Experiments
We compare SNNs to other deep networks on different benchmarks. Hyperparameters such as the number of layers (blocks), neurons per layer, learning rate, and dropout rate are adjusted by grid search for each dataset on a separate validation set (see Section A4). We compare the following FNN methods:
• "MSRAinit": FNNs without normalization and with ReLU activations and "Microsoft weight initialization" [17]. | 1706.02515#30 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
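A quick numerical illustration of the CLT argument in the chunk above, assuming unit-variance uniform inputs and SNN-style weights with variance 1/n: the Kolmogorov-Smirnov distance of the weighted sum to a Gaussian shrinks as the fan-in n grows, roughly at the n^(−1/2) Berry-Esseen rate.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
for n in (4, 64, 1024):
    w = rng.normal(0.0, np.sqrt(1.0 / n), size=n)                 # weights with variance 1/n
    x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(20_000, n))    # non-Gaussian, unit-variance inputs
    z = x @ w                                                     # network input z = sum_i w_i x_i
    z = (z - z.mean()) / z.std()
    print(n, round(kstest(z, "norm").statistic, 3))               # KS distance to a Gaussian drops with n
```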
Table 2: Performance of a random forest classifier for eICU tasks when trained with real data and when trained with synthetic data (test set is real), including random prediction baselines. AUPRC stands for area under the precision-recall curve, and AUROC stands for area under the ROC curve. Italics denote those tasks whose performance was optimised in cross-validation.
5.1 TSTR TASKS IN EICU
The data generated in an ICU is complex, so it is challenging for non-medical experts to spot patterns or trends in it. Thus, one plot showing synthetic ICU data would not provide enough information to evaluate its actual similarity to the real data. Therefore, we evaluate the performance of the ICU RCGAN using the TSTR method. | 1706.02633#30 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
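A sketch of the TSTR ("train on synthetic, test on real") protocol behind Table 2, using scikit-learn's random forest and the AUROC/AUPRC metrics named in the caption; the flattened feature layout and the function name are assumptions for illustration.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score

def tstr_scores(X_synth, y_synth, X_real_test, y_real_test):
    """Fit a classifier on GAN-generated data only, then score it on a
    held-out real test set (AUROC and AUPRC, as in Table 2)."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_synth, y_synth)
    p = clf.predict_proba(X_real_test)[:, 1]
    return roc_auc_score(y_real_test, p), average_precision_score(y_real_test, p)
```

The "real" columns of Table 2 would come from the same routine with the synthetic training set replaced by real training data.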
Caffe2 supports multi-threaded execution of the compute graph that represents a training iteration. Whenever there is no data dependency between subgraphs, multiple threads can execute those subgraphs in parallel. Applying this to backprop, local gradients can be computed in sequence, without dealing with allreduce or weight updates. This means that during backprop, the set of runnable subgraphs may grow faster than we can execute them. For subgraphs that contain an allreduce run, all servers must choose to execute the same subgraph from the set of runnable subgraphs. Otherwise, we risk distributed deadlock where servers are attempting to execute non-intersecting sets of subgraphs. With allreduce being a collective operation, servers would time out waiting. To ensure correct execution we impose a partial order on these subgraphs. This is implemented using a cyclical control input, where completion of the n-th allreduce unblocks execution of the (n + c)-th allreduce, with c being the maximum number of concurrent allreduce runs. Note that this number should be chosen to be lower than the number of threads used to execute the full compute graph.
# 4.3. Hardware | 1706.02677#30 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
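A threading sketch of the cyclical control input described in the chunk above: allreduce n may only start once allreduce n − c has completed, which keeps every worker executing collectives in the same order with at most c in flight. This illustrates the dependency structure only, not the Caffe2/Gloo mechanism itself.

```python
import threading

def make_ordered_allreduce(num_allreduces, c):
    """Return a runner that enforces the partial order: the n-th allreduce
    waits on completion of the (n - c)-th, and completing it unblocks the
    (n + c)-th. `c` must stay below the number of executor threads."""
    done = [threading.Event() for _ in range(num_allreduces)]

    def run(n, allreduce_fn):
        if n - c >= 0:
            done[n - c].wait()      # cyclical control input from an earlier allreduce
        try:
            return allreduce_fn()
        finally:
            done[n].set()           # completion unblocks the (n + c)-th allreduce
    return run
```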
• "MSRAinit": FNNs without normalization and with ReLU activations and "Microsoft weight initialization" [17].
• "BatchNorm": FNNs with batch normalization [20].
• "LayerNorm": FNNs with layer normalization [2].
• "WeightNorm": FNNs with weight normalization [32].
• "Highway": Highway networks [35].
• "ResNet": Residual networks [16] adapted to FNNs using residual blocks with 2 or 3 layers with rectangular or diavolo shape.
• "SNNs": Self-normalizing networks with SELUs with α = α01 and λ = λ01 and the proposed dropout technique and initialization strategy. | 1706.02515#31 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
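For reference, a small numpy sketch of the pieces the "SNNs" entry above refers to: the SELU activation with the fixed-point constants (λ01, α01) and the proposed zero-mean, variance-1/n weight initialization. The constants are the published values; the helper names are ours.

```python
import numpy as np

LAM_01, ALPHA_01 = 1.0507009873554805, 1.6732632423543772

def selu(x, lam=LAM_01, alpha=ALPHA_01):
    """Scaled exponential linear unit with the self-normalizing parameters."""
    x = np.asarray(x, dtype=float)
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def snn_init(fan_in, fan_out, rng=None):
    """Initialization strategy for SNNs: zero-mean Gaussian weights with
    variance 1 / fan_in, keeping unit-variance inputs in the fixed-point regime."""
    rng = rng or np.random.default_rng(0)
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))
```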
To perform the TSTR evaluation, we need a supervised task (or tasks) on the data. A relevant question in the ICU is whether or not a patient will become "critical" in the near future - a kind of early warning system. For a model generating dynamic time-series data, this is especially appropriate, as trends in the data are likely most predictive. Based on our four variables (SpO2, HR, RR, MAP) we define "critical thresholds" and generate binary labels of whether or not that variable will exceed the threshold in the next hour of the patient's stay - that is, between hour 4 and 5, since we consider the first four hours "observed". The thresholds are shown in the columns of Table 2. There is no upper threshold for SpO2, as it is a percentage with 100% denoting ideal conditions.
As for MNIST, we "sample" labels by drawing them from the real data labels, and use these as conditioning inputs for the RCGAN. This ensures the label distribution in the synthetic dataset and the real dataset is the same, respecting the fact that the labels are not independent (a patient is unlikely to simultaneously suffer from high and low blood pressure). | 1706.02633#31 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
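A sketch of how the early-warning labels described above can be derived, assuming a hypothetical signals[patient, minute, channel] array covering the first five hours of a stay with channels ordered (SpO2, HR, RR, MAP); only the thresholds come from Table 2.

```python
import numpy as np

THRESHOLDS = [("SpO2", "<", 95), ("HR", "<", 70), ("HR", ">", 100),
              ("RR", "<", 13), ("RR", ">", 20), ("MAP", "<", 70), ("MAP", ">", 110)]
CHANNEL = {"SpO2": 0, "HR": 1, "RR": 2, "MAP": 3}

def make_labels(signals, minutes_per_hour=60):
    """Binary target per task: 1 if the signal crosses its critical threshold
    anywhere between hour 4 and hour 5 (the first 4 hours are the observed input)."""
    future = signals[:, 4 * minutes_per_hour:5 * minutes_per_hour, :]
    labels = {}
    for name, op, thr in THRESHOLDS:
        ch = future[:, :, CHANNEL[name]]
        hit = ch < thr if op == "<" else ch > thr
        labels[f"{name} {op} {thr}"] = hit.any(axis=1).astype(int)
    return labels
```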
1706.02677 | 31 | # 4.3. Hardware
We used Facebook's Big Basin [24] GPU servers for our experiments. Each server contains 8 NVIDIA Tesla P100 GPUs that are interconnected with NVIDIA NVLink. For local storage, each server has 3.2TB of NVMe SSDs. For network connectivity, the servers have a Mellanox ConnectX-4 50Gbit Ethernet network card and are connected to Wedge100 [1] Ethernet switches.
We have found 50Gbit of network bandwidth sufficient for distributed synchronous SGD for ResNet-50, per the following analysis. ResNet-50 has approximately 25 million parameters. This means the total size of parameters is 25 · 10⁶ · sizeof(float) = 100MB. Backprop for ResNet-50 on a single NVIDIA Tesla P100 GPU takes 120 ms. Given that allreduce requires ~2× bytes on the network compared to the value it operates on, this leads to a peak bandwidth requirement of 200MB/0.125s = 1600MB/s, or 12.8 Gbit/s, not taking into account communication overhead. When we add a smudge factor for network overhead, we reach a peak bandwidth requirement for ResNet-50 of ~15 Gbit/s. | 1706.02677#31 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
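The bandwidth estimate in the chunk above reduces to a few lines of arithmetic (numbers taken from the text; the 120 ms backprop time is rounded to 0.125 s there):

```python
param_bytes = 25e6 * 4              # ~25M float32 parameters -> ~100 MB of gradients
wire_bytes = 2 * param_bytes        # allreduce moves ~2x the buffer it operates on
backprop_s = 0.125                  # backprop time used in the text's estimate
peak_gbit_s = wire_bytes * 8 / backprop_s / 1e9
print(peak_gbit_s)                  # 12.8 Gbit/s; ~15 Gbit/s once overhead is added
```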
• "SNNs": Self-normalizing networks with SELUs with α = α01 and λ = λ01 and the proposed dropout technique and initialization strategy.
121 UCI Machine Learning Repository datasets. The benchmark comprises 121 classification datasets from the UCI Machine Learning repository [10] from diverse application areas, such as physics, geology, or biology. The size of the datasets ranges between 10 and 130,000 data points and the number of features from 4 to 250. In the abovementioned work [10], there were methodological mistakes [37] which we avoided here. Each compared FNN method was optimized with respect to its architecture and hyperparameters on a validation set that was then removed from the subsequent analysis. The selected hyperparameters served to evaluate the methods in terms of accuracy on the pre-defined test sets (details on the hyperparameter selection are given in Section A4). The accuracies are reported in Table A11. We ranked the methods by their accuracy for each prediction task and compared their average ranks. SNNs significantly outperform all competing networks in pairwise comparisons (paired Wilcoxon test across datasets) as reported in Table 1 (left panel). | 1706.02515#32 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
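A sketch of the evaluation protocol described above, assuming an accuracies matrix with one row per dataset and one column per method: rank the methods on every dataset, average the ranks, and run a paired Wilcoxon test of every method against the best-ranked one (applied here to the paired accuracies).

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def rank_and_test(accuracies):
    """accuracies: (n_datasets, n_methods). Returns average ranks, the index of
    the best method, and paired-Wilcoxon p-values against that best method."""
    ranks = np.vstack([rankdata(-row) for row in accuracies])   # rank 1 = most accurate
    avg_rank = ranks.mean(axis=0)
    best = int(np.argmin(avg_rank))
    pvals = [wilcoxon(accuracies[:, best], accuracies[:, m]).pvalue if m != best else 1.0
             for m in range(accuracies.shape[1])]
    return avg_rank, best, pvals
```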
Following Algorithm 1, we train the RCGAN for 1000 epochs, saving one version of the dataset every 50 epochs. Afterwards, we evaluate the synthetic data using TSTR. We use cross-validation to select the best synthetic dataset based on the classifier performance, but since we assume that it might also be used for unknown tasks, we use only 3 of the 7 tasks of interest to perform this cross-validation step (denoted in italics in Table 2). The results of this experiment are presented in Table 2, which compares the performance achieved by a random forest classifier that has been trained to predict the 7 tasks of interest, in one experiment with real data and in a different experiment with the synthetically generated data.
6
IS THE GAN JUST MEMORISING THE TRAINING DATA? | 1706.02633#32 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
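A sketch of the model-selection loop described above: one synthetic dataset is saved every 50 epochs, and the epoch whose TSTR score (averaged over the three cross-validation tasks only) is highest on real validation data is kept. The dataset layout and names here are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def select_epoch(saved, X_val, y_val, cv_tasks):
    """saved: dict epoch -> (X_synth, {task: y_synth}); y_val: {task: labels}.
    Returns the epoch with the best average TSTR AUROC on the cv tasks."""
    def score(X_synth, y_synth):
        aucs = []
        for task in cv_tasks:
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X_synth, y_synth[task])
            aucs.append(roc_auc_score(y_val[task], clf.predict_proba(X_val)[:, 1]))
        return np.mean(aucs)
    return max(saved, key=lambda epoch: score(*saved[epoch]))
```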
1706.02677 | 32 | As this peak bandwidth requirement only holds during backprop, the network is free to be used for different tasks that are less latency sensitive than aggregation (e.g. reading data or saving network snapshots) during the forward pass.
6
# 5. Main Results and Analysis
Our main result is that we can train ResNet-50 [16] on ImageNet [33] using 256 workers in one hour, while matching the accuracy of small minibatch training. Applying the linear scaling rule along with a warmup strategy allows us to seamlessly scale between small and large minibatches (up to 8k images) without tuning additional hyper-parameters or impacting accuracy. In the following subsections we: (1) describe experimental settings, (2) establish the effectiveness of large minibatch training, (3) perform a deeper experimental analysis, (4) show our findings generalize to object detection/segmentation, and (5) provide timings.
# 5.1. Experimental Settings
The 1000-way ImageNet classification task [33] serves as our main experimental benchmark. Models are trained on the ~1.28 million training images and evaluated by top-1 error on the 50,000 validation images. | 1706.02677#32 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
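The learning-rate recipe in the chunk above fits in a few lines; the function below applies the linear scaling rule η = 0.1 · kn/256 and the 1/10 drops at epochs 30, 60, and 80 (warmup, discussed separately in the paper, is omitted here).

```python
def reference_lr(epoch, k_workers, n_per_worker=32, base_lr=0.1):
    """Linearly scaled reference learning rate with step decays at 30/60/80."""
    lr = base_lr * (k_workers * n_per_worker) / 256.0
    for milestone in (30, 60, 80):
        if epoch >= milestone:
            lr /= 10.0
    return lr

# 256 workers x 32 images/worker -> minibatch 8192 and reference LR 3.2
print(reference_lr(0, 256), reference_lr(35, 256), reference_lr(85, 256))
```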
We further included 17 machine learning methods representing diverse method groups [10] in the comparison and grouped the datasets into "small" and "large" datasets (for details see Section A4). On 75 small datasets with less than 1000 data points, random forests and SVMs outperform SNNs and other FNNs. On 46 larger datasets with at least 1000 data points, SNNs show the highest performance followed by SVMs and random forests (see right panel of Table 1, for complete results see Tables A12 and A13). Overall, SNNs have outperformed state of the art machine learning methods on UCI datasets with more than 1,000 data points.
Typically, hyperparameter selection chose SNN architectures that were much deeper than the selected architectures of other FNNs, with an average depth of 10.8 layers, compared to average depths of 6.0 for BatchNorm, 3.8 for WeightNorm, 7.0 for LayerNorm, 5.9 for Highway, and 7.1 for MSRAinit networks. For ResNet, the average number of blocks was 6.35. SNNs with many more than 4 layers often provide the best predictive accuracies across all neural networks.
Drug discovery: The Tox21 challenge dataset. The Tox21 challenge dataset comprises about 12,000 chemical compounds whose twelve toxic effects have to be predicted based on their chemical
7 | 1706.02515#33 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 33 | 6
IS THE GAN JUST MEMORISING THE TRAINING DATA?
One explanation for the TSTR performance in MNIST and eICU could be that the GAN is simply "memorising" the training data and reproducing it. If this were the case, then the (potentially private) data used to train the GAN would be leaked, raising privacy concerns when used on sensitive medical data. It is key that the training data for the model should not be recoverable by an adversary. In addition, while the typical GAN objective incentivises the generator to reproduce training examples, we hope that it does not overfit to the training data, and learn an implicit distribution which is peaked at training examples, and negligible elsewhere.
To answer this question we perform three tests - one qualitative, two statistical, outlined in the following subsections. While these evaluations are empirical in nature, we still believe that the proposed and tested privacy evaluation measures can be very useful to quickly check privacy properties of RGAN generated data – but without strong privacy guarantees.
6.1 COMPARING THE DISTRIBUTION OF RECONSTRUCTION ERRORS | 1706.02633#33 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
We use the ResNet-50 [16] variant from [12], noting that the stride-2 convolutions are on 3×3 layers instead of on 1×1 layers as in [16]. We use Nesterov momentum [29] with m of 0.9 following [12] but note that standard momentum as was used in [16] is equally effective. We use a weight decay λ of 0.0001 and following [16] we do not apply weight decay on the learnable BN coefficients (namely, γ and β in [19]). In order to keep the training objective fixed, which depends on the BN batch size n as described in §2.3, we use n = 32 throughout, regardless of the overall minibatch size. As in [12], we compute the BN statistics using running average (with momentum 0.9). | 1706.02677#33 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
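A framework-free sketch of the update rule implied by the chunk above (Nesterov momentum m = 0.9, weight decay λ = 1e-4 skipped on BN's γ and β); this is one common formulation of Nesterov SGD, not the Caffe2 operator.

```python
import numpy as np

def sgd_step(w, grad, velocity, lr, momentum=0.9, weight_decay=1e-4, is_bn_param=False):
    """One Nesterov-momentum SGD step; BN's gamma/beta skip the decay term."""
    g = grad if is_bn_param else grad + weight_decay * w
    velocity[:] = momentum * velocity + g       # update the momentum buffer
    w -= lr * (g + momentum * velocity)         # Nesterov-style lookahead update
    return w, velocity
```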
1706.02515 | 34 | Drug discovery: The Tox21 challenge dataset. The Tox21 challenge dataset comprises about 12,000 chemical compounds whose twelve toxic effects have to be predicted based on their chemical
7
Table 1: Left: Comparison of seven FNNs on 121 UCI tasks. We consider the average rank difference to rank 4, which is the average rank of seven methods with random predictions. The first column gives the method, the second the average rank difference, and the last the p-value of a paired Wilcoxon test whether the difference to the best performing method is significant. SNNs significantly outperform all other methods. Right: Comparison of 24 machine learning methods (ML) on the UCI datasets with more than 1000 data points. The first column gives the method, the second the average rank difference to rank 12.5, and the last the p-value of a paired Wilcoxon test whether the difference to the best performing method is significant. Methods that were significantly worse than the best method are marked with "*". The full tables can be found in Table A11, Table A12 and Table A13. SNNs outperform all competing methods. | 1706.02515#34 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 34 | 6.1 COMPARING THE DISTRIBUTION OF RECONSTRUCTION ERRORS
To test if the generated samples look "too similar" to the training set, we could generate a large number of samples and calculate the distance to the nearest neighbour (in the training set) to each generated sample. We could compare the distribution of these distances with those comparing the generated samples and a held-out test set. However, to get an accurate estimate of the distances, we may need to generate many samples, and correspondingly calculate many pairwise distances. Instead, we intentionally generate the nearest neighbour to each training (or test) set point, and then compare the distances.
We generate these nearest neighbours by minimising the reconstruction error between target y and the generated point; Lrecon(y)(Z) = 1 − K(G(Z), y) where K is the RBF kernel described in Section 3.1.1, with bandwidth σ chosen using the median heuristic (Bounliphone et al., 2015). We find Z by minimising the error until approximate convergence (when the gradient norm drops below a threshold). | 1706.02633#34 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
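A sketch of the reconstruction test described above, treating the generator G as a black box and using a finite-difference gradient on the latent code (the paper backpropagates through the RNN generator instead); the RBF bandwidth σ and the stopping threshold are assumed inputs.

```python
import numpy as np

def rbf(x, y, sigma):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def nearest_latent(G, y, z_dim, sigma, lr=0.1, tol=1e-3, max_steps=2000, rng=None):
    """Minimise L_recon(y)(Z) = 1 - K(G(Z), y) over Z, stopping once the
    gradient norm drops below `tol`; returns Z and its reconstruction error."""
    rng = rng or np.random.default_rng(0)
    z, eps = rng.standard_normal(z_dim), 1e-4
    loss = lambda zz: 1.0 - rbf(G(zz), y, sigma)
    for _ in range(max_steps):
        grad = np.array([(loss(z + eps * e) - loss(z - eps * e)) / (2 * eps)
                         for e in np.eye(z_dim)])
        if np.linalg.norm(grad) < tol:
            break
        z -= lr * grad
    return z, loss(z)
```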
All models are trained for 90 epochs regardless of minibatch sizes. We apply the linear scaling rule from §2.1 and use a learning rate of η = 0.1 · kn/256 that is linear in the minibatch size kn. With k = 8 workers (GPUs) and n = 32 samples per worker, η = 0.1 as in [16]. We call this number (0.1 · kn/256) the reference learning rate, and reduce it by 1/10 at the 30-th, 60-th, and 80-th epoch, similar to [16].
We adopt the initialization of [15] for all convolutional layers. The 1000-way fully-connected layer is initialized by drawing weights from a zero-mean Gaussian with standard deviation of 0.01. We have found that although SGD with a small minibatch is not sensitive to initialization due to BN, this is not the case for a substantially large minibatch. Additionally we require an appropriate warmup strategy to avoid optimization difficulties in early training. | 1706.02677#34 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02633 | 35 | We can then ask if we can distinguish the distribution of reconstruction errors for different input data. Specifically, we ask if we can distinguish the distribution of errors between the training set and the test set. The intuition is that if the model has "memorised" training data, it will achieve identifiably lower reconstruction errors than with the test set. We use the Kolmogorov-Smirnov two-sample test to test if these distributions differ. For the RGAN generating sine waves, the p-value is 0.2 ± 0.1, for smooth signals it is 0.09 ± 0.04, and for the MNIST experiment shown in Figure 2b it is 0.38 ± 0.06. For the MNIST trained with RCGAN (TSTR results in Table 1), the p-value is 0.57 ± 0.18. We conclude that the distribution of reconstruction errors is not significantly different between training and test sets in any of these cases, and that the model does not appear to be biased towards reconstructing training set examples.
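The distributional comparison above can be reproduced with a standard two-sample Kolmogorov-Smirnov test; a minimal sketch, where the two error arrays are placeholders standing in for the per-example reconstruction errors described in the text:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Placeholder reconstruction errors for training-set and test-set inputs.
train_errors = rng.gamma(shape=2.0, scale=1.0, size=500)
test_errors = rng.gamma(shape=2.0, scale=1.0, size=500)

statistic, p_value = ks_2samp(train_errors, test_errors)
# A large p-value means the two error distributions cannot be distinguished,
# i.e. no evidence that training examples are reconstructed more easily.
print(f"KS statistic={statistic:.3f}, p-value={p_value:.3f}")
```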
6.2 INTERPOLATION | 1706.02633#35 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 35 | For BN layers, the learnable scaling coefficient γ is initialized to be 1, except for each residual block's last BN where γ is initialized to be 0. Setting γ = 0 in the last BN of each residual block causes the forward/backward signal initially to propagate through the identity shortcut of ResNets, which we found to ease optimization at the start of training. This initialization improves all models but is particularly helpful for large minibatch training as we will show.
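A PyTorch-style sketch of this γ initialization; the assumption that each residual block exposes its last batch-norm layer under the attribute name `bn3` is ours, not a detail from the text:

```python
import torch.nn as nn

def init_batchnorm_gamma(model):
    # Default initialization: gamma = 1, beta = 0 for every BN layer.
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            nn.init.ones_(m.weight)
            nn.init.zeros_(m.bias)
    # Zero the last BN's gamma in each residual block so the block starts out
    # as (roughly) an identity mapping through the shortcut.
    for m in model.modules():
        if hasattr(m, "bn3") and isinstance(m.bn3, nn.BatchNorm2d):
            nn.init.zeros_(m.bn3.weight)
```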
We use scale and aspect ratio data augmentation [36] as in [12]. The network input image is a 224×224 pixel random crop from an augmented image or its horizontal flip. The input image is normalized by the per-color mean and standard deviation, as in [12].
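A sketch of an equivalent input pipeline using torchvision; the exact scale/aspect-ratio ranges of [36] are not spelled out here, so library defaults are used, and the mean/std values are the commonly used ImageNet statistics rather than numbers quoted in the text:

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),        # random crop with scale/aspect-ratio jitter
    transforms.RandomHorizontalFlip(),        # horizontal flip with probability 0.5
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # assumed per-color mean
                         std=[0.229, 0.224, 0.225]),  # assumed per-color std
])
```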
Handling random variation. As models are subject to random variation in training, we compute a model's error rate as the median error of the final 5 epochs. Moreover, we report the mean and standard deviation (std) of the error from 5 independent runs. This gives us more confidence in our results and also provides a measure of model stability. | 1706.02677#35 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 36 | structure. We used the validation sets of the challenge winners for hyperparameter selection (see Section A4) and the challenge test set for performance comparison. We repeated the whole evaluation procedure 5 times to obtain error bars. The results in terms of average AUC are given in Table 2. In 2015, the challenge organized by the US NIH was won by an ensemble of shallow ReLU FNNs which achieved an AUC of 0.846 [28]. Besides FNNs, this ensemble also contained random forests and SVMs. Single SNNs came close with an AUC of 0.845±0.003. The best performing SNNs have 8 layers, compared to the runner-up ReLU networks with layer normalization with 2 and 3 layers. Also, batchnorm and weightnorm networks typically perform best with shallow networks of 2 to 4 layers (Table 2). The deeper the networks, the larger the difference in performance between SNNs and other methods (see columns 5–8 of Table 2). The best performing method is an SNN with 8 layers. | 1706.02515#36 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 36 | 6.2 INTERPOLATION
Suppose that the model has overfit (the implicit distribution is highly peaked in the region of training examples), and most points in latent space map to (or near) training examples. If we take a smooth path in the latent space, we expect that at each point, the corresponding generated sample will have the appearance of the "closest" (in latent space) training example, with little variation until we reach the attractor basin of another training example, at which point the samples switch appearance.
We test this qualitatively as follows: we sample a pair of training examples (we confirm by eye that they don't look "too similar"), and then "back-project" them into the latent space to find the closest corresponding latent point, as described above. We then linearly interpolate between those latent points, and produce samples from the generator at each point. Figure 4 shows an example of this procedure using the "smooth function" dataset. The samples show a clear incremental variation between start and input sequences, contrary to what we would expect if the model had simply memorised the data.
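A minimal sketch of the interpolation step; `generator` and the back-projected latent codes `z_a`, `z_b` are placeholders standing in for the trained RGAN generator and the latent points found above:

```python
import numpy as np

def interpolate_latents(z_a, z_b, num_steps=8):
    """Linearly interpolate between two latent codes (e.g. arrays of shape (seq_len, latent_dim))."""
    return [(1.0 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, num_steps)]

# Usage sketch, with `generator` standing in for the trained RGAN generator:
# interpolated_samples = [generator(z) for z in interpolate_latents(z_a, z_b)]
```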
6.3 COMPARING THE GENERATED SAMPLES | 1706.02633#36 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 36 | The random variation of ImageNet models has generally not been reported in previous work (largely due to resource limitations). We emphasize that ignoring random variation may cause unreliable conclusions, especially if results are from a single trial, or the best of many.
Baseline. Under these settings, we establish a ResNet-50 baseline using k = 8 (8 GPUs in one server) and n = 32 images per worker (minibatch size of kn = 256), as in [16]. Our baseline has a top-1 validation error of 23.60% ±0.12. As a reference, ResNet-50 from fb.resnet.torch [12] has 24.01% error, and that of the original ResNet paper [16] has 24.7% under weaker data augmentation.
# 5.2. Optimization or Generalization Issues? | 1706.02677#36 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02633 | 37 | Rather than using a nearest-neighbours approach (as in Section 6.1), we can use the MMD three-sample test (Bounliphone et al., 2015) to compare the full set of generated samples. With X being the generated samples, Y and Z being the test and training set respectively, we ask if the MMD between X and Y is less than the MMD between X and Z. The test is constructed in this way because we expect that, if the model has memorised the training data, the MMD between the synthetic data and the training data will be significantly lower than the MMD between the synthetic data and test data. In this case, the hypothesis that MMD(synthetic, test) ≤ MMD(synthetic, train) will be false. We are therefore testing (as in Section 6.1) if our null hypothesis (that the model has not memorised the training data) can be rejected; a minimal sketch of the MMD estimate involved is given after this record. The average p-values we observed were: for the eICU data in Section 5.1: 0.40 ± 0.05, for MNIST data in Section 4.3: 0.47 ± 0.16, for sine waves: 0.41 ± 0.07, for smooth signals: 0.07 ± | 1706.02633#37 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
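A minimal NumPy sketch of the RBF-kernel MMD estimate referenced in the preceding record's three-sample comparison; the bandwidth choice and array shapes are assumptions, and the p-value machinery of Bounliphone et al. is not reproduced here:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased MMD^2 estimate with an RBF kernel; X, Y have shape (n_samples, n_features)."""
    def kernel(A, B):
        sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq_dists / (2 * sigma**2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

# Compare synthetic samples against test and training sets (flattened time series):
# closer_to_train = rbf_mmd2(synthetic, train) < rbf_mmd2(synthetic, test)
```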
1706.02677 | 37 | # 5.2. Optimization or Generalization Issues?
We establish our main results on large minibatch training by exploring optimization and generalization behaviors. We will demonstrate that with a proper warmup strategy, large minibatch SGD can both match the training curves of small minibatch SGD and also match the validation error. In other words, in our experiments both optimization and generalization of large minibatch training matches that of small minibatch training. Moreover, in §5.4 we will show that these models exhibit good generalization behavior to the object detection/segmentation transfer tasks, matching the transfer quality of small minibatch models. | 1706.02677#37 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 38 | #layers / #blocks:
method         2            3            4            6            8            16           32
SNN            83.7 ± 0.3   84.4 ± 0.5   84.2 ± 0.4   83.9 ± 0.5   84.5 ± 0.2   83.5 ± 0.5
Batchnorm      80.0 ± 0.5   79.8 ± 1.6   77.2 ± 1.1   77.0 ± 1.7   75.0 ± 0.9   73.7 ± 2.0
WeightNorm     83.7 ± 0.8   82.9 ± 0.8   82.2 ± 0.9   82.5 ± 0.6   81.9 ± 1.2   78.1 ± 1.3
LayerNorm      84.3 ± 0.3   84.3 ± 0.5   84.0 ± 0.2   82.5 ± 0.8   80.9 ± 1.8   78.7
Highway        83.3 ± 0.9   83.0 ± 0.5   82.6 ± 0.9   82.4 ± 0.8   80.3 ± 1.4
MSRAinit       82.7 ± 0.4   81.6 ± 0.9   81.1 ± 1.7   80.6 ± 0.6   80.9 ± 1.1
ResNet         82.2 ± 1.1   80.0 ± 2.0   80.5 ± 1.2   81.2 ± 0.7   81.8 ± 0.6 | 1706.02515#38 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 38 | MNIST data in Section 4.3: 0.47 ± 0.16, for sine waves: 0.41 ± 0.07, for smooth signals: 0.07 ± 0.04, and for the higher-resolution MNIST RGAN experiments in Section 4: 0.59 ± 0.12 (before correction for multiple hypothesis testing). We conclude that we cannot reject the null hypothesis that the MMD between the synthetic set and test set is at most as large as the MMD between the synthetic set and training set, indicating that the synthetic samples do not look more similar to the training set than they do to the test set. | 1706.02633#38 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 38 | For the following results, we use k = 256 and n = 32, which results in a minibatch size kn = 8k (we use '1k' to denote 1024). As discussed, our baseline has a minibatch size of kn = 256 and a reference learning rate of η = 0.1. Applying the linear scaling rule gives η = 3.2 as the reference learning rate for our large minibatch runs. We test three warmup strategies as discussed in §2.2: no warmup, constant warmup with η = 0.1 for 5 epochs, and gradual warmup which starts with η = 0.1 and is linearly increased to η = 3.2 over 5 epochs. All models are trained from scratch and all other hyper-parameters are kept fixed. We emphasize that while better results for any particular minibatch size could be obtained by optimizing hyper-parameters for that case, our goal is to match errors across minibatch sizes by using a general strategy that avoids hyper-parameter tuning for each minibatch size.
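A sketch of the gradual warmup ramp described above, in plain Python; the epoch-level granularity is an assumption, as the text only specifies that the rate increases from 0.1 to 3.2 over 5 epochs:

```python
def warmup_lr(epoch, start_lr=0.1, target_lr=3.2, warmup_epochs=5):
    """Gradual warmup: ramp linearly from start_lr to target_lr over warmup_epochs."""
    if epoch >= warmup_epochs:
        return target_lr
    return start_lr + (epoch / warmup_epochs) * (target_lr - start_lr)

# warmup_lr(0) -> 0.1, warmup_lr(2.5) -> 1.65, warmup_lr(5) -> 3.2;
# after warmup, the schedule hands over to the usual step-decay schedule.
```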
| 1706.02677#38 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02633 | 39 | # 7 TRAINING RGANS WITH DIFFERENTIAL PRIVACY
Although the analyses described in Section 6 indicate that the GAN is not preferentially generating training data points, we are conscious that medical data is often highly sensitive, and that privacy breaches are costly. To move towards stronger guarantees of privacy for synthetic medical data, we investigated the use of a differentially private training procedure for the GAN. Differential privacy is concerned with the influence of the presence or absence of individual records in a database. Intuitively, differential privacy places bounds on the probability of obtaining the same result (in our case, an instance of a trained GAN) given a small perturbation to the underlying dataset. If the training procedure guarantees (ε, δ) differential privacy, then given two 'adjacent' datasets (differing in one record) D, D′, | 1706.02633#39 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 39 |
                              k     n    kn    η     top-1 error (%)
baseline (single server)      8     32   256   0.1   23.60 ±0.12
no warmup, Figure 2a          256   32   8k    3.2   24.84 ±0.37
constant warmup, Figure 2b    256   32   8k    3.2   25.88 ±0.56
gradual warmup, Figure 2c     256   32   8k    3.2   23.74 ±0.09
Table 1. Validation error on ImageNet using ResNet-50 (mean and std computed over 5 trials). We compare the small minibatch model (kn=256) with large minibatch models (kn=8k) with various warmup strategies. Observe that the top-1 validation error for small and large minibatch training (with gradual warmup) is quite close: 23.60% ±0.12 vs. 23.74% ±0.09, respectively.
Training error. Training curves are shown in Figure 2. With no warmup (2a), the training curve for large minibatch of kn = 8k is inferior to training with a small minibatch of kn = 256 across all epochs. A constant warmup strategy (2b) actually degrades results: although the small constant learning rate can decrease error during warmup, the error spikes immediately after and training never fully recovers. | 1706.02677#39 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 40 | Astronomy: Prediction of pulsars in the HTRU2 dataset. For about a decade, machine learning methods have been used to identify pulsars in radio wave signals [27]. Recently, the High Time Resolution Universe Survey (HTRU2) dataset has been released with 1,639 real pulsars and 16,259 spurious signals. Currently, the highest AUC value of a 10-fold cross-validation is 0.976 which has been achieved by Naive Bayes classifiers followed by decision tree C4.5 with 0.949 and SVMs with 0.929. We used eight features constructed by the PulsarFeatureLab as used previously [27]. We assessed the performance of FNNs using 10-fold nested cross-validation, where the hyperparameters were selected in the inner loop on a validation set (for details on the hyperparameter selection see
Section A4). Table 3 reports the results in terms of AUC. SNNs outperform all other methods and have pushed the state-of-the-art to an AUC of 0.98. | 1706.02515#40 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 40 | P[M(D) ∈ S] ≤ e^ε · P[M(D′) ∈ S] + δ   (1) where M(D) is the GAN obtained from training on D, S is any subset of possible outputs of the training procedure (any subset of possible GANs), and the probability P takes into account the randomness in the procedure M(D). Thus, differential privacy requires that the distribution over GANs produced by M must vary 'slowly' as D varies, where ε and δ bound this 'slowness'. Inspired by a recent preprint (Beaulieu-Jones et al., 2017), we apply the differentially private stochastic gradient descent (DP-SGD) algorithm of (Abadi et al., 2016) to the discriminator (as the generator does not 'see' the private data directly). For further details on the algorithm (and the above definition of differential privacy), we refer to (Abadi et al., 2016) and (Dwork et al., 2006). A minimal sketch of the clip-and-noise step behind DP-SGD is given after this record. | 1706.02633#40 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
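A minimal NumPy sketch of the per-example clip-and-noise step at the core of the DP-SGD procedure referenced in the preceding record; the clip norm and noise multiplier are illustrative placeholders, and the (ε, δ) accounting with the moments accountant is omitted:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=2.0, rng=None):
    """per_example_grads: array of shape (batch_size, num_params) for one minibatch."""
    rng = np.random.default_rng(0) if rng is None else rng
    # 1. Clip each example's gradient to a maximum L2 norm of clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # 2. Sum the clipped gradients and add Gaussian noise calibrated to clip_norm.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    # 3. Average over the batch; this noisy gradient is what updates the discriminator.
    return noisy_sum / per_example_grads.shape[0]
```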
1706.02677 | 40 | Our main result is that with gradual warmup, large minibatch training error matches the baseline training curve obtained with small minibatches, see Figure 2c. Although the large minibatch curve starts higher due to the low η in the warmup phase, it catches up shortly thereafter. After about 20 epochs, the small and large minibatch training curves match closely. The comparison between no warmup and gradual warmup suggests that large minibatch sizes are challenged by optimization difficulties in early training and if these difficulties are addressed, the training error and its curve can match a small minibatch baseline closely. | 1706.02677#40 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 41 | Table 3: Comparison of FNNs and reference methods at HTRU2 in terms of AUC. The first, fourth and seventh column give the method, the second, fifth and eighth column the AUC averaged over 10 cross-validation folds, and the third and sixth column the p-value of a paired Wilcoxon test of the AUCs against the best performing method across the 10 folds. FNNs achieve better results than Naive Bayes (NB), C4.5, and SVM. SNNs exhibit the best performance and set a new record.
FNN methods                            FNN methods                            ref. methods
method       AUC               p-value   method      AUC               p-value   method  AUC
SNN          0.9803 ± 0.010              LayerNorm   0.9762* ± 0.011   1.4e-02   NB      0.976
MSRAinit     0.9791 ± 0.010    3.5e-01   BatchNorm   0.9760 ± 0.013    6.5e-02   C4.5    0.946
WeightNorm   0.9786* ± 0.010   2.4e-02   ResNet      0.9753* ± 0.010   6.8e-03   SVM     0.929
Highway      0.9766* ± 0.009   9.8e-03
# Conclusion | 1706.02515#41 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 41 | In practice, DP-SGD operates by clipping per-example gradients and adding noise in batches. This means the signal obtained from any individual example is limited, providing differential privacy. Some privacy budget is 'spent' every time the training procedure calculates gradients for the discriminator, which enables us to evaluate the effective values of ε and δ throughout training. We use the moments accountant method from (Abadi et al., 2016) to track this privacy spending. Finding hyperparameters which yield both acceptable privacy and realistic GAN samples proved challenging. We focused on the MNIST and eICU tasks with RCGAN, using the TSTR evaluation. | 1706.02633#41 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 41 | Validation error. Table 1 shows the validation error for the three warmup strategies. The no-warmup variant has ∼1.2% higher validation error than the baseline which is likely caused by the ∼2.1% increase in training error (Figure 2a), rather than overfitting or other causes for poor generalization. This argument is further supported by our gradual warmup experiment. The gradual warmup variant has a validation error within 0.14% of the baseline (noting that std of these estimates is ∼0.1%). Given that the final training errors (Figure 2c) match nicely in this case, it shows that if the optimization issues are addressed, there is no apparent generalization degradation observed using large minibatch training, even if the minibatch size goes from 256 to 8k.
Finally, Figure 4 shows both the training and validation curves for the large minibatch training with gradual warmup. As can be seen, validation error starts to match the baseline closely after the second learning rate drop; actually, the validation curves can match earlier if BN statistics are recomputed prior to evaluating the error instead of using the running average (see also caption in Figure 4).
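A PyTorch-style sketch of recomputing BN statistics before evaluation, as mentioned above; resetting the running estimates and accumulating a cumulative average over extra forward passes is our choice of mechanism, not a detail given in the text:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def recompute_bn_statistics(model, data_loader, device="cpu"):
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for m in model.modules():
        if isinstance(m, bn_types):
            m.reset_running_stats()
            m.momentum = None          # None -> cumulative average over the passes below
    model.train()                      # BN layers only update running stats in train mode
    for images, _ in data_loader:      # assumes the loader yields (images, labels) pairs
        model(images.to(device))
    model.eval()
```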
[Figure 2 panels: (a) no warmup, (b) constant warmup, (c) gradual warmup] | 1706.02677#41 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 42 | # Conclusion
We have introduced self-normalizing neural networks for which we have proved that neuron activations are pushed towards zero mean and unit variance when propagated through the network. Additionally, for activations not close to unit variance, we have proved an upper and lower bound on the variance mapping. Consequently, SNNs do not face vanishing and exploding gradient problems. Therefore, SNNs work well for architectures with many layers, allow us to introduce a novel regularization scheme, and learn very robustly. On 121 UCI benchmark datasets, SNNs have outperformed other FNNs with and without normalization techniques, such as batch, layer, and weight normalization, or specialized architectures, such as Highway or Residual networks. SNNs also yielded the best results on drug discovery and astronomy tasks. The best performing SNN architectures are typically very deep in contrast to other FNNs.
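For reference, a NumPy sketch of the SELU activation at the heart of SNNs; the constants are the standard fixed-point values (λ ≈ 1.0507, α ≈ 1.6733) associated with zero mean and unit variance:

```python
import numpy as np

LAMBDA_01 = 1.0507009873554805   # lambda for the zero-mean / unit-variance fixed point
ALPHA_01 = 1.6732632423543772    # alpha for the zero-mean / unit-variance fixed point

def selu(x):
    """Scaled exponential linear unit: lambda*x if x > 0, else lambda*alpha*(exp(x)-1)."""
    x = np.asarray(x, dtype=float)
    return LAMBDA_01 * np.where(x > 0.0, x, ALPHA_01 * (np.exp(x) - 1.0))
```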
# Acknowledgments
This work was supported by IWT research grant IWT150865 (Exaptation), H2020 project grant 671555 (ExCAPE), grant IWT135122 (ChemBioBridge), Zalando SE with Research Agreement 01/2016, Audi.JKU Deep Learning Center, Audi Electronic Venture GmbH, and the NVIDIA Corporation.
# References
The references are provided in Section A7.
# Appendix
# Contents
# A1 Background | 1706.02515#42 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 42 | For MNIST, we clipped gradients to 0.05 and added Gaussian noise with mean zero and standard deviation 0.05 × 2. For ε = 1 and δ < 1.8 × 10~%, we achieved an accuracy of 0.75 ± 0.03. Sacrificing more privacy, with ε = 2 and δ < 2.5 × 10−4, the accuracy is 0.77 ± 0.03. These results are far below the performance reported by the non-private GAN (Table 1), highlighting the compounded difficulty of generating a realistic dataset while maintaining privacy. For comparison, in they report an accuracy of 0.95 for training an MNIST classifier (on the full task) on a real dataset in a differentially private manner. (Please note, however, that our GAN model had to solve the more challenging task of modeling digits as a time series.) | 1706.02633#42 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
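The differentially private training described in the chunk above (clip each gradient to a fixed norm, then add zero-mean Gaussian noise scaled by the clipping bound) can be sketched as follows. This is a minimal NumPy illustration of a DP-SGD-style update, not the authors' TensorFlow implementation; the function name, the per-example-gradient input shape, and the toy usage are assumptions made for this example.

```python
import numpy as np

def dp_sanitized_gradient(per_example_grads, clip_norm=0.05, noise_multiplier=2.0, rng=None):
    """Clip each per-example gradient (rows of a (batch, n_params) array) to
    L2 norm <= clip_norm, sum them, add Gaussian noise with std
    clip_norm * noise_multiplier, and return the noisy average."""
    rng = np.random.default_rng(0) if rng is None else rng
    grads = np.asarray(per_example_grads, dtype=float)
    norms = np.linalg.norm(grads, axis=1)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped_sum = (grads * scale[:, None]).sum(axis=0)
    noise = rng.normal(0.0, clip_norm * noise_multiplier, size=clipped_sum.shape)
    return (clipped_sum + noise) / len(grads)

# toy usage: 32 per-example gradients of a 10-parameter model
g = np.random.default_rng(1).normal(size=(32, 10))
print(dp_sanitized_gradient(g).shape)  # (10,)
```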
1706.02633 | 43 | For eICU, the results are shown in Table 3. For this case, we clipped gradients to 0.1 and added noise with standard deviation 0.1 × 2. In surprising contrast to our findings on MNIST, we observe that performance on the eICU tasks remains high with differentially private training, even for a stricter privacy setting (ε = 0.5 and δ < 9.8 x 107%). Visual assessment of samples generated by the differentially-private GAN indicates that while it is prone to producing less-realistic sequences, the mistakes it introduces appear to be unimportant for the tasks we consider. In particular, the DP-GAN produces more extreme-valued sequences, but as the tasks are to predict extreme values, it may be that the most salient part of the sequence is preserved. The possibility of introducing privacy-preserving noise which nonetheless allows for the training of downstream models suggests interesting directions of research in the intersection of privacy and GANs.
# 8 CONCLUSION | 1706.02633#43 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 43 | [Figure 3: nine panels of training error (%) vs. epochs; each panel overlays the kn=256, η=0.1 baseline (23.60%±0.12) with one other minibatch size: kn=128 (η=0.05, 23.49%±0.12), kn=512 (η=0.2, 23.48%±0.09), kn=1k (η=0.4, 23.53%±0.08), kn=2k (η=0.8, 23.49%±0.11), kn=4k (η=1.6, 23.56%±0.12), kn=8k (η=3.2, 23.74%±0.09), and kn=16k.] | 1706.02677#43 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 44 | A3.4.1 Lemmata for proofing Theorem 1 (part 1): Jacobian norm smaller than one
A3.4.2 Lemmata for proofing Theorem 1 (part 2): Mapping within domain
A3.4.3 Lemmata for proofing Theorem 2: The variance is contracting
A3.4.4 Lemmata for proofing Theorem 3: The variance is expanding
A3.4.5 Computer-assisted proof details for main Lemma 12 in Section A3.4.1
A3.4.6 Intermediate Lemmata and Proofs | 1706.02515#44 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 44 | # 8 CONCLUSION
We have described, trained and evaluated a recurrent GAN architecture for generating real-valued sequential data, which we call RGAN. We have additionally developed a conditional variant (RCGAN) to generate synthetic datasets, consisting of real-valued time-series data with associated labels. As this task poses new challenges, we have presented novel solutions to deal with evaluation and questions of privacy. By generating labelled training data (conditioning on the labels and generating the corresponding samples), we can evaluate the quality of the model using the "TSTR technique", where we train a model on the synthetic data, and evaluate it on a real, held-out test set. We have demonstrated this approach using "serialised" multivariate MNIST, and on a dataset of real ICU | 1706.02633#44 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
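The "TSTR technique" described above (train on synthetic, test on real) can be illustrated with a short scikit-learn sketch. The random-forest choice mirrors the classifier reported for the eICU tasks; the array names, binary labels, and the flattening of each time series into a feature vector are assumptions made for this example, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score

def tstr_score(synth_X, synth_y, real_X_test, real_y_test):
    """Train on Synthetic, Test on Real (TSTR): fit a classifier on generated
    (sample, label) pairs and evaluate it on a held-out set of real data."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(synth_X.reshape(len(synth_X), -1), synth_y)  # flatten time series to features
    scores = clf.predict_proba(real_X_test.reshape(len(real_X_test), -1))[:, 1]
    return {"AUROC": roc_auc_score(real_y_test, scores),
            "AUPRC": average_precision_score(real_y_test, scores)}
```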
1706.02633 | 45 | Task        AUROC TSTR (DP)   AUPRC TSTR (DP)   random
SpO2 < 95   0.859 ± 0.004     0.582 ± 0.008     0.16
HR < 70     0.86 ± 0.01       0.77 ± 0.03       0.27
HR > 100    0.90 ± 0.01       0.75 ± 0.03       0.16
RR < 13     0.86 ± 0.01       0.72 ± 0.02       0.26
RR > 20     0.87 ± 0.01       0.48 ± 0.03       0.09
MAP < 70    0.78 ± 0.01       0.705 ± 0.005     0.39
MAP > 110   0.83 ± 0.06       0.26 ± 0.06       0.05
Table 3: Performance of random forest classifier trained on synthetic data generated by differentially private GAN, tested on real data. Compare with Table 2. The epoch from which data is generated was selected using a validation set, considering performance on a subset of the tasks (SpO2 < 95, HR > 100, and RR < 13, denoted in italics). In each replicate, the GAN was trained with (ε, δ) differential privacy for ε = 0.5 and δ < 9.8 x 107°. | 1706.02633#45 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
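For context on the "random" column in Table 3, the AUPRC of a random scorer concentrates around the positive-class prevalence; the snippet below demonstrates this under the assumption that the column indeed reports a random-prediction AUPRC baseline (the 0.16 value is taken from the SpO2 < 95 row).

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
y_true = rng.random(20000) < 0.16   # ~16% positives, as suggested by the SpO2 < 95 row
y_score = rng.random(20000)         # random scores
print(round(average_precision_score(y_true, y_score), 3))  # close to 0.16
```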
1706.02677 | 45 | Figure 3. Training error vs. minibatch size. Training error curves for the 256 minibatch baseline and larger minibatches using gradual warmup and the linear scaling rule. Note how the training curves closely match the baseline (aside from the warmup period) up through 8k minibatches. Validation error (mean±std of 5 runs) is shown in the legend, along with minibatch size kn and reference learning rate η.
Figure 4. Training and validation curves for large minibatch SGD with gradual warmup vs. small minibatch SGD. Both sets of curves match closely after training for sufficient epochs. We note that the BN statistics (for inference only) are computed using running average, which is updated less frequently with a large minibatch and thus is noisier in early training (this explains the larger variation of the validation error in early epochs).
# 5.3. Analysis Experiments | 1706.02677#45 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 46 | # A3 Proofs of the Theorems
# A4 Additional information on experiments
# A5 Other fixed points
# A6 Bounds determined by numerical methods
# A7 References
# List of figures
# List of tables
# Brief index
This appendix is organized as follows: the first section sets the background, definitions, and formulations. The main theorems are presented in the next section. The following section is devoted to the proofs of these theorems. The next section reports additional results and details on the performed computational experiments, such as hyperparameter selection. The last section shows that our theoretical bounds can be confirmed by numerical methods as a sanity check. | 1706.02515#46 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 46 | patients, where models trained on the synthetic dataset achieved performance at times comparable to that of the real data. In domains such as medicine, where privacy concerns hinder the sharing of data, this implies that with refinement of these techniques, models could be developed on synthetic data that are still valuable for real tasks. This could enable the development of synthetic "benchmarking" datasets for medicine (or other sensitive domains), of the kind which have enabled great progress in other areas. We have additionally illustrated that such a synthetic dataset does not pose a major privacy concern or constitute a data leak for the original sensitive training data, and that for stricter privacy guarantees, differential privacy can be used in training the RCGAN with some loss to performance.
# REFERENCES | 1706.02633#46 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 46 | # 5.3. Analysis Experiments
Minibatch size vs. error. Figure 1 (page 1) shows top-1 validation error for models trained with minibatch sizes ranging from 64 to 65536 (64k). For all models we used the linear scaling rule and set the reference learning rate as η = 0.1 · kn/256. For models with kn > 256, we used the gradual warmup strategy, always starting with η = 0.1 and increasing linearly to the reference learning rate after 5 epochs. Figure 1 illustrates that validation error remains stable across a broad range of minibatch sizes, from 64 to 8k, after which it begins to increase. Beyond 64k, training diverges when using the linear learning rate scaling rule.5 | 1706.02677#46 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
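A minimal sketch of the learning-rate policy used in the minibatch-size experiment above: the reference rate follows the linear scaling rule η = 0.1 · kn/256, reached via a linear warmup from 0.1 over the first 5 epochs when kn > 256. The function below is an illustration only (per-epoch granularity is assumed for simplicity, and the later step decays of the full schedule are not shown).

```python
def learning_rate(epoch, kn, base_lr=0.1, ref_batch=256, warmup_epochs=5):
    """Linear scaling rule with gradual warmup: ramp from base_lr to
    base_lr * kn / ref_batch over warmup_epochs, then hold the scaled rate."""
    target = base_lr * kn / ref_batch
    if kn <= ref_batch or epoch >= warmup_epochs:
        return target
    # linear interpolation from base_lr to target during warmup
    return base_lr + (target - base_lr) * epoch / warmup_epochs

# e.g. kn = 8192: warm up from 0.1 to 0.1 * 32 = 3.2 over 5 epochs
print([round(learning_rate(e, 8192), 2) for e in range(7)])
```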
1706.02515 | 47 | The proof of Theorem 1 is based on the Banach fixed point theorem, for which we require (1) a contraction mapping, which is proved in Subsection A3.4.1, and (2) that the mapping stays within its domain, which is proved in Subsection A3.4.2. For part (1), the proof relies on the main Lemma 12, which is a computer-assisted proof, and can be found in Subsection A3.4.1. The validity of the computer-assisted proof is shown in Subsection A3.4.5 by error analysis and the precision of the functions' implementation. The last Subsection A3.4.6 compiles various lemmata with intermediate results that support the proofs of the main lemmata and theorems.
# A1 Background | 1706.02515#47 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 47 | # REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org. | 1706.02633#47 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 47 | Training curves for various minibatch sizes. Each of the nine plots in Figure 3 shows the top-1 training error curve for the 256 minibatch baseline (orange) and a second curve corresponding to different size minibatch (blue). Validation errors are shown in the plot legends. As minibatch size increases, all training curves show some divergence from the baseline at the start of training. However, in the cases where the final validation error closely matches the baseline (kn ≤ 8k), the training curves also closely match after the initial epochs. When the validation errors do not match (kn ≥ 16k), there is a noticeable gap in the training curves for all epochs. This suggests that when comparing a new setting, the training curves can be used as a reliable proxy for success well before training finishes.
Alternative learning rate rules. Table 2a shows results for multiple learning rates. For small minibatches (kn = 256),
5We note that because of the availability of hardware, we simulated distributed training of very large minibatches (≥12k) on a single server by using multiple gradient accumulation steps between SGD updates. We have thoroughly verified that gradient accumulation on a single server yields equivalent results relative to distributed training.
9 | 1706.02677#47 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
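Footnote 5 above simulates very large minibatches on a single server via gradient accumulation. A PyTorch-style sketch of that idea follows; the paper's own system is Caffe2-based, and the function and argument names here are illustrative assumptions.

```python
import torch

def accumulated_step(model, loss_fn, micro_batches, optimizer):
    """One SGD update whose gradient is averaged over several micro-batches,
    emulating a single large minibatch on one machine."""
    optimizer.zero_grad()
    n = len(micro_batches)
    for x, y in micro_batches:
        loss = loss_fn(model(x), y) / n  # scale so accumulated grads average over micro-batches
        loss.backward()                  # .grad fields accumulate across calls
    optimizer.step()
```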
1706.02515 | 48 | # A1 Background
We consider a neural network with activation function f and two consecutive layers that are connected by weight matrix W. Since samples that serve as input to the neural network are chosen according to a distribution, the activations x in the lower layer, the network inputs z = W x, and activations y = f(z) in the higher layer are all random variables. We assume that all units x_i in the lower layer have mean activation µ := E(x_i) and variance of the activation ν := Var(x_i) and a unit y in the higher layer has mean activation µ̃ := E(y) and variance ν̃ := Var(y). Here E(.) denotes the expectation and Var(.) the variance of a random variable. For activation of unit y, we have net input z = w^T x and the scaled exponential linear unit (SELU) activation y = selu(z), with
selu(x) = λx for x > 0, and selu(x) = λ(αe^x − α) for x ≤ 0. (7) | 1706.02515#48 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
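Eq. (7) above defines the SELU activation. Below is a NumPy sketch; the specific numerical values of the constants α01 and λ01 are not stated in the excerpt above and are taken from standard implementations, so treat them as quoted assumptions.

```python
import numpy as np

# Commonly quoted SELU constants (alpha_01, lambda_01)
ALPHA_01 = 1.6732632423543772
LAMBDA_01 = 1.0507009873554805

def selu(x):
    """selu(x) = lambda * x for x > 0, and lambda * (alpha*exp(x) - alpha) otherwise, cf. Eq. (7)."""
    x = np.asarray(x, dtype=float)
    return LAMBDA_01 * np.where(x > 0, x, ALPHA_01 * (np.exp(x) - 1.0))

print(selu([-2.0, 0.0, 2.0]))
```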
1706.02633 | 48 | Martín Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318. ACM, 2016.
Grigory Antipov, Moez Baccouche, and Jean-Luc Dugelay. Face aging with conditional generative adversarial networks. arXiv preprint arXiv:1702.01983, 2017.
Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. In International Conference on Machine Learning, pp. 1120–1128, 2016.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. 26 January 2017.
Brett K. Beaulieu-Jones, Zhiwei Steven Wu, Chris Williams, and Casey S. Greene. Privacy-preserving generative deep neural networks support clinical data sharing. bioRxiv, 2017. doi: 10.1101/159756. URL https://www.biorxiv.org/content/early/2017/07/05/159756. | 1706.02633#48 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 48 | [Figure 5: training error (%) vs. epochs for kn = 256 with η = 0.1 (23.60%±0.12) and η = 0.2 (23.68%±0.09).]
Figure 5. Training curves for small minibatches with different learning rates η. As expected, changing η results in curves that do not match. This is in contrast to changing batch-size (and linearly scaling η), which results in curves that do match, e.g. see Figure 3.
η = 0.1 gives best error but slightly smaller or larger η also work well. When applying the linear scaling rule with a minibatch of 8k images, the optimum error is also achieved with η = 0.1 · 32, showing the successful application of the linear scaling rule. However, in this case results are more sensitive to changing η. In practice we suggest to use a minibatch size that is not close to the breaking point. | 1706.02677#48 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 49 | selu(x) = λx for x > 0, and selu(x) = λ(αe^x − α) for x ≤ 0. (7)
For n units x_i, 1 ≤ i ≤ n in the lower layer and the weight vector w ∈ R^n, we define n times the mean by ω := Σ_{i=1}^n w_i and n times the second moment by τ := Σ_{i=1}^n w_i^2. We define a mapping g from mean µ and variance ν of one layer to the mean µ̃ and variance ν̃ in the next layer:
g : (µ, ν) ↦ (µ̃, ν̃). (8)
For neural networks with scaled exponential linear units, the mean of the activations in the next layer is computed according to
µ̃ = ∫_{−∞}^{0} λα(e^z − 1) p_Gauss(z; µω, √(ντ)) dz + ∫_{0}^{∞} λz p_Gauss(z; µω, √(ντ)) dz, (9)
and the second moment of the activations in the next layer is computed according to
ξ̃ = ∫_{−∞}^{0} λ²α²(e^z − 1)² p_Gauss(z; µω, √(ντ)) dz + ∫_{0}^{∞} λ²z² p_Gauss(z; µω, √(ντ)) dz. (10)
Therefore, the expressions µ̃ and ν̃ have the following form: | 1706.02515#49 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
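The mapping g of Eqs. (8)–(10) can be checked numerically: draw z ~ N(µω, √(ντ)), apply SELU, and read off the next layer's mean and variance. A Monte Carlo sketch follows; the α and λ values are the commonly quoted fixed-point constants and are an assumption here, since the excerpt does not state them.

```python
import numpy as np

def selu(x, lam=1.0507009873554805, alpha=1.6732632423543772):
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def moment_map(mu, nu, omega=0.0, tau=1.0, n_samples=2_000_000, seed=0):
    """Monte Carlo estimate of Eqs. (9)-(10): with z ~ N(mu*omega, sqrt(nu*tau)),
    return the next layer's mean and variance (mu_tilde, nu_tilde)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(mu * omega, np.sqrt(nu * tau), n_samples)
    a = selu(z)
    return a.mean(), a.var()

print(moment_map(0.0, 1.0))  # close to (0, 1) for the normalized case omega=0, tau=1
```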
1706.02633 | 49 | Wacha Bounliphone, Eugene Belilovsky, Matthew B Blaschko, Ioannis Antonoglou, and Arthur Gretton. A test of relative similarity for model selection in generative models. 14 November 2015.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. 12 June 2016.
Edward Choi, Siddharth Biswal, Bradley Malin, Jon Duke, Walter F Stewart, and Jimeng Sun. Generating multi-label discrete electronic health records using generative adversarial networks. 19 March 2017.
Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Eurocrypt, volume 4004, pp. 486–503. Springer, 2006.
Otto Fabius and Joost R van Amersfoort. Variational recurrent auto-encoders. arXiv preprint arXiv:1412.6581, 2014.
Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014(5):2, 2014. | 1706.02633#49 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 49 | Figure 5 shows the training curves of a 256 minibatch using η = 0.1 or 0.2. It shows that changing the learning rate η in general changes the overall shapes of the training curves, even if the final error is similar. Contrasting this result with the success of the linear scaling rule (that can match both the final error and the training curves when minibatch sizes change) may reveal some underlying invariance maintained between small and large minibatches.
We also show two alternative strategies: keeping η fixed at 0.1 or using 0.1 · √32 according to the square root scaling rule that was justified theoretically in [21] on grounds that it scales η by the inverse amount of the reduction in the gradient estimator's standard deviation. For fair comparisons we also use gradual warmup for 0.1 · √32. Both policies work poorly in practice as the results show. | 1706.02677#49 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
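For concreteness, at kn = 8k the two rules discussed above give quite different reference rates; a quick check, assuming the 256-image baseline with η = 0.1:

```python
base_lr, ratio = 0.1, 8192 / 256                    # kn = 8k relative to the 256 baseline
print("linear scaling:", base_lr * ratio)           # 3.2
print("sqrt scaling:  ", round(base_lr * ratio ** 0.5, 3))  # ~0.566
```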
1706.02515 | 50 | Therefore, the expressions µ̃ and ν̃ have the following form:
µ̃(µ, ω, ν, τ, λ, α) = (λ/2) ( −(α + µω) erfc( µω / (√2 √(ντ)) ) + α e^{µω + ντ/2} erfc( (µω + ντ) / (√2 √(ντ)) ) + √(2/π) √(ντ) e^{−(µω)² / (2ντ)} + 2µω ) (11)
ν̃(µ, ω, ν, τ, λ, α) = ξ̃(µ, ω, ν, τ, λ, α) − (µ̃(µ, ω, ν, τ, λ, α))² (12)
ξ̃(µ, ω, ν, τ, λ, α) = (λ²/2) ( ((µω)² + ντ) (2 − erfc( µω / (√2 √(ντ)) )) + α² ( −2 e^{µω + ντ/2} erfc( (µω + ντ) / (√2 √(ντ)) ) + e^{2(µω + ντ)} erfc( (µω + 2ντ) / (√2 √(ντ)) ) + erfc( µω / (√2 √(ντ)) ) ) + √(2/π) µω √(ντ) e^{−(µω)² / (2ντ)} ) (13)
We solve equations Eq. 4 and Eq. 5 for fixed points µ̃ = µ and ν̃ = ν. For a normalized weight vector with ω = 0 and τ = 1 and the fixed point (µ, ν) = (0, 1), we can solve equations Eq. 4 and Eq. 5 for α and λ. We denote the solutions to fixed point (µ, ν) = (0, 1) by α01 and λ01. | 1706.02515#50 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
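The reconstructed expressions (11)–(13) above can be sanity-checked at the fixed point: with ω = 0, τ = 1 and the commonly quoted values of α01 and λ01 (numerical values assumed here, not given in the excerpt), the point (µ, ν) = (0, 1) should map to itself. A SciPy sketch:

```python
import numpy as np
from scipy.special import erfc

ALPHA, LAM = 1.6732632423543772, 1.0507009873554805  # assumed alpha_01, lambda_01

def mu_tilde(mu, omega, nu, tau, lam=LAM, alpha=ALPHA):
    m, s = mu * omega, np.sqrt(nu * tau)
    return 0.5 * lam * (-(alpha + m) * erfc(m / (np.sqrt(2) * s))
                        + alpha * np.exp(m + s**2 / 2) * erfc((m + s**2) / (np.sqrt(2) * s))
                        + np.sqrt(2 / np.pi) * s * np.exp(-m**2 / (2 * s**2))
                        + 2 * m)

def xi_tilde(mu, omega, nu, tau, lam=LAM, alpha=ALPHA):
    m, s2 = mu * omega, nu * tau
    s = np.sqrt(s2)
    return 0.5 * lam**2 * ((m**2 + s2) * (2 - erfc(m / (np.sqrt(2) * s)))
                           + alpha**2 * (-2 * np.exp(m + s2 / 2) * erfc((m + s2) / (np.sqrt(2) * s))
                                         + np.exp(2 * (m + s2)) * erfc((m + 2 * s2) / (np.sqrt(2) * s))
                                         + erfc(m / (np.sqrt(2) * s)))
                           + np.sqrt(2 / np.pi) * m * s * np.exp(-m**2 / (2 * s2)))

mt = mu_tilde(0.0, 0.0, 1.0, 1.0)
print(mt, xi_tilde(0.0, 0.0, 1.0, 1.0) - mt**2)  # approximately (0, 1): the fixed point
```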
1706.02633 | 50 | Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. 10 June 2014.
Arthur Gretton, Karsten M Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alex J Smola. A kernel method for the two-sample-problem. In Advances in neural information processing systems, pp. 513–520, 2007.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein GANs. 31 March 2017.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735–1780, 1997.
Stephanie L Hyland and Gunnar Rätsch. Learning unitary operators with help from u (n). In AAAI 2017, 2017.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. | 1706.02633#50 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 50 | Batch Normalization γ initialization. Table 2b controls for the impact of the new BN γ initialization introduced in §5.1. We show results for minibatch sizes 256 and 8k with the standard BN initialization (γ = 1 for all BN layers) and with our initialization (γ = 0 for the final BN layer of each residual block). The results show improved performance with γ = 0 for both minibatch sizes, and the improvement is slightly larger for the 8k minibatch size. This behavior also suggests that large minibatches are more easily affected by optimization difficulties. We expect that improved optimization and initialization methods will help push the boundary of large minibatch training.
ResNet-101. Results for ResNet-101 [16] are shown in Table 2c. Training ResNet-101 with a batch-size of kn = 8k | 1706.02677#50 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
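The γ-initialization trick described in the 1706.02677#50 chunk above (γ = 0 for the final BN layer of each residual block, so every block starts out close to an identity mapping) is easy to express. A hedged PyTorch-style sketch, not the paper's Caffe2 implementation; the "bn3" module name assumes torchvision-style Bottleneck blocks:

```python
import torch.nn as nn
import torchvision

def zero_init_final_bn_gamma(model, final_bn_name="bn3"):
    """Zero the scale (gamma, i.e. the BN weight) of the last BatchNorm in each
    residual block so the block initially adds nothing beyond its shortcut."""
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d) and name.split(".")[-1] == final_bn_name:
            nn.init.zeros_(module.weight)

model = torchvision.models.resnet50()
zero_init_final_bn_gamma(model)   # "bn3" is the final BN inside a Bottleneck block
```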
1706.02633 | 51 | Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547, 2017.
Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. 10 February 2015.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. | 1706.02633#51 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 51 | ResNet-101. Results for ResNet-101 [16] are shown in Table 2c. Training ResNet-101 with a batch-size of kn = 8k
kn     η            top-1 error (%)
256    0.05         23.92 ±0.10
256    0.10         23.60 ±0.12
256    0.20         23.68 ±0.09
8k     0.05 · 32    24.27 ±0.08
8k     0.10 · 32    23.74 ±0.09
8k     0.20 · 32    24.05 ±0.18
8k     0.10         41.67 ±0.10
8k     0.10 · √32   26.22 ±0.03
(a) Comparison of learning rate scaling rules. A reference learning rate of η = 0.1 works best for kn = 256 (23.68% error). The linear scaling rule suggests η = 0.1 · 32 when kn = 8k, which again gives best performance (23.74% error). Other ways of scaling η give worse results.
kn     η     γ-init   top-1 error (%)
256    0.1   1.0      23.84 ±0.18
256    0.1   0.0      23.60 ±0.12
8k     3.2   1.0      24.11 ±0.07
8k     3.2   0.0      23.74 ±0.09 | 1706.02677#51 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
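The scaling rule compared in Table 2a above is mechanical: multiply the kn = 256 reference learning rate by kn/256. A small sketch of the rule together with the gradual warmup the paper pairs it with; the 5-epoch warmup length and 30/60/80-epoch decay points follow the paper's ResNet-50 schedule and should be treated as assumptions for other setups:

```python
def scaled_lr(reference_lr=0.1, reference_batch=256, batch_size=8192):
    # Linear scaling rule: when the minibatch grows by k, grow the learning rate by k.
    return reference_lr * batch_size / reference_batch

def lr_at_epoch(epoch, peak_lr, warmup_epochs=5, start_lr=0.1,
                milestones=(30, 60, 80), decay=0.1):
    if epoch < warmup_epochs:
        # Gradual warmup: ramp linearly from the small reference lr up to the peak lr.
        frac = epoch / float(warmup_epochs)
        return start_lr + frac * (peak_lr - start_lr)
    lr = peak_lr
    for m in milestones:          # step decay after warmup
        if epoch >= m:
            lr *= decay
    return lr

peak = scaled_lr(batch_size=8192)                     # 0.1 * 32 = 3.2
print([round(lr_at_epoch(e, peak), 4) for e in (0, 2, 5, 29, 30, 60, 80)])
```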
1706.02515 | 52 | Since we focus on the fixed point (µ, ν) = (0, 1), we assume throughout the analysis that α = α01 and λ = λ01. We consider the functions µ̃(µ, ω, ν, τ, λ01, α01), ν̃(µ, ω, ν, τ, λ01, α01), and ξ̃(µ, ω, ν, τ, λ01, α01) on the domain Ω = {(µ, ω, ν, τ) | µ ∈ [µmin, µmax] = [−0.1, 0.1], ω ∈ [ωmin, ωmax] = [−0.1, 0.1], ν ∈ [νmin, νmax] = [0.8, 1.5], τ ∈ [τmin, τmax] = [0.95, 1.1]}.
Figure 2 visualizes the mapping g for ω = 0 and τ = 1 and α01 and λ01 at a few pre-selected points. It can be seen that (0, 1) is an attracting fixed point of the mapping g.
# A2 Theorems | 1706.02515#52 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
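The attraction towards (0, 1) that Figure 2 (referenced in the 1706.02515#52 chunk above) illustrates can also be checked by brute force: iterate the mean/variance mapping g with pre-activations drawn from N(µω, ντ). A Monte Carlo sketch; sampling stands in for the paper's closed-form Gaussian integrals, and the rounded constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
LAM, ALPHA = 1.0507, 1.6733          # lambda_01, alpha_01 (rounded)

def selu(z):
    return LAM * np.where(z > 0, z, ALPHA * np.expm1(z))

def g(mu, nu, omega=0.0, tau=1.0, n=1_000_000):
    # Under the paper's CLT argument the pre-activation is ~ N(mu*omega, nu*tau).
    z = rng.normal(mu * omega, np.sqrt(nu * tau), size=n)
    a = selu(z)
    return a.mean(), a.var()

mu, nu = 0.1, 1.5                    # a corner of the domain Omega
for _ in range(8):
    mu, nu = g(mu, nu, omega=0.05, tau=1.0)
print(mu, nu)                        # settles near the fixed point close to (0, 1)
```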
1706.02633 | 52 | Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Olof Mogren. C-RNN-GAN: Continuous recurrent neural networks with adversarial training. 29 November 2016.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755, 2016.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011. | 1706.02633#52 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 52 | (b) Batch normalization γ initialization. Initializing γ = 0 in the last BN layer of each residual block improves results for both small and large minibatches. This initialization leads to better optimization behavior which has a larger positive impact when training with large minibatches.
model type    kn    η     top-1 error (%)
ResNet-101    256   0.1   22.08 ±0.06
ResNet-101    8k    3.2   22.36 ±0.09
(c) The linear scaling rule applied to ResNet-101. The difference in error is about 0.3% between small and large minibatch training.
Table 2. ImageNet classification experiments. Unless noted all experiments use ResNet-50 and are averaged over 5 trials.
and a linearly scaled η = 3.2 results in an error of 22.36% vs. the kn = 256 baseline which achieves 22.08% with η = 0.1. In other words, ResNet-101 trained with minibatch 8k has a small 0.28% increase in error vs. the baseline. It is likely that the minibatch size of 8k lies on the edge of the useful minibatch training regime for ResNet-101, similarly to ResNet-50 (see Figure 1). | 1706.02677#52 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02633 | 53 | Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In Proceedings of The 33rd International Conference on Machine Learning, volume 3, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. 10 June 2016.
Dougal J Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, and Arthur Gretton. Generative models and model criticism via optimized maximum mean discrepancy. 14 November 2016.
Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. 5 November 2015.
Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger Grosse. On the quantitative analysis of Decoder-Based generative models. 14 November 2016. | 1706.02633#53 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 53 | The training time of ResNet-101 is 92.5 minutes in our implementation using 256 Tesla P100 GPUs and a minibatch size of 8k. We believe this is a compelling result if the speed-accuracy tradeoff of ResNet-101 is preferred.
ImageNet-5k. Observing the sharp increase in validation error between minibatch sizes of 8k and 16k on ImageNet-1k (Figure 1), a natural question is if the position of this "elbow" in the error curve is a function of dataset information content. To investigate this question, we adopt the ImageNet-5k dataset suggested by Xie et al. [39] that extends ImageNet-1k to 6.8 million images (roughly 5× larger) by adding 4k additional categories from ImageNet-22k [33]. We evaluate the 1k-way classification error on the original ImageNet-1k validation set as in [39].
The minibatch size vs. validation error curve for ImageNet-5k is shown in Figure 6. Qualitatively, the curve
[Figure 6 plot: ImageNet top-1 validation error vs. mini-batch size (512 to 64k)] | 1706.02677#53 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 54 | Theorem 1 shows that the mapping g defined by Eq. (4) and Eq. (5) exhibits a stable and attracting fixed point close to zero mean and unit variance. Theorem 1 establishes the self-normalizing property of self-normalizing neural networks (SNNs). The stable and attracting fixed point leads to robust learning through many layers. Theorem 1 (Stable and Attracting Fixed Points). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.95, 1.1]. For ω = 0 and τ = 1, the mapping Eq. (4) and Eq. (5) has the stable fixed point (µ, ν) = (0, 1). For other ω and τ the mapping Eq. (4) and Eq. (5) has a stable and attracting fixed point depending on (ω, τ) in the (µ, ν)-domain: µ ∈ [−0.03106, | 1706.02515#54 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 54 | Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger Grosse. On the quantitative analysis of Decoder-Based generative models. 14 November 2016.
Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. Improving neural machine translation with conditional sequence generative adversarial nets. arXiv preprint arXiv:1703.04887, 2017.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. 18 September 2016. | 1706.02633#54 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 54 | Figure 6. ImageNet-5k top-1 validation error vs. minibatch size with a fixed 90 epoch training schedule. The curve is qualitatively similar to results on ImageNet-1k (Figure 1) showing that a 5× increase in training data does not lead to a significant change in the maximum effective minibatch size.
ImageNet pre-training (kn, η, top-1 error) and COCO transfer (box AP, mask AP):
kn     η     top-1 error (%)   box AP (%)   mask AP (%)
256    0.1   23.60 ±0.12       35.9 ±0.1    33.9 ±0.1
512    0.2   23.48 ±0.09       35.8 ±0.1    33.8 ±0.2
1k     0.4   23.53 ±0.08       35.9 ±0.2    33.9 ±0.2
2k     0.8   23.49 ±0.11       35.9 ±0.1    33.9 ±0.1
4k     1.6   23.56 ±0.12       35.8 ±0.1    33.8 ±0.1
8k     3.2   23.74 ±0.09       35.8 ±0.1    33.9 ±0.2
16k    6.4   24.79 ±0.27       35.1 ±0.3    33.2 ±0.3 | 1706.02677#54 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02677 | 55 | (a) Transfer learning of large minibatch pre-training to Mask R-CNN. Box and mask AP (on COCO minival) are nearly identical for ResNet-50 models pre-trained with minibatches from 256 to 8k examples. With a minibatch pre-training size of 16k both ImageNet validation error and COCO AP deteriorate. This indicates that as long as ImageNet error is matched, large minibatches do not degrade transfer learning performance.
# GPUs   η (· 10⁻³)   box AP (%)   mask AP (%)
1        2.5          35.7         33.6
2        5.0          35.7         33.7
4        10.0         35.7         33.5
8        20.0         35.6         33.6
(b) Linear learning rate scaling applied to Mask R-CNN. Using the single ResNet-50 model from [16] (thus no std is reported), we train Mask R-CNN using from 1 to 8 GPUs following the linear learning rate scaling rule. Box and mask AP are nearly identical across all configurations, showing the successful generalization of the rule beyond classification.
# Table 3. Object detection on COCO with Mask R-CNN [14]. | 1706.02677#55 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 56 | # A2.2 Theorem 2: Decreasing Variance from Above
The next Theorem 2 states that the variance of unit activations does not explode through consecutive layers of self-normalizing networks. Even more, a large variance of unit activations decreases when propagated through the network. In particular this ensures that exploding gradients will never be observed. In contrast to the domain in the previous subsection, in which ν ∈ [0.8, 1.5], we now consider a domain in which the variance of the inputs is higher ν ∈ [3, 16] and even the range of the mean is increased µ ∈ [−1, 1]. We denote this new domain with the symbol Ω++ to indicate that the variance lies above the variance of the original domain Ω. In Ω++, we can show that the variance ν̃ in the next layer is always smaller than the original variance ν. Concretely, this theorem states that: Theorem 2 (Decreasing ν). For λ = λ01, α = α01 and the domain Ω++: −1 ≤ µ ≤ 1, −0.1 ≤ ω ≤ 0.1, 3 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25 we have for the mapping of the variance ν̃(µ, ω, ν, τ, λ, α) given in Eq. | 1706.02515#56 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
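A quick empirical sanity check of the Theorem 2 claim in the chunk above (ν̃ < ν on the high-variance domain Ω++): evaluate the variance mapping at the corners of the domain with Monte Carlo sampling. A sketch only, not the paper's analytic proof; the rounded λ01, α01 constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
LAM, ALPHA = 1.0507, 1.6733

def var_map(mu, omega, nu, tau, n=2_000_000):
    # Monte Carlo estimate of the mapped variance for z ~ N(mu*omega, nu*tau).
    z = rng.normal(mu * omega, np.sqrt(nu * tau), size=n)
    a = LAM * np.where(z > 0, z, ALPHA * np.expm1(z))
    return a.var()

# Corners of the domain Omega++ from Theorem 2.
for mu in (-1.0, 1.0):
    for omega in (-0.1, 0.1):
        for nu in (3.0, 16.0):
            for tau in (0.8, 1.25):
                assert var_map(mu, omega, nu, tau) < nu   # the variance shrinks
print("nu_tilde < nu held at all sampled corners")
```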
1706.02677 | 56 | # Table 3. Object detection on COCO with Mask R-CNN [14].
is very similar to the ImageNet-1k curve, showing that for practitioners it is unlikely that even a 5× increase in dataset size will automatically lead to a meaningful increase in usable minibatch size. Quantitatively, using an 8k minibatch increases the validation error by 0.26% from 25.83% for a 256 minibatch to 26.09%. An understanding of the precise relationship between generalization error, minibatch size, and dataset information content is open for future work.
# 5.4. Generalization to Detection and Segmentation
A low error rate on ImageNet is not typically an end goal. Instead, the utility of ImageNet training lies in learn- [Figure 7 plot: time per iteration (seconds) and time per ImageNet epoch (minutes) vs. mini-batch size (256 to 11k)] | 1706.02677#56 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 57 | ν̃(µ, ω, ν, τ, λ01, α01) < ν . (15)
The variance decreases in [3, 16] and all fixed points (µ, ν) of mapping Eq. (5) and Eq. (4) have ν < 3.
# A2.3 Theorem 3: Increasing Variance from Below
The next Theorem 3 states that the variance of unit activations does not vanish through consecutive layers of self-normalizing networks. Even more, a small variance of unit activations increases when
| 1706.02515#57 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02677 | 57 | Figure 7. Distributed synchronous SGD timing. Time per iteration (seconds) and time per ImageNet epoch (minutes) for training with different minibatch sizes. The baseline (kn = 256) uses 8 GPUs in a single server, while all other training runs distribute training over (kn/256) servers. With 352 GPUs (44 servers) our implementation completes one pass over all ~1.28 million ImageNet training images in about 30 seconds.
ing good features that transfer, or generalize well, to related tasks. A question of key importance is if the features learned with large minibatches generalize as well as the features learned with small minibatches? To test this, we adopt
the object detection and instance segmentation tasks on COCO [27] as these advanced perception tasks benefit substantially from ImageNet pre-training [10]. We use the recently developed Mask R-CNN [14] system that is capable of learning to detect and segment object instances. We follow all of the hyper-parameter settings used in [14] and only change the ResNet-50 model used to initialize Mask R-CNN training. We train Mask R-CNN on the COCO trainval35k split and report results on the 5k image minival split used in [14]. | 1706.02677#57 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 58 | propagated through the network. In particular this ensures that vanishing gradients will never be observed. In contrast to the first domain, in which ν ∈ [0.8, 1.5], we now consider two domains Ω1− and Ω2− in which the variance of the inputs is lower 0.05 ≤ ν ≤ 0.16 and 0.05 ≤ ν ≤ 0.24, and even the parameter τ is different 0.9 ≤ τ ≤ 1.25 to the original Ω. We denote this new domain with the symbol Ω− to indicate that the variance lies below the variance of the original domain Ω. In Ω1− and Ω2−, we can show that the variance ν̃ in the next layer is always larger than the original variance ν, which means that the variance does not vanish through consecutive layers of self-normalizing networks. Concretely, this theorem states that: Theorem 3 (Increasing ν). We consider λ = λ01, α = α01 and the two domains Ω1− = {(µ, ω, ν, τ) | −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.05 ≤ ν ≤ 0.16, 0.8 ≤ τ ≤ 1.25} and Ω2− = {(µ, ω, ν, τ) | −0.1 | 1706.02515#58 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
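To complement the Theorem 2 check earlier, the lower-bound claim of Theorem 3 in the preceding chunk can be probed by iterating the variance mapping from a small starting variance and confirming that the limiting variance stays above the stated bounds. A Monte Carlo sketch (µ is held fixed at 0.1 for simplicity; rounded constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
LAM, ALPHA = 1.0507, 1.6733

def var_map(nu, omega, tau, mu=0.1, n=1_000_000):
    # Monte Carlo estimate of the mapped variance for z ~ N(mu*omega, nu*tau).
    z = rng.normal(mu * omega, np.sqrt(nu * tau), size=n)
    a = LAM * np.where(z > 0, z, ALPHA * np.expm1(z))
    return a.var()

# Start from the smallest variance in Omega1- / Omega2- and iterate the variance map.
for tau, bound in ((0.8, 0.16), (0.9, 0.24)):
    nu = 0.05
    for _ in range(25):
        nu = var_map(nu, omega=0.1, tau=tau)
    print(tau, nu, nu > bound)   # the limiting variance stays above the bound
```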
1706.02677 | 58 | It is interesting to note that the concept of minibatch size in Mask R-CNN is different from the classification setting. As an extension of the image-centric Fast/Faster R-CNN [9, 31], Mask R-CNN exhibits different minibatch sizes for different layers: the network backbone uses two images (per GPU), but each image contributes 512 Regions-of-Interest for computing classification (multinomial cross-entropy), bounding-box regression (smooth-L1/Huber), and pixel-wise mask (28 × 28 binomial cross-entropy) losses. This diverse set of minibatch sizes and loss functions provides a good test case to the robustness of our approach. | 1706.02677#58 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02677 | 59 | Transfer learning from large minibatch pre-training. To test how large minibatch pre-training affects Mask R-CNN, we take ResNet-50 models trained on ImageNet-1k with 256 to 16k minibatches and use them to initialize Mask R-CNN training. For each minibatch size we pre-train 5 models and then train Mask R-CNN using all 5 models on COCO (35 models total). We report the mean box and mask APs, averaged over the 5 trials, in Table 3a. The results show that as long as ImageNet validation error is kept low, which is true up to 8k batch size, generalization to object de-
[Figure 8 plot: training throughput, 'ideal' vs. 'actual', vs. number of GPUs (8 to 352)]
Figure 8. Distributed synchronous SGD throughput. The small overhead when moving from a single server with 8 GPUs to multi-server distributed training (Figure 7, blue curve) results in linear throughput scaling that is marginally below ideal scaling (~90% efficiency). Most of the allreduce communication time is hidden by pipelining allreduce operations with gradient computation. Moreover, this is achieved with commodity Ethernet hardware. | 1706.02677#59 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
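For context on the gradient aggregation that Figure 8 above is measuring: after backpropagation each worker holds gradients for its local minibatch, and a sum-allreduce followed by division by the worker count recovers the gradient of the global minibatch. A minimal, non-overlapped sketch with torch.distributed; it assumes an already-initialized process group, and unlike the paper's Caffe2 system it does not pipeline the allreduce calls with backprop:

```python
import torch.distributed as dist

def allreduce_gradients(model, world_size):
    # Sum each parameter's gradient across workers, then average, so every
    # worker applies the same update computed from the global minibatch.
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad.div_(world_size)

# Typical use inside the training loop, after loss.backward():
#   allreduce_gradients(model, world_size=dist.get_world_size())
#   optimizer.step()
```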
1706.02515 | 60 | The mapping of the variance ν̃(µ, ω, ν, τ, λ, α) given in Eq. (5) increases
ν̃(µ, ω, ν, τ, λ01, α01) > ν (16)
in both Ω1− and Ω2−. All fixed points (µ, ν) of mapping Eq. (5) and Eq. (4) ensure for 0.8 ≤ τ that ν̃ > 0.16 and for 0.9 ≤ τ that ν̃ > 0.24. Consequently, the variance mapping Eq. (5) and Eq. (4) ensures a lower bound on the variance ν.
# A3 Proofs of the Theorems
# A3.1 Proof of Theorem 1
We have to show that the mapping g defined by Eq. (4) and Eq. (5) has a stable and attracting fixed point close to (0, 1). To prove this statement and Theorem 1, we apply the Banach fixed point theorem, which requires (1) that g is a contraction mapping and (2) that g does not map outside the function's domain, concretely: | 1706.02515#60 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
1706.02677 | 60 | tection matches the AP of the small minibatch baseline. We emphasize that we observed no generalization issues when transferring across datasets (from ImageNet to COCO) and across tasks (from classification to detection/segmentation) using models trained with large minibatches.
Linear scaling rule applied to Mask R-CNN. We also show evidence of the generality of the linear scaling rule using Mask R-CNN. In fact, this rule was already used without explicit discussion in [16] and was applied effectively as the default Mask R-CNN training scheme when using 8 GPUs. Table 3b provides experimental results showing that when training with 1, 2, 4, or 8 GPUs the linear learning rate rule results in constant box and mask AP. For these experiments, we initialize Mask R-CNN from the released MSRA ResNet-50 model, as was done in [14].
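As a concrete reading of the rule, the sketch below (ours, not the released training code) scales the reference learning rate of 0.1 for 256 images linearly with the global minibatch size and ramps it up over 5 epochs of gradual warmup; the 30/60/80-epoch step schedule and the function name are illustrative.

```python
def scaled_lr(epoch, it, iters_per_epoch, global_batch,
              base_lr=0.1, ref_batch=256, warmup_epochs=5,
              milestones=(30, 60, 80)):
    """Linear scaling rule with gradual warmup (illustrative sketch).

    The target LR is base_lr * global_batch / ref_batch.  During the first
    warmup_epochs the LR ramps linearly from base_lr up to the target;
    afterwards it is divided by 10 at every milestone epoch."""
    target = base_lr * global_batch / ref_batch
    if epoch < warmup_epochs:
        progress = (epoch * iters_per_epoch + it) / (warmup_epochs * iters_per_epoch)
        return base_lr + progress * (target - base_lr)
    return target * 0.1 ** sum(epoch >= m for m in milestones)

# Example for a global minibatch of 8192 images (~156 iterations per epoch):
print(scaled_lr(epoch=0, it=0, iters_per_epoch=156, global_batch=8192))   # 0.1
print(scaled_lr(epoch=5, it=0, iters_per_epoch=156, global_batch=8192))   # 3.2
print(scaled_lr(epoch=30, it=0, iters_per_epoch=156, global_batch=8192))  # 0.32
```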
1706.02515 | 61 | Theorem 4 (Banach Fixed Point Theorem). Let (X, d) be a non-empty complete metric space with a contraction mapping f : X → X. Then f has a unique fixed point xf ∈ X with f (xf ) = xf . Every sequence xn = f (xn−1) with starting element x0 ∈ X converges to the fixed point: xn → xf for n → ∞.
Contraction mappings are functions that map two points such that their distance is decreasing:
Definition 2 (Contraction mapping). A function f : X → X on a metric space X with distance d is a contraction mapping, if there is a 0 < δ < 1, such that for all points u and v in X: d(f (u), f (v)) ≤ δ d(u, v).
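To make the fixed-point iteration concrete, the following sketch (ours) applies the mean/variance mapping g repeatedly, estimating it by Monte Carlo with the SELU constants λ01 ≈ 1.0507 and α01 ≈ 1.6733; for ω = 0 and τ = 1 the iterates approach the fixed point (0, 1), as expected for a contraction.

```python
import numpy as np

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(z):
    return LAM * np.where(z > 0, z, ALPHA * (np.exp(z) - 1.0))

def g(mu, nu, omega=0.0, tau=1.0, n=1_000_000, seed=0):
    """One application of the mapping: mean and variance of selu(z)
    for z ~ N(mu*omega, nu*tau), estimated by Monte Carlo."""
    z = np.random.default_rng(seed).normal(mu * omega, np.sqrt(nu * tau), n)
    a = selu(z)
    return a.mean(), a.var()

mu, nu = 0.1, 1.5                       # a starting point inside the domain
for step in range(10):
    mu, nu = g(mu, nu, seed=step)
    print(f"step {step}: mu = {mu:+.4f}, nu = {nu:.4f}")   # -> (0, 1)
```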
To show that g is a contraction mapping in Ω with distance ∥·∥2, we use the Mean Value Theorem: for u, v ∈ Ω
∥g(u) − g(v)∥2 ≤ M ∥u − v∥2 , (17)
1706.02677 | 61 | # 5.5. Run Time
Figure 7 shows two visualizations of the run time characteristics of our system. The blue curve is the time per iteration as minibatch size varies from 256 to 11264 (11k). Notably this curve is relatively flat and the time per iteration increases only 12% while scaling the minibatch size by 44×. Visualized another way, the orange curve shows the approximately linear decrease in time per epoch from over 16 minutes to just 30 seconds. Run time performance can also be viewed in terms of throughput (images / second), as shown in Figure 8. Relative to a perfectly efficient extrapolation of the 8 GPU baseline, our implementation achieves ~90% scaling efficiency.
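For a rough sense of what this means end to end, a back-of-the-envelope calculation (ours; the 8-GPU throughput below is an assumed round number chosen to be consistent with the reported ~30 second epochs, not a measured figure):

```python
IMAGES_PER_EPOCH = 1_281_167                     # ImageNet-1k training images
GPUS = 256
BASELINE_GPUS, BASELINE_IMG_PER_S = 8, 1500.0    # assumed 8-GPU throughput
EFFICIENCY = 0.90

ideal = BASELINE_IMG_PER_S * GPUS / BASELINE_GPUS   # perfect linear scaling
actual = EFFICIENCY * ideal
epoch_seconds = IMAGES_PER_EPOCH / actual

print(f"ideal 256-GPU throughput : {ideal:8.0f} images/s")
print(f"at 90% efficiency        : {actual:8.0f} images/s")
print(f"time per epoch           : {epoch_seconds:8.1f} s")
print(f"90 epochs                : {90 * epoch_seconds / 60:8.1f} min")
```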
Acknowledgements. We would like to thank Leon Bottou for helpful discussions on theoretical background, Jerry Pan and Christian Puhrsch for discussions on efficient data loading, Andrew Dye for help with debugging distributed training, and Kevin Lee, Brian Dodds, Jia Ning, Koh Yew Thoon, Micah Harris, and John Volk for Big Basin and hardware support.
1706.02515 | 62 | ∥g(u) − g(v)∥2 ≤ M ∥u − v∥2 , (17)
in which M is an upper bound on the spectral norm of the Jacobian H of g. The spectral norm is given by the largest singular value of the Jacobian of g. If the largest singular value of the Jacobian is smaller than 1, the mapping g of the mean and variance to the mean and variance in the next layer is contracting. We show that the largest singular value is smaller than 1 by evaluating the function for the singular value S(µ, ω, ν, τ, λ, α) on a grid. Then we use the Mean Value Theorem to bound the deviation of the function S between grid points. To this end, we have to bound the gradient of S with respect to (µ, ω, ν, τ). If all function values plus gradient times the deltas (differences between grid points and evaluated points) are still smaller than 1, then we have proved that the function is below 1 (Lemma 12). To show that the mapping does not map outside the function's domain, we derive bounds on the expressions for the mean and the variance (Lemma 13). Section A3.4.1 and Section A3.4.2 are concerned with the contraction mapping and the image of the function domain of g, respectively.
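The grid argument can be sketched generically (our illustration, not the paper's computation): if the gradient norm of a function S is bounded by G on a box, then checking S(center) + G · (cell half-diagonal) < 1 at every grid-cell center certifies S < 1 on the whole box.

```python
import itertools
import numpy as np

def certify_below_one(S, lows, highs, grad_bound, points_per_dim=20):
    """Certify sup S < 1 on a box via grid evaluation + Mean Value Theorem.

    S           callable mapping a 1-D numpy array (one point) to a float
    lows, highs box bounds per dimension
    grad_bound  upper bound G on ||grad S||_2 over the box
    Returns True if S(center) + G * (cell half-diagonal) < 1 at every grid
    cell center, which by the Mean Value Theorem implies S < 1 on the box."""
    lows, highs = np.asarray(lows, float), np.asarray(highs, float)
    step = (highs - lows) / points_per_dim
    half_diag = 0.5 * float(np.linalg.norm(step))
    centers = [lows[i] + step[i] * (np.arange(points_per_dim) + 0.5)
               for i in range(len(lows))]
    return all(S(np.array(p)) + grad_bound * half_diag < 1.0
               for p in itertools.product(*centers))

# Toy usage: S(x) = 0.5 * ||x||_2 on [-0.5, 0.5]^2, whose gradient norm is 0.5.
print(certify_below_one(lambda x: 0.5 * np.linalg.norm(x),
                        [-0.5, -0.5], [0.5, 0.5], grad_bound=0.5))  # True
```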
1706.02677 | 62 | # References
[1] J. Bagga, H. Morsy, and Z. Yao. Opening designs for 6-pack and Wedge 100. https://code.facebook.com/posts/203733993317833/opening-designs-for-6-pack-and-wedge-100, 2016.
[2] M. Barnett, L. Shuler, R. van De Geijn, S. Gupta, D. G. Payne, and J. Watts. Interprocessor collective communication library (InterCom). In Scalable High-Performance Computing Conference, 1994.
[3] L. Bottou. Curiously fast convergence of some stochastic gradient descent algorithms. Unpublished open problem offered to the attendance of the SLDS 2009 conference, 2009.
[4] L. Bottou, F. E. Curtis, and J. Nocedal. Opt. methods for large-scale machine learning. arXiv:1606.04838, 2016.
[5] J. Chen, X. Pan, R. Monga, S. Bengio, and R. Jozefowicz. Revisiting Distributed Synchronous SGD. arXiv:1604.00981, 2016.
1706.02677 | 63 | [6] K. Chen and Q. Huo. Scalable training of deep learning machines by incremental block training with intra-block parallel optimization and blockwise model-update filtering. In ICASSP, 2016.
[7] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. JMLR, 2011.
[8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[9] R. Girshick. Fast R-CNN. In ICCV, 2015.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
1706.02515 | 64 | Theorem (Stable and Attracting Fixed Points). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.95, 1.1]. For ω = 0 and τ = 1, the mapping Eq. (4) and Eq. (5) has the stable fixed point (µ, ν) = (0, 1). For other ω and τ the mapping Eq. (4) and Eq. (5) has a stable and attracting fixed point depending on (ω, τ) in the (µ, ν)-domain: µ ∈ [−0.03106, 0.06773] and ν ∈ [0.80009, 1.48617]. All points within the (µ, ν)-domain converge when iteratively applying the mapping Eq. (4) and Eq. (5) to this fixed point.
1706.02677 | 64 | [11] W. Gropp, E. Lusk, and A. Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface. MIT Press, Cambridge, MA, 1999.
[12] S. Gross and M. Wilber. Training and investigating Residual Nets. https://github.com/facebook/fb.resnet.torch, 2016.
[13] M. Gürbüzbalaban, A. Ozdaglar, and P. Parrilo. Why random reshuffling beats stochastic gradient descent. arXiv:1510.08560, 2015.
[14] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. arXiv:1703.06870, 2017.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015.
[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
1706.02515 | 65 | Proof. According to Lemma 12 the mapping g (Eq. (4) and Eq. (5)) is a contraction mapping in the given domain, that is, it has a Lipschitz constant smaller than one. We showed that (µ, ν) = (0, 1) is a fixed point of the mapping for (ω, τ) = (0, 1).
The domain is compact (bounded and closed), therefore it is a complete metric space. We further have to make sure the mapping g does not map outside its domain Ω. According to Lemma 13, the mapping maps into the domain µ ∈ [−0.03106, 0.06773] and ν ∈ [0.80009, 1.48617].
Now we can apply the Banach fixed point theorem given in Theorem 4 from which the statement of the theorem follows.
# A3.2 Proof of Theorem 2
First we recall Theorem 2. Theorem (Decreasing ν). For λ = λ01, α = α01 and the domain Ω++: −1 ≤ µ ≤ 1, −0.1 ≤ ω ≤ 0.1, 3 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25 we have for the mapping of the variance ν̃(µ, ω, ν, τ, λ, α) given in Eq. (5):
1706.02677 | 65 | [17] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.
[18] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv:1510.08560, 2016.
[19] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[20] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. ICLR, 2017.
1706.02515 | 66 | ν̃(µ, ω, ν, τ, λ01, α01) < ν . (18)
The variance decreases in [3, 16] and all fixed points (µ, ν) of the mapping Eq. (5) and Eq. (4) have ν < 3.
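Before the proof, a quick Monte Carlo sanity check of this claim at the corners of the domain (our sketch; ξ̃ is the second moment and ν̃ the variance of the SELU output):

```python
import numpy as np

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(z):
    return LAM * np.where(z > 0, z, ALPHA * (np.exp(z) - 1.0))

def mapped_moments(mu, omega, nu, tau, n=2_000_000, seed=0):
    """Monte Carlo estimate of (second moment, variance) of selu(z)
    for z ~ N(mu*omega, nu*tau)."""
    z = np.random.default_rng(seed).normal(mu * omega, np.sqrt(nu * tau), n)
    a = selu(z)
    return np.mean(a ** 2), a.var()

# corners of the domain: nu in {3, 16}, tau in {0.8, 1.25}, mu*omega = 0.1
for nu in (3.0, 16.0):
    for tau in (0.8, 1.25):
        xi, new_nu = mapped_moments(mu=1.0, omega=0.1, nu=nu, tau=tau)
        print(f"nu={nu:5.1f} tau={tau:4.2f}  xi~={xi:6.2f}  nu~={new_nu:6.2f}")
```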
Proof. We start by considering an even larger domain: −1 ≤ µ ≤ 1, −0.1 ≤ ω ≤ 0.1, 1.5 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25. We prove facts for this domain and later restrict to 3 ≤ ν ≤ 16, i.e. Ω++. We consider the function g of the difference between the second moment ξ̃ in the next layer and the variance ν in the lower layer:
g(µ, ω, ν, τ, λ01, α01) = ξ̃(µ, ω, ν, τ, λ01, α01) − ν . (19) If we can show that g(µ, ω, ν, τ, λ01, α01) < 0 for all (µ, ω, ν, τ) ∈ Ω++, then we would obtain our desired result ν̃ ≤ ξ̃ < ν. The derivative with respect to ν is according to Theorem 16:
1706.02677 | 66 | [21] A. Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv:1404.5997, 2014.
[22] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural nets. In NIPS, 2012.
[23] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989.
[24] K. Lee. Introducing Big Basin: Our next-generation AI hardware. https://code.facebook.com/posts/1835166200089399/introducing-big-basin, 2017.
[25] M. Li. Scaling Distributed Machine Learning with System and Algorithm Co-design. PhD thesis, Carnegie Mellon University, 2017.
[26] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
1706.02515 | 67 | ∂/∂ν g(µ, ω, ν, τ, λ01, α01) = ∂/∂ν ξ̃(µ, ω, ν, τ, λ01, α01) − 1 < 0 . (20)
Therefore g is strictly monotonically decreasing in ν. Since ξ̃ is a function in ντ (these variables only appear as this product), we have for x = ντ
∂ξ̃/∂ν = ∂ξ̃/∂x · ∂x/∂ν = ∂ξ̃/∂x · τ (21)
and
∂ξ̃/∂τ = ∂ξ̃/∂x · ∂x/∂τ = ∂ξ̃/∂x · ν . (22)
Therefore we have according to Theorem 16:
∂/∂τ ξ̃(µ, ω, ν, τ, λ01, α01) = (ν/τ) ∂/∂ν ξ̃(µ, ω, ν, τ, λ01, α01) > 0 . (23)
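The identity in Eq. (23) only uses the fact that ν and τ enter ξ̃ through the product x = ντ; a tiny finite-difference check on an arbitrary smooth function of ντ (ours, with np.tanh standing in for ξ̃):

```python
import numpy as np

def check_identity(h, nu, tau, eps=1e-6):
    """For f(nu, tau) = h(nu * tau) verify  d f/d tau = (nu/tau) * d f/d nu
    by central finite differences."""
    f = lambda n, t: h(n * t)
    df_dnu = (f(nu + eps, tau) - f(nu - eps, tau)) / (2 * eps)
    df_dtau = (f(nu, tau + eps) - f(nu, tau - eps)) / (2 * eps)
    return df_dtau, (nu / tau) * df_dnu

print(check_identity(np.tanh, nu=1.2, tau=0.9))   # the two values agree
```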
1706.02677 | 67 | [27] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[28] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[29] Y. Nesterov. Introductory lectures on convex optimization: A basic course. Springer, 2004.
[30] R. Rabenseifner. Optimization of collective reduction operations. In ICCS. Springer, 2004.
[31] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
1706.02515 | 68 | Therefore
∂/∂τ g(µ, ω, ν, τ, λ01, α01) = ∂/∂τ ξ̃(µ, ω, ν, τ, λ01, α01) > 0 . (24)
Consequently, g is strictly monotonically increasing in τ. Now we consider the derivative with respect to µ and ω. We start with ∂/∂µ:
∂/∂µ ξ̃(µ, ω, ν, τ, λ, α) = … (25)
We consider the sub-function
… (26)
We set x = ντ and y = µω and obtain
… (27)
1706.02677 | 68 | [32] H. Robbins and S. Monro. A stochastic approximation method. The annals of mathematical statistics, 1951.
[33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[34] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
[35] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[36] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
1706.02515 | 69 | The derivative of this sub-function with respect to y is
… ( e^{(2x+y)²/(2x)} (2x + y) erfc((2x + y)/(√2 √x)) − e^{(x+y)²/(2x)} (x + y) erfc((x + y)/(√2 √x)) ) > 0 . (28)
The inequality follows from Lemma 24, which states that z e^{z²} erfc(z) is monotonically increasing in z. Therefore the sub-function is increasing in y. The derivative of this sub-function with respect to x is
… (29)
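The monotonicity invoked above (Lemma 24) is easy to check numerically (our sketch, using scipy.special.erfc):

```python
import numpy as np
from scipy.special import erfc

z = np.linspace(0.0, 6.0, 61)
f = z * np.exp(z ** 2) * erfc(z)        # the quantity from Lemma 24
print(bool(np.all(np.diff(f) > 0)))     # True: increasing on this range
print(f[0], f[-1])                      # 0.0 ... approaching 1/sqrt(pi) ~ 0.564
```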
1706.02677 | 69 | [37] R. Thakur, R. Rabenseifner, and W. Gropp. Optimization of collective comm. operations in MPICH. IJHPCA, 2005.
[38] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144, 2016.
[39] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.
[40] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, and G. Zweig. The Microsoft 2016 Conversational Speech Recognition System. arXiv:1609.03528, 2016.
[41] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional neural networks. In ECCV, 2014.
1706.02515 | 72 | … (a chain of inequalities that bounds this expression by applying Lemma 22 twice to the e^{(·)²} erfc(·) terms and bringing everything over the common denominator (2(2x + y) + 1)(2(x + y) + 0.878) √2 √π x^{3/2}; the individual steps are explained below)
1706.02515 | 73 | … after bounding each y-dependent term of the numerator by its worst case over y ∈ [−0.1, 0.1], the numerator is at least 8x³ + 2.94569x² − 2.25624x = 8x(x − 0.377966)(x + 0.746178), over the positive denominator (2(2x + y) + 1)(2(x + y) + 0.878) √2 √π √x.
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
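The chunk above (1706.02515#73) pushes a polynomial numerator through a worst-case substitution of $y \in [-0.1, 0.1]$ and two factorizations. The snippet below is a minimal numeric sanity check of those reconstructed steps; it is editor-added illustration, not code from the paper or from github.com/bioinf-jku/SNNs, and the function names and test grid are arbitrary choices.

```python
# Sanity check for the polynomial bound in chunk 1706.02515#73 (as reconstructed above).
import numpy as np

def numerator(x, y):
    # Polynomial numerator as it appears in the chunk.
    return (8*x**3 + 12*x**2*y + 4.14569*x**2 + 4*x*y**2
            - 6.76009*x*y - 1.58023*x + 0.683154*y**2)

def lower_bound(x):
    # Worst-case y chosen term by term for y in [-0.1, 0.1] and x > 0:
    # 12*x^2*y at y = -0.1, 4*x*y^2 at y = 0, -6.76009*x*y at y = +0.1, 0.683154*y^2 at y = 0.
    return 8*x**3 - 0.1*12*x**2 + 4.14569*x**2 - 6.76009*0.1*x - 1.58023*x

# The substitution can only decrease the numerator on the box.
for x in np.linspace(0.5, 2.0, 7):
    for y in np.linspace(-0.1, 0.1, 5):
        assert numerator(x, y) >= lower_bound(x) - 1e-12

# Factoring x out leaves 8*x^2 + 2.94569*x - 2.25624, whose roots match the
# factored form 8*(x - 0.377966)*(x + 0.746178) up to rounding of the printed constants.
xs = np.linspace(0.5, 2.0, 7)
print(np.max(np.abs(lower_bound(xs) - xs * 8 * (xs - 0.377966) * (xs + 0.746178))))  # on the order of 1e-5
```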
1706.02515 | 75 | We explain this chain of inequalities:
⢠First inequality: We applied Lemma 22 two times.
⢠Equalities factor out x and reformulate.
⢠Second inequality part 1: we applied
0 < 2y =â (2x + y)2 + 4x + 1 < (2x + y)2 + 2(2x + y) + 1 = (2x + y + 1)2 .
• Second inequality part 2: we show that for $a = \frac{1}{10}\left(\sqrt{\frac{960 + 169\pi}{\pi}} - 13\right)$ the following holds: $\frac{8x}{\pi} - \left(a^2 + 2a(x + y)\right) \geqslant 0$. We have $\frac{\partial}{\partial x}\left(\frac{8x}{\pi} - \left(a^2 + 2a(x + y)\right)\right) = \frac{8}{\pi} - 2a > 0$ and $\frac{\partial}{\partial y}\left(\frac{8x}{\pi} - \left(a^2 + 2a(x + y)\right)\right) = -2a < 0$. Therefore the minimum is at the border for minimal $x$ and maximal $y$:
$$\frac{8 \cdot 1.2}{\pi} - \left(\left(\frac{1}{10}\left(\sqrt{\frac{960 + 169\pi}{\pi}} - 13\right)\right)^2 + 2 \cdot \frac{1}{10}\left(\sqrt{\frac{960 + 169\pi}{\pi}} - 13\right)(1.2 + 0.1)\right) = 0 . \quad (32)$$
Thus
$$\frac{8x}{\pi} \geqslant a^2 + 2a(x + y) \quad (33)$$
for $a = \frac{1}{10}\left(\sqrt{\frac{960 + 169\pi}{\pi}} - 13\right) \geqslant 0.878$.
⢠Equalities only solve square root and factor out the resulting terms (2(2x + y) + 1) and (2(x + y) + 0.878). | 1706.02515#75 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
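Chunk 1706.02515#75 above argues that $f(x, y) = \frac{8x}{\pi} - \left(a^2 + 2a(x + y)\right)$ is non-negative on the relevant box because its partial derivatives have fixed signs, so the minimum sits at the corner with minimal $x$ and maximal $y$, where it vanishes by the choice of $a$. The sketch below checks this numerically under the reconstructed border values $x = 1.2$ and $y = 0.1$; it is editor-added illustration, not code from the paper, and the name `f` is an assumption.

```python
# Numeric check of the border argument behind inequality (33) in chunk 1706.02515#75.
import math

# a as reconstructed above: the positive root of a^2 + 2.6*a - 9.6/pi = 0.
a = (math.sqrt((960 + 169 * math.pi) / math.pi) - 13) / 10
print(round(a, 4))                   # 0.8785, i.e. a >= 0.878

def f(x, y):
    return 8 * x / math.pi - (a**2 + 2 * a * (x + y))

# The partial derivatives are constants, so their signs hold everywhere:
print(8 / math.pi - 2 * a > 0)       # df/dx > 0 -> minimum at minimal x
print(-2 * a < 0)                    # df/dy < 0 -> minimum at maximal y

# At the corner (x, y) = (1.2, 0.1) the function vanishes (up to float error),
# hence f(x, y) >= 0 on the whole box, which is inequality (33).
print(abs(f(1.2, 0.1)) < 1e-12)
```

Because $f$ is affine in both $x$ and $y$, the two constant derivative signs plus the single corner evaluation are enough to establish $f \geqslant 0$ on the whole box.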