Xception: Deep Learning with Depthwise Separable Convolutions
# 1. Introduction

Convolutional neural networks have emerged as the master algorithm in computer vision in recent years, and developing recipes for designing them has been a subject of considerable attention. The history of convolutional neural network design started with LeNet-style models [10], which were simple stacks of convolutions for feature extraction and max-pooling operations for spatial sub-sampling. In 2012, these ideas were refined into the AlexNet architecture [9], where convolution operations were repeated multiple times in-between max-pooling operations, allowing the network to learn richer features at every spatial scale. What followed was a trend to make this style of network increasingly deeper, mostly driven by the yearly ILSVRC competition; first with Zeiler and Fergus in 2013 [25] and then with the VGG architecture in 2014 [18]. At this point a new style of network emerged: the Inception architecture, introduced by Szegedy et al. in 2014 [20].

# 1.1. The Inception hypothesis

A convolution layer attempts to learn filters in a 3D space, with 2 spatial dimensions (width and height) and a channel dimension; thus a single convolution kernel is tasked with simultaneously mapping cross-channel correlations and spatial correlations.

The idea behind the Inception module is to make this process easier and more efficient by explicitly factoring it into a series of operations that would independently look at cross-channel correlations and at spatial correlations. More precisely, the typical Inception module first looks at cross-channel correlations via a set of 1x1 convolutions, mapping the input data into 3 or 4 separate spaces that are smaller than the original input space, and then maps all correlations in these smaller 3D spaces, via regular 3x3 or 5x5 convolutions. This is illustrated in figure 1. In effect, the fundamental hypothesis behind Inception is that cross-channel correlations and spatial correlations are sufficiently decoupled that it is preferable not to map them jointly¹.

¹ A variant of the process is to independently look at width-wise correlations and height-wise correlations. This is implemented by some of the modules found in Inception V3, which alternate 7x1 and 1x7 convolutions. The use of such spatially separable convolutions has a long history in image processing and has been used in some convolutional neural network implementations since at least 2012 (possibly earlier).
Consider a simplified version of an Inception module that only uses one size of convolution (e.g. 3x3) and does not include an average pooling tower (figure 2). This Inception module can be reformulated as a large 1x1 convolution followed by spatial convolutions that would operate on non-overlapping segments of the output channels (figure 3). This observation naturally raises the question: what is the effect of the number of segments in the partition (and their size)? Would it be reasonable to make a much stronger hypothesis than the Inception hypothesis, and assume that cross-channel correlations and spatial correlations can be mapped completely separately?

Figure 1. A canonical Inception module (Inception V3). [Diagram: input feeding several 1x1 convolutions and an average-pooling tower, most followed by 3x3 convolutions, concatenated at the output.]

Figure 2. A simplified Inception module. [Diagram: input feeding three 3x3 convolution towers, concatenated at the output.]
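To make the reformulation above concrete, here is a minimal tf.keras sketch (not the authors' code) of the simplified module of figure 2 and its figure 3 equivalent; the channel counts, ReLU activations and helper names (`simplified_inception`, `reformulated_inception`) are illustrative assumptions rather than details taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def simplified_inception(x, channels_per_segment=64, segments=3):
    # Figure 2: each tower applies its own 1x1 convolution, then a 3x3 convolution.
    towers = []
    for _ in range(segments):
        t = layers.Conv2D(channels_per_segment, 1, padding="same", activation="relu")(x)
        t = layers.Conv2D(channels_per_segment, 3, padding="same", activation="relu")(t)
        towers.append(t)
    return layers.Concatenate()(towers)

def reformulated_inception(x, channels_per_segment=64, segments=3):
    # Figure 3: one large 1x1 convolution, then 3x3 convolutions applied to
    # non-overlapping segments of its output channels.
    y = layers.Conv2D(channels_per_segment * segments, 1, padding="same", activation="relu")(x)
    chunks = layers.Lambda(lambda t: tf.split(t, segments, axis=-1))(y)
    outs = [layers.Conv2D(channels_per_segment, 3, padding="same", activation="relu")(c)
            for c in chunks]
    return layers.Concatenate()(outs)
```

The number of segments in the split is exactly the partition size discussed in the question above.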
# 1.2. The continuum between convolutions and separable convolutions

An "extreme" version of an Inception module, based on this stronger hypothesis, would first use a 1x1 convolution to map cross-channel correlations, and would then separately map the spatial correlations of every output channel. This is shown in figure 4. We remark that this extreme form of an Inception module is almost identical to a depthwise separable convolution, an operation that has been used in neural network design as early as 2014 [15] and has become more popular since its inclusion in the TensorFlow framework [1] in 2016.
Figure 3. A strictly equivalent reformulation of the simplified Inception module. [Diagram: input, one large 1x1 convolution, then 3x3 convolutions over non-overlapping segments of the output channels.]

Figure 4. An "extreme" version of our Inception module, with one spatial convolution per output channel of the 1x1 convolution. [Diagram: input, 1x1 convolution, then one 3x3 convolution per output channel.]

A depthwise separable convolution, commonly called "separable convolution" in deep learning frameworks such as TensorFlow and Keras, consists in a depthwise convolution, i.e. a spatial convolution performed independently over each channel of an input, followed by a pointwise convolution, i.e. a 1x1 convolution, projecting the channels output by the depthwise convolution onto a new channel space. This is not to be confused with a spatially separable convolution, which is also commonly called "separable convolution" in the image processing community.
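As a sketch of the operation just described (a minimal tf.keras illustration, not taken from the paper; the helper name and the `use_bias=False` choice are assumptions), a depthwise separable convolution can be written explicitly as a depthwise spatial convolution followed by a pointwise 1x1 convolution, with no non-linearity in between:

```python
from tensorflow.keras import layers

def depthwise_separable_conv(x, filters, kernel_size=3):
    x = layers.DepthwiseConv2D(kernel_size, padding="same", use_bias=False)(x)  # spatial, one filter per input channel
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)            # pointwise, cross-channel projection
    return x

# Frameworks bundle both steps into a single layer; in Keras this is roughly
# layers.SeparableConv2D(filters, kernel_size, padding="same", use_bias=False).
```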
Two minor differences between an "extreme" version of an Inception module and a depthwise separable convolution would be:

• The order of the operations: depthwise separable convolutions as usually implemented (e.g. in TensorFlow) perform the channel-wise spatial convolution first and then perform the 1x1 convolution, whereas Inception performs the 1x1 convolution first.

• The presence or absence of a non-linearity after the first operation. In Inception, both operations are followed by a ReLU non-linearity, whereas depthwise separable convolutions are usually implemented without non-linearities.
We argue that the first difference is unimportant, in particular because these operations are meant to be used in a stacked setting. The second difference might matter, and we investigate it in the experimental section (in particular, see figure 10).

We also note that other intermediate formulations of Inception modules, lying in between regular Inception modules and depthwise separable convolutions, are possible: in effect, there is a discrete spectrum between regular convolutions and depthwise separable convolutions, parametrized by the number of independent channel-space segments used for performing spatial convolutions. A regular convolution (preceded by a 1x1 convolution), at one extreme of this spectrum, corresponds to the single-segment case; a depthwise separable convolution corresponds to the other extreme, where there is one segment per channel; Inception modules lie in between, dividing a few hundred channels into 3 or 4 segments. The properties of such intermediate modules appear not to have been explored yet.

Having made these observations, we suggest that it may be possible to improve upon the Inception family of architectures by replacing Inception modules with depthwise separable convolutions, i.e. by building models that would be stacks of depthwise separable convolutions. This is made practical by the efficient depthwise convolution implementation available in TensorFlow. In what follows, we present a convolutional neural network architecture based on this idea, with a similar number of parameters as Inception V3, and we evaluate its performance against Inception V3 on two large-scale image classification tasks.
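One way to experiment with points along this spectrum is with grouped convolutions; the sketch below assumes the `groups` argument of `tf.keras.layers.Conv2D` (available in recent TensorFlow releases) and is an illustration of the idea rather than anything used in the paper.

```python
from tensorflow.keras import layers

def segmented_module(x, filters, segments):
    # segments=1        -> a regular convolution preceded by a 1x1 convolution
    # segments=filters  -> the depthwise separable extreme (one segment per channel)
    # segments=3 or 4   -> Inception-like intermediate points
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)   # cross-channel mapping
    x = layers.Conv2D(filters, 3, padding="same", groups=segments,     # spatial mapping per segment
                      use_bias=False)(x)
    return x
```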
# 2. Prior work

The present work relies heavily on prior efforts in the following areas:

• Convolutional neural networks [10, 9, 25], in particular the VGG-16 architecture [18], which is schematically similar to our proposed architecture in a few respects.

• The Inception architecture family of convolutional neural networks [20, 7, 21, 19], which first demonstrated the advantages of factoring convolutions into multiple branches operating successively on channels and then on space.
• Depthwise separable convolutions, which our proposed architecture is entirely based upon. While the use of spatially separable convolutions in neural networks has a long history, going back to at least 2012 [12] (but likely even earlier), the depthwise version is more recent. Laurent Sifre developed depthwise separable convolutions during an internship at Google Brain in 2013, and used them in AlexNet to obtain small gains in accuracy and large gains in convergence speed, as well as a significant reduction in model size. An overview of his work was first made public in a presentation at ICLR 2014 [23]. Detailed experimental results are reported in Sifre's thesis, section 6.2 [15]. This initial work on depthwise separable convolutions was inspired by prior research from Sifre and Mallat on transformation-invariant scattering [16, 15]. Later, a depthwise separable convolution was used as the first layer of Inception V1 and Inception V2 [20, 7]. Within Google, Andrew Howard [6] has introduced efficient mobile models called MobileNets using depthwise separable convolutions. Jin et al. in 2014 [8] and Wang et al. in 2016 [24] also did related work aiming at reducing the size and computational cost of convolutional neural networks using separable convolutions.
Additionally, our work is only possible due to the inclusion of an efficient implementation of depthwise separable convolutions in the TensorFlow framework [1].

• Residual connections, introduced by He et al. in [4], which our proposed architecture uses extensively.

# 3. The Xception architecture

We propose a convolutional neural network architecture based entirely on depthwise separable convolution layers. In effect, we make the following hypothesis: that the mapping of cross-channel correlations and spatial correlations in the feature maps of convolutional neural networks can be entirely decoupled. Because this hypothesis is a stronger version of the hypothesis underlying the Inception architecture, we name our proposed architecture Xception, which stands for "Extreme Inception".

A complete description of the specifications of the network is given in figure 5.
The Xception architecture has 36 convolutional layers forming the feature extraction base of the network. In our experimental evaluation we will exclusively investigate image classification, and therefore our convolutional base will be followed by a logistic regression layer. Optionally, one may insert fully-connected layers before the logistic regression layer, which is explored in the experimental evaluation section (in particular, see figures 7 and 8). The 36 convolutional layers are structured into 14 modules, all of which have linear residual connections around them, except for the first and last modules.

In short, the Xception architecture is a linear stack of depthwise separable convolution layers with residual connections. This makes the architecture very easy to define and modify; it takes only 30 to 40 lines of code using a high-level library such as Keras [2] or TensorFlow-Slim [17], not unlike an architecture such as VGG-16 [18], but rather unlike architectures such as Inception V2 or V3, which are far more complex to define.
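As an illustration of how compact such a definition can be, here is a minimal tf.keras sketch of one middle-flow-style module; this is not the reference implementation, and the function name is an assumption, but the ReLU-first ordering, batch normalization and `channels=728` follow the description given in figure 5.

```python
from tensorflow.keras import layers

def middle_flow_module(x, channels=728):
    # Three separable convolutions with a linear residual connection around them.
    shortcut = x
    for _ in range(3):
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(channels, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
    return layers.Add()([x, shortcut])
```

Stacking eight such modules reproduces the middle flow; the entry and exit flows differ only in their channel counts, max-pooling layers and strided 1x1 shortcuts.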
An open-source implementation of Xception using Keras and TensorFlow is provided as part of the Keras Applications module (https://keras.io/applications/#xception), under the MIT license.

# 4. Experimental evaluation

We choose to compare Xception to the Inception V3 architecture, due to their similarity of scale: Xception and Inception V3 have nearly the same number of parameters (table 3), and thus any performance gap could not be attributed to a difference in network capacity. We conduct our comparison on two image classification tasks: one is the well-known 1000-class single-label classification task on the ImageNet dataset [14], and the other is a 17,000-class multi-label classification task on the large-scale JFT dataset.

# 4.1. The JFT dataset

JFT is an internal Google dataset for large-scale image classification, first introduced by Hinton et al. in [5], which comprises over 350 million high-resolution images annotated with labels from a set of 17,000 classes. To evaluate the performance of a model trained on JFT, we use an auxiliary dataset, FastEval14k.

FastEval14k is a dataset of 14,000 images with dense annotations from about 6,000 classes (36.5 labels per image on average). On this dataset we evaluate performance using Mean Average Precision for the top 100 predictions (MAP@100), and we weight the contribution of each class to MAP@100 with a score estimating how common (and therefore important) the class is among social media images. This evaluation procedure is meant to capture performance on frequently occurring labels from social media, which is crucial for production models at Google.
# 4.2. Optimization configuration

A different optimization configuration was used for ImageNet and JFT (a code sketch follows the list):

• On ImageNet:
  – Optimizer: SGD
  – Momentum: 0.9
  – Initial learning rate: 0.045
  – Learning rate decay: decay of rate 0.94 every 2 epochs

• On JFT:
  – Optimizer: RMSprop [22]
  – Momentum: 0.9
  – Initial learning rate: 0.001
  – Learning rate decay: decay of rate 0.9 every 3,000,000 samples

For both datasets, the exact same optimization configuration was used for both Xception and Inception V3. Note that this configuration was tuned for best performance with Inception V3; we did not attempt to tune optimization hyperparameters for Xception. Since the networks have different training profiles (figure 6), this may be suboptimal, especially on the ImageNet dataset, on which the optimization configuration used had been carefully tuned for Inception V3. Additionally, all models were evaluated using Polyak averaging [13] at inference time.
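For reference, the two configurations can be written with tf.keras optimizers roughly as below; this is a hedged sketch, since the paper does not give batch sizes or the exact decay implementation, so `steps_per_epoch`, `batch_size` and the staircase exponential schedules are assumptions.

```python
import tensorflow as tf

steps_per_epoch = 10_000   # assumed; depends on batch size and dataset size
batch_size = 256           # assumed

# ImageNet: SGD, momentum 0.9, lr 0.045 decayed by 0.94 every 2 epochs.
imagenet_lr = tf.keras.optimizers.schedules.ExponentialDecay(
    0.045, decay_steps=2 * steps_per_epoch, decay_rate=0.94, staircase=True)
imagenet_opt = tf.keras.optimizers.SGD(learning_rate=imagenet_lr, momentum=0.9)

# JFT: RMSprop, momentum 0.9, lr 0.001 decayed by 0.9 every 3,000,000 samples.
jft_lr = tf.keras.optimizers.schedules.ExponentialDecay(
    0.001, decay_steps=3_000_000 // batch_size, decay_rate=0.9, staircase=True)
jft_opt = tf.keras.optimizers.RMSprop(learning_rate=jft_lr, momentum=0.9)
```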
# 4.3. Regularization configuration

• Weight decay: The Inception V3 model uses a weight decay (L2 regularization) rate of 4e-5, which has been carefully tuned for performance on ImageNet. We found this rate to be quite suboptimal for Xception and instead settled for 1e-5. We did not perform an extensive search for the optimal weight decay rate. The same weight decay rates were used for both the ImageNet and the JFT experiments.

• Dropout: For the ImageNet experiments, both models include a dropout layer of rate 0.5 before the logistic regression layer. For the JFT experiments, no dropout was included due to the large size of the dataset, which made overfitting unlikely in any reasonable amount of time.

• Auxiliary loss tower: The Inception V3 architecture may optionally include an auxiliary tower which backpropagates the classification loss earlier in the network, serving as an additional regularization mechanism. For simplicity, we choose not to include this auxiliary tower in any of our models.

# 4.4. Training infrastructure

All networks were implemented using the TensorFlow framework [1] and trained on 60 NVIDIA K80 GPUs each. For the ImageNet experiments, we used data parallelism with synchronous gradient descent to achieve the best classification performance, while for JFT we used asynchronous gradient descent so as to speed up training. The ImageNet experiments took approximately 3 days each, while the JFT experiments took over one month each. The JFT models were not trained to full convergence, which would have taken over three months per experiment.
Figure 5. The Xception architecture: the data first goes through the entry flow, then through the middle flow, which is repeated eight times, and finally through the exit flow. Note that all Convolution and SeparableConvolution layers are followed by batch normalization [7] (not included in the diagram). All SeparableConvolution layers use a depth multiplier of 1 (no depth expansion). [Diagram. Entry flow: 299x299x3 images; Conv 32, 3x3, stride 2x2, ReLU; Conv 64, 3x3, ReLU; then three residual blocks of SeparableConv 128, 256 and 728 (3x3) with ReLUs and MaxPooling 3x3, stride 2x2, each bypassed by a Conv 1x1, stride 2x2 shortcut, yielding 19x19x728 feature maps. Middle flow: on 19x19x728 feature maps, modules of three (ReLU, SeparableConv 728, 3x3), repeated 8 times. Exit flow: ReLU, SeparableConv 728, 3x3; ReLU, SeparableConv 1024, 3x3; MaxPooling 3x3, stride 2x2 with a Conv 1x1, stride 2x2 shortcut; SeparableConv 1536, 3x3, ReLU; SeparableConv 2048, 3x3, ReLU; GlobalAveragePooling producing 2048-dimensional vectors; optional fully-connected layer(s); logistic regression.]
# 4.5. Comparison with Inception V3

# 4.5.1. Classification performance

All evaluations were run with a single crop of the input images and a single model. ImageNet results are reported on the validation set rather than the test set (i.e. on the non-blacklisted images from the validation set of ILSVRC 2012). JFT results are reported after 30 million iterations (one month of training) rather than after full convergence. Results are provided in table 1 and table 2, as well as figures 6, 7 and 8.
On JFT, we tested two versions of our networks: ones that did not include any fully-connected layers, and ones that included two fully-connected layers of 4096 units each before the logistic regression layer.

On ImageNet, Xception shows marginally better results than Inception V3. On JFT, Xception shows a 4.3% relative improvement on the FastEval14k MAP@100 metric. We also note that Xception outperforms the ImageNet results reported by He et al. for ResNet-50, ResNet-101 and ResNet-152 [4].
Table 1. Classification performance comparison on ImageNet (single crop, single model). VGG-16 and ResNet-152 numbers are only included as a reminder. The version of Inception V3 being benchmarked does not include the auxiliary tower.

| Model | Top-1 accuracy | Top-5 accuracy |
| --- | --- | --- |
| VGG-16 | 0.715 | 0.901 |
| ResNet-152 | 0.770 | 0.933 |
| Inception V3 | 0.782 | 0.941 |
| Xception | 0.790 | 0.945 |

The Xception architecture shows a much larger performance improvement on the JFT dataset compared to the ImageNet dataset. We believe this may be due to the fact that Inception V3 was developed with a focus on ImageNet and may thus be by design over-fit to this specific task.
On the other hand, neither architecture was tuned for JFT. It is likely that a search for better hyperparameters for Xception on ImageNet (in particular optimization and regularization parameters) would yield significant additional improvement.

Table 2. Classification performance comparison on JFT (single crop, single model).

| Model | FastEval14k MAP@100 |
| --- | --- |
| Inception V3 - no FC layers | 6.36 |
| Xception - no FC layers | 6.70 |
| Inception V3 with FC layers | 6.50 |
| Xception with FC layers | 6.78 |

Figure 6. Training profile on ImageNet. [Plot: ImageNet validation accuracy versus gradient descent steps, for Xception and Inception V3.]
Figure 7. Training profile on JFT, without fully-connected layers. [Plot: FastEval14k MAP@100 (no FC layers) versus gradient descent steps, for Xception and Inception V3.]

# 4.5.2. Size and speed

Table 3. Size and training speed comparison.

| | Inception V3 | Xception |
| --- | --- | --- |
| Parameter count | 23,626,728 | 22,855,952 |
| Steps/second | 31 | 28 |
Figure 8. Training profile on JFT, with fully-connected layers. [Plot: FastEval14k MAP@100 (with FC layers) versus gradient descent steps, for Xception and Inception V3.]

In table 3 we compare the size and speed of Inception V3 and Xception. Parameter count is reported on ImageNet (1000 classes, no fully-connected layers) and the number of training steps (gradient updates) per second is reported on ImageNet with 60 K80 GPUs running synchronous gradient descent. Both architectures have approximately the same size (within 3.5%), and Xception is marginally slower. We expect that engineering optimizations at the level of the depthwise convolution operations can make Xception faster than Inception V3 in the near future. The fact that both architectures have almost the same number of parameters indicates that the improvement seen on ImageNet and JFT does not come from added capacity but rather from a more efficient use of the model parameters.

# 4.6. Effect of the residual connections

Figure 9. Training profile with and without residual connections. [Plot: ImageNet validation accuracy versus gradient descent steps, for Xception and a non-residual Xception variant.]

To quantify the benefits of residual connections in the Xception architecture, we benchmarked on ImageNet a modified version of Xception that does not include any residual connections. Results are shown in figure 9. Residual connections are clearly essential in helping with convergence, both in terms of speed and final classification performance. However, we note that benchmarking the non-residual model with the same optimization configuration as the residual model may be uncharitable, and that better optimization configurations might yield more competitive results.

Additionally, let us note that this result merely shows the importance of residual connections for this specific architecture, and that residual connections are in no way required in order to build models that are stacks of depthwise separable convolutions.
We also obtained excellent results with non-residual VGG-style models where all convolution layers were replaced with depthwise separable convolutions (with a depth multiplier of 1), superior to Inception V3 on JFT at an equal parameter count.

# 4.7. Effect of an intermediate activation after pointwise convolutions

Figure 10. Training profile with different activations between the depthwise and pointwise operations of the separable convolution layers. [Plot: ImageNet validation accuracy versus gradient descent steps, for no intermediate activation, intermediate ELU, and intermediate ReLU.]

We mentioned earlier that the analogy between depthwise separable convolutions and Inception modules suggests that depthwise separable convolutions should potentially include a non-linearity between the depthwise and pointwise operations. In the experiments reported so far, no such non-linearity was included. However, we also experimentally tested the inclusion of either ReLU or ELU [3] as an intermediate non-linearity. Results are reported on ImageNet in figure 10, and show that the absence of any non-linearity leads to both faster convergence and better final performance.
This is a remarkable observation, since Szegedy et al. report the opposite result in [21] for Inception modules. It may be that the depth of the intermediate feature spaces on which spatial convolutions are applied is critical to the usefulness of the non-linearity: for deep feature spaces (e.g. those found in Inception modules) the non-linearity is helpful, but for shallow ones (e.g. the 1-channel deep feature spaces of depthwise separable convolutions) it becomes harmful, possibly due to a loss of information.
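The variants compared in figure 10 can be expressed by splitting the separable convolution into its two halves, as in the sketch below (an illustration, not the paper's code; the helper name and `use_bias=False` are assumptions), where `activation=None` corresponds to the default Xception setting:

```python
from tensorflow.keras import layers

def separable_conv_with_intermediate(x, filters, activation=None):
    x = layers.DepthwiseConv2D(3, padding="same", use_bias=False)(x)
    if activation is not None:            # "relu" or "elu" for the figure 10 variants
        x = layers.Activation(activation)(x)
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    return x
```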
# 5. Future directions

We noted earlier the existence of a discrete spectrum between regular convolutions and depthwise separable convolutions, parametrized by the number of independent channel-space segments used for performing spatial convolutions. Inception modules are one point on this spectrum. We showed in our empirical evaluation that the extreme formulation of an Inception module, the depthwise separable convolution, may have advantages over a regular Inception module. However, there is no reason to believe that depthwise separable convolutions are optimal. It may be that intermediate points on the spectrum, lying between regular Inception modules and depthwise separable convolutions, hold further advantages. This question is left for future investigation.

# 6. Conclusions

We showed how convolutions and depthwise separable convolutions lie at both extremes of a discrete spectrum, with Inception modules being an intermediate point in between. This observation has led us to propose replacing Inception modules with depthwise separable convolutions in neural computer vision architectures. We presented a novel architecture based on this idea, named Xception, which has a similar parameter count as Inception V3. Compared to Inception V3, Xception shows small gains in classification performance on the ImageNet dataset and large gains on the JFT dataset. We expect depthwise separable convolutions to become a cornerstone of convolutional neural network architecture design in the future, since they offer similar properties as Inception modules, yet are as easy to use as regular convolution layers.
# References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] F. Chollet. Keras. https://github.com/fchollet/keras, 2015.
[3] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
[4] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[5] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network, 2015.
[6] A. Howard. MobileNets: Efficient convolutional neural networks for mobile vision applications. Forthcoming.
[7] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pages 448–456, 2015.
[8] J. Jin, A. Dundar, and E. Culurciello. Flattened convolutional neural networks for feedforward acceleration. arXiv preprint arXiv:1412.5474, 2014.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[10] Y. LeCun, L. Jackel, L. Bottou, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard, et al. Learning algorithms for classification: A comparison on handwritten digit recognition. Neural Networks: The Statistical Mechanics Perspective, 261:276, 1995.
[11] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[12] F. Mamalet and C. Garcia. Simplifying ConvNets for fast learning. In International Conference on Artificial Neural Networks (ICANN 2012), pages 58–65. Springer, 2012.
[13] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM J. Control Optim., 30(4):838–855, July 1992.
[14] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. 2014.
[15] L. Sifre. Rigid-motion scattering for image classification. Ph.D. thesis, 2014.
[16] L. Sifre and S. Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, June 23-28, 2013, pages 1233–1240, 2013.
[17] N. Silberman and S. Guadarrama. TF-Slim, 2016.
[18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.
[20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[21] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[22] T. Tieleman and G. Hinton. Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015-11-05.
[23] V. Vanhoucke. Learning visual representations at scale. ICLR, 2014.
[24] M. Wang, B. Liu, and H. Foroosh. Factorized convolutional neural networks. arXiv preprint arXiv:1608.04337, 2016.
[25] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In Computer Vision – ECCV 2014, pages 818–833. Springer, 2014.
Understanding intermediate layers using linear classifier probes
Guillaume Alain (Mila, University of Montreal, [email protected]) and Yoshua Bengio (Mila, University of Montreal)

# Abstract

Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification.
We use linear classifiers, which we refer to as "probes", trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. We apply this technique to the popular models Inception v3 and ResNet-50. Among other things, we observe experimentally that the linear separability of features increases monotonically along the depth of the model.
# 1 Introduction

The recent history of deep neural networks features an impressive number of new methods and technological improvements to allow the training of deeper and more powerful networks. Deep neural networks still carry some of their original reputation of being black boxes, but many efforts have been made to understand better what they do, what the role of each layer is (Yosinski et al., 2014), how we can interpret them (Zeiler and Fergus, 2014) and how we can fool them (Biggio et al., 2013; Szegedy et al., 2013).

In this paper, we take the features of each layer separately and we fit a linear classifier to predict the original classes. We refer to these linear classifiers as "probes" and we make sure that we never influence the model itself by taking measurements with probes. We suggest that the reader think of those probes as thermometers used to measure the temperature simultaneously at many different locations. More broadly speaking, the core of the idea is that there are interesting quantities that we can report based on the features of many independent layers if we allow the "measuring instruments" to have their own trainable parameters (provided that they do not influence the model itself).

In the context of this paper, we are working with convolutional neural networks on image classification tasks on the MNIST and ImageNet (Russakovsky et al., 2015) datasets. Naturally, we fit linear classifier probes to predict those classes, but in general it is possible to monitor the performance of the features on any other objective.
Our contributions in this paper are twofold. Firstly, we introduce these "probes" as a general tool to understand deep neural networks. We show how they can be used to characterize different layers, to debug bad models, or to get a sense of how the training is progressing in a well-behaved model. While our proposed idea shares commonalities with Montavon et al. (2011), our analysis is very different. Secondly, we observe that the measurements of the probes are surprisingly monotonic, which means that the degree of linear separability of the features increases as we reach the deeper layers. The level of regularity with which this happens is surprising given that this is not technically part of the training objective. This helps to understand the dynamics of deep neural networks.

# 2 Related Work

Many researchers have come up with techniques to analyze certain aspects of neural networks which may guide our intuition and provide a partial explanation as to how they work. In this section we provide a survey of the literature on the subject, with a little more focus on papers related to our current work.
# 2.1 Linear classification with kernel PCA

In our paper we investigate the linear separability of the features found at intermediate layers of a deep neural network. A similar starting point is presented by Montavon et al. (2011). In that particular case, the authors use kernel PCA to project the features of a given layer onto a new representation which will then be used to fit the best linear classifier. They use a radial basis function as kernel, and they choose to project the features of individual layers by using the d leading eigenvectors of the kernel PCA decomposition. They investigate the effects that d has on the quality of the linear classifier.
Naturally, for a sufficiently large d, it would be possible to overfit on the training set (given how easy this is with a radial basis function), so they consider the situation where d is relatively small. They demonstrate that, for deeper layers in a neural network, they can achieve good performance with smaller d. This suggests that the features of the original convolution neural network are indeed more "abstract" as we go deeper, which corresponds to the general intuition shared by many researchers. They explore convolution networks of limited depth with a restricted subset of 10k training samples of MNIST and CIFAR-10.

# 2.2 Generalization and transferability of layers
There are good arguments to support the claim that the first layers of a convolution network for image recognition contain filters that are relatively "general", in the sense that they would work great even if we switched to an entirely different dataset of images. The last layers are specific to the dataset being used, and have to be retrained when using a different dataset. In Yosinski et al. (2014) the authors try to pinpoint the layer at which this transition occurs, but they show that the exact transition is spread across multiple layers. In Donahue et al. (2014) the authors study the transfer of features from the last few layers of a model to a novel generic task. In Zeiler and Fergus (2014) the authors show that the filters are picking up certain patterns that make sense to us visually, and they show a method to visually inspect the filters as input images.
# 2.3 Relevance Propagation

In Bach et al. (2015), the authors introduce the idea of Relevance Propagation as a way to identify which pixels of the input space are the most important to the classifier on the final layer. Their approach frames the "relevance" as a kind of quantity that is to be preserved across the layers, as a sort of shared responsibility to be divided among the features of a given layer. In Binder et al. (2016) the authors apply the concept of Relevance Propagation to a larger family of models. Among other things, they provide a nice experiment where they study the effects of corrupting the pixels deemed the most relevant, and they show how this affects performance more than corrupting randomly-selected pixels (see Figure 2 of their paper). See also Lapuschkin et al. (2016). Other research dealing with Relevance Propagation includes Arras et al. (2017), where this is applied to RNNs on text.
We would also note that a good number of papers on the interpretability of neural networks deal with "interpretations" taking the form of regions of the original image being identified, or where the pixels in the original image receive a certain value of how relevant they are (e.g. a heat map of relevance). In those cases we rely on the human user to parse the regions of the image with their vision so as to determine whether the region indeed makes sense or whether the information contained within is irrelevant to the task at hand. This is analogous to the way that image-captioning attention (Xu et al., 2015) can highlight portions of the input image that inspired specific segments of the caption.

An interesting approach is presented in Mahendran and Vedaldi (2015, 2016) and Dosovitskiy and Brox (2016), where the authors analyze the set of "equivalent" inputs in the sense that some of the features at a given layer should be preserved. Given a layer to study, they apply a regularizer (e.g. total variation) and use gradient descent in order to reconstruct the pre-image that yields the same features at that layer, but for which the regularizer would be minimized. This procedure yields pre-images that are of the same format as the input image, and which can be used to get a sense of which components of the original image are preserved. For certain tasks, one may be surprised as to how many details of the input image are being completely discarded by the time we reach the fully-connected layers at the end of a convolution neural network.
# 2.4 SVCCA

In Raghu et al. (2017a,b) the authors study the question of whether neural networks are trained from the first to the last layer, or the other way around (i.e. "bottom up" vs "top down"). The concept is rather intuitive, but it still requires a proper definition of what they mean. They use Canonical Correlation Analysis (CCA) to compare two instances of a given model trained separately. Given that two different instances of the same model might assign entirely different roles to their neurons (on corresponding layers), this is a comparison that is normally impossible to even attempt. On one side, they take a model that has already been optimized. On the other side, they take multiple snapshots of a model during training. Every layer of one model is compared with every layer of the other. The values computed by CCA allow them to report the correlation between every pair of layers. This shows how quickly a given layer of the model being trained reaches a configuration equivalent to that of the optimized model. They find that the early layers reach their final configuration, so to speak, much earlier than layers downstream.

Given that any two sets of features can be compared using CCA, they also compare the correlation between any intermediate layer and the ground truth. This gives a sense of how easy it would be to predict the target label using the features of any intermediate layer instead of only using the last layer (as convnets usually do). Refer to Figure 6 of Raghu et al. (2017b) for more details. This aspect of Raghu et al. (2017b) is very similar to our own previous work (Alain and Bengio, 2016).

# 3 Monitoring with probes

# 3.1 Information theory, and monotonic improvements to linear separability

The initial motivation for linear classifier probes was related to a reflection about the nature of information (in the entropy sense of the word) passing from one layer to the next. New information is never added as we propagate forward in a model. If we consider the typical image classification problem, the representation of the data is transformed over the course of many layers, to be finally used by a linear classifier at the last layer.
In the case of a binary classifier (say, detecting the presence or absence of a lion in a picture of the savannah like in Figure 1), we could say that there was at most one bit of information to be uncovered in the original image. Lion or no lion? Here we are not interested in measuring the information about the pixels of an image that we want to reconstruct. That would be a different problem.

This is illustrated in a formal way by the Data Processing Inequality. It states that, for a set of three random variables satisfying the dependency X → Y → Z, we have

    I(X; Z) ≤ I(X; Y),
where I(X; Y) is the mutual information.
Figure 1: (a) hex dump of a picture of a lion; (b) the same lion in human-readable format. The hex dump represented at the left has more information content than the image at the right. Only one of them can be processed by the human brain in time to save their lives. Computational convenience matters. Not just entropy.

The task of a deep neural network classifier is to come up with a representation for the final layer that can be easily fed to a linear classifier (i.e. the most elementary form of useful classifier).
The cross-entropy loss applies a lot of pressure directly on the last layer to make it linearly separable. Any degree of linear separability in the intermediate layers happens only as a by-product. On one hand, we have that every layer has less information than its parent layer. On the other hand, we observe experimentally in Sections 3.5, 4.1 and 4.2 that features from deeper layers work better with linear classifiers to predict the target labels.
At first glance this might seem like a contradiction. One of the important lessons is that neural networks are really about distilling computationally useful representations, and they are not about information content as described by the field of Information Theory.

# 3.2 Linear classifier probes

Consider the common scenario in deep learning in which we are trying to classify the input data X to produce an output distribution over D classes. The last layer of the model is a densely-connected map to D values followed by a softmax, and we train by minimizing cross-entropy.

At every layer we can take the features h_k from that layer and try to predict the correct labels y using a linear classifier parameterized as

    f_k : H_k → [0, 1]^D
    h_k ↦ softmax(W h_k + b),

where h_k ∈ H_k are the features of hidden layer k, [0, 1]^D is the space of categorical distributions over the D target classes, and (W, b) are the probe weights and biases to be learned so as to minimize the usual cross-entropy loss.

Let L_k^train be the empirical loss of that linear classifier f_k evaluated over the training set. We can also define L_k^valid and L_k^test by exporting the same linear classifier to the validation and test sets.

Without making any assumptions about the model itself being trained, we can nevertheless assume that these f_k are themselves optimized so that, at any given time, they reflect the currently optimal thing that can be done with the features present.
We refer to those linear classifiers as "probes" in an effort to clarify our thinking about the model. These probes do not affect the model training. They only measure the level of linear separability of the features at a given layer.

Blocking the backpropagation from the probes to the model itself can be achieved by using tf.stop_gradient in TensorFlow (or its Theano equivalent), or by managing the probe parameters separately from the model parameters. Note that we can avoid the issue of local minima because training a linear classifier using softmax cross-entropy is a convex problem.
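A minimal sketch of this setup in TensorFlow is shown below; it is not the authors' code, and the flattening, the integer labels and the function name are assumptions, but it illustrates how tf.stop_gradient keeps the probe's gradient from reaching the model.

```python
import tensorflow as tf

def probe_loss(features, labels, probe_weights, probe_bias):
    h = tf.stop_gradient(features)                       # the model never sees the probe's gradient
    h = tf.reshape(h, [tf.shape(h)[0], -1])              # flatten the layer's features
    logits = tf.matmul(h, probe_weights) + probe_bias    # linear map, i.e. softmax(W h + b) via logits
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
```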
In this paper, we study

• how L_k decreases as k increases (see Section 3.1),
• the usefulness of L_k as a diagnostic tool (see Section 5.1).

# 3.3 Practical concern: L_k^train vs L_k^valid

The reason why we care about optimality of the probes in Section 3.2 is because it abstracts away the problem of optimizing them. When a general function g(x) has a unique global minimum, we can talk about that minimum without ambiguity even though, in practice, we are probably going to use only a convenient approximation of the minimum. This is acceptable in a context where we are seeking better intuition about deep learning models by using linear classifier probes.
If a researcher judges that the measurements are useful to further their understanding of their model (and act on that intuition), then they should not worry too much about how close they are to optimality. This applies also to the question of whether we should prioritize L_k^train or L_k^valid, given that L_k^valid might not be easy to track.

Moreover, for the purposes of many of the experiments in this paper we chose to report the classification error instead of the cross-entropy, since this is ultimately often the quantity that matters the most. Reporting the top-5 classification error could also have been possible.

# 3.4 Practical concern: Dimension reduction on features
Another practical problem can arise when certain layers of a neural network have an exceedingly large quantity of features. The first few layers of Inception v3, for example, have a few million features when we multiply height, width and channels. This leads to parameters for a single probe taking upwards of a few gigabytes of storage, which is disproportionately large when we consider that the entire set of model parameters takes less space than that. In those cases, we have three possible suggestions for trimming down the space of features on which we fit the probes.
• Use only a random subset of the features (but always the same ones). This is used on the Inception v3 model in Section 4.2.

• Project the features to a lower-dimensional space, and learn this mapping. This is probably a worse idea than it sounds, because the projection matrix itself can take a lot of storage (even more than the probe parameters).

• When dealing with features in the form of images (height, width, channels), we can perform 2D pooling along the (height, width) of each channel. This reduces the number of features to the number of channels. This is used on the ResNet-50 model in Section 4.1 (a short sketch follows below).

In practice, when using linear classifier probes on any serious model (i.e. not MNIST) we have to choose a way to reduce the number of features used. Note that we also want to avoid a situation where our probes are simply overfitting on the features because there are too many features. It was recently demonstrated that very large models can fit random labels on ImageNet (Zhang et al., 2016).
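The third suggestion is the cheapest to implement; a sketch (with an assumed helper name) is:

```python
import tensorflow as tf

def pooled_probe_features(feature_map):
    # feature_map has shape [batch, height, width, channels]; 2D average pooling
    # over (height, width) leaves one value per channel for the probe to use.
    pooled = tf.reduce_mean(feature_map, axis=[1, 2])
    return tf.stop_gradient(pooled)   # keep the probe from influencing the model
```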
This is a situation that we want to avoid, because the probe measurements would be entirely meaningless in that situation. Dimensionality reduction helps with this concern.

# 3.5 Basic example on MNIST

In this section we run the MNIST convolutional model provided by the tensorflow/models github repository (image/mnist/convolutional.py). We selected that model for reproducibility and to demonstrate how to easily peek into popular models by using probes. We start by sketching the model in Figure 2. We report the results at the beginning and the end of training in Figure 3.
One of the interesting dynamics to be observed there is how useful the first 5 layers are, despite the fact that the model is completely untrained. Random projections can be useful to classify data, and this has been studied by others (Jarrett et al., 2009).

Figure 2: This graphical model represents the neural network that we are going to use for MNIST. [Diagram: input images, conv 5x5 with 32 filters, ReLU, maxpool 2x2, conv 5x5 with 64 filters, ReLU, maxpool 2x2, fully-connected layer (matmul + ReLU), fully-connected layer (matmul), output logits.] The model could be written in a more compact form, but we represent it this way to expose all the locations where we are going to insert probes. The model itself is simply two convolutional layers followed by two fully-connected layers (one being the final classifier).
However, we insert probes on each side of each convolution, activation function, and pooling function. This is a bit overzealous, but the small size of the model makes this relatively easy to do.

Figure 3: Test prediction error for each probe, (a) after initialization with no training, and (b) after training for 10 epochs. This measurement was obtained through early stopping based on a validation set of 10^4 elements. The probes are prevented from overfitting the training data. We can see that, at the beginning of training (on the left), the randomly-initialized layers were still providing useful transformations. The test prediction error goes from 8% to 2% simply using those random features.
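For readers who want to reproduce this setup, a tf.keras sketch of the Figure 2 model is given below; it is not the original tensorflow/models script, and the 512-unit fully-connected width and the layer names (chosen to echo labels such as fc1_preact) are assumptions.

```python
from tensorflow.keras import layers, models

def mnist_model():
    images = layers.Input(shape=(28, 28, 1), name="images")
    x = layers.Conv2D(32, 5, padding="same", name="conv1_preact")(images)
    x = layers.ReLU(name="conv1_postact")(x)
    x = layers.MaxPooling2D(2, name="conv1_postpool")(x)
    x = layers.Conv2D(64, 5, padding="same", name="conv2_preact")(x)
    x = layers.ReLU(name="conv2_postact")(x)
    x = layers.MaxPooling2D(2, name="conv2_postpool")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, name="fc1_preact")(x)
    x = layers.ReLU(name="fc1_postact")(x)
    logits = layers.Dense(10, name="logits")(x)
    return models.Model(images, logits)

# A probe for any named location can then be fit on the outputs of
# models.Model(m.input, m.get_layer("conv1_postact").output).
```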
The biggest impact comes from the first ReLU. At the end of training (on the right), the test prediction error is improving at every layer (with the exception of a minor kink on fc1_preact).

# 3.6 Other objectives

Note that it would be entirely possible to use linear classifier probes on a different set of labels. For the same reason as it is possible to transfer many layers from one vision task to another (e.g. with different classes), we are not limited to fitting probes using the same domain. Inserting probes at many different layers of a model is essentially a way to ask the following question: is there any information about a given factor present in this part of the model?

# 4 Experiments with popular models

# 4.1 ResNet-50

The family of ResNet models (He et al., 2016) are characterized by their large quantities of residual layers, mapping essentially x ↦ x + r(x).
1610.01644#22
1610.01644#24
1610.01644
[ "1706.05806" ]
1610.01644#24
Understanding intermediate layers using linear classifier probes
papers seeking to understand better how they work (Veit et al., 2016; Larsson et al., 2016; Singh et al., 2016). Here we are going to show how linear classifier probes might be able to help us a little to shed some light on the ResNet-50 model. We used the pretrained model from the github repo (fchollet/deep-learning-models) of the author of Keras (Chollet et al., 2015). One of the questions that comes up when discussing ResNet models is whether the successive layers are essentially performing the same operation many times over, refining the representation just a little more each time, or whether there is a more fundamental change of representation happening. In particular, we can point to certain places in ResNet-50 where the image size diminishes and the number of channels increases. This happens at three places in the model (identified with blank lines in Table 4a).
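As a rough sketch of how such measurements can be reproduced, one can cut the pretrained Keras ResNet-50 at an intermediate merge layer, extract features for a labelled set, and fit an ordinary linear classifier on them. The code below is our own pipeline, not the authors'; the layer name "add_3" and the use of scikit-learn are assumptions (layer names differ across Keras versions, so inspect base.summary() to pick a real one).

```python
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input
from keras.models import Model
from sklearn.linear_model import LogisticRegression

base = ResNet50(weights="imagenet")

# Hypothetical probe location: one of the residual merge ("add") layers.
probe_point = Model(inputs=base.input, outputs=base.get_layer("add_3").output)

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3), RGB order."""
    feats = probe_point.predict(preprocess_input(images.copy()))
    return feats.reshape(len(feats), -1)   # flatten spatial dimensions for the probe

# X_train, y_train, X_valid, y_valid are assumed to be supplied by the user:
# probe = LogisticRegression(max_iter=200).fit(extract_features(X_train), y_train)
# valid_error = 1.0 - probe.score(extract_features(X_valid), y_valid)
```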
1610.01644#23
1610.01644#25
1610.01644
[ "1706.05806" ]
1610.01644#25
Understanding intermediate layers using linear classifier probes
layer name   topology          probe valid prediction error
input 1      (224, 224, 3)     0.99

add 1        (28, 28, 256)     0.94
add 2        (28, 28, 256)     0.89
add 3        (28, 28, 256)     0.88

add 4        (28, 28, 512)     0.87
add 5        (28, 28, 512)     0.82
add 6        (28, 28, 512)     0.79
add 7        (28, 28, 512)     0.76

add 8        (14, 14, 1024)    0.77
add 9        (14, 14, 1024)    0.69
add 10       (14, 14, 1024)    0.67
add 11       (14, 14, 1024)    0.62
add 12       (14, 14, 1024)    0.57
add 13       (14, 14, 1024)    0.51

add 14       (7, 7, 2048)      0.41
add 15       (7, 7, 2048)      0.39
add 16       (7, 7, 2048)      0.31

[Figure 4 plot omitted: bar chart of validation prediction error per probe, with the validation error of the model top layer shown for comparison.]

(a) Validation errors for probes, comparing different ResNet-50 layers. Pre-trained on the ImageNet dataset. (b) Inserting probes at meaningful layers of ResNet-50.
1610.01644#24
1610.01644#26
1610.01644
[ "1706.05806" ]
1610.01644#26
Understanding intermediate layers using linear classifier probes
This plot shows the rightmost column of the table in Figure 4a. It reports the validation error for the probes (magenta) and compares it with the validation error of the pre-trained model (green). Figure 4: For the ResNet-50 model trained on ImageNet, we can see that deeper features are better at predicting the output classes. More importantly, the relationship between depth and validation prediction error is almost perfectly monotonic. This suggests a certain "greedy" aspect of the representations used in deep neural networks. This property is something that comes naturally as a result of conventional training, and it is not due to the insertion of probes in the model. # 4.2 Inception v3 We have performed an experiment using the Inception v3 model on the ImageNet dataset (Szegedy et al., 2015; Russakovsky et al., 2015). We show using colors in Figure 5 how the predictive error of each layer can be measured using probes. This can be computed at many different times during training, but here we report only the result after minibatch 308230, which corresponds to about 2 weeks of training.
1610.01644#25
1610.01644#27
1610.01644
[ "1706.05806" ]
1610.01644#27
Understanding intermediate layers using linear classifier probes
This model has a few particularities, one of which is that it features an auxiliary branch that contributes to training the model (it can be discarded afterwards, but not necessarily). We wanted to investigate whether this branch is "leading training", in the sense that its classifier might have a lower prediction error than the main head for the first part of the training. This is something that we confirmed by looking at the prediction errors for the probes, but the difference was not very large. The auxiliary branch was ahead of the main branch by just a little. The smooth gradient of colors in Figure 5 shows how the linear separability increases monotonically as we probe layers deeper into the network. Refer to Appendix Section C for a comparison at four different moments of training, and for some more details about how we reduced the dimensionality of the features to make this more tractable. [Figure 5 plot omitted: color-coded probe training error (0.0 to 1.0) across the Inception v3 graph at minibatch 308230, with the main head and the auxiliary head labelled.] Figure 5: Inception v3 model after 2 weeks of training.
1610.01644#26
1610.01644#28
1610.01644
[ "1706.05806" ]
1610.01644#28
Understanding intermediate layers using linear classifier probes
Red is bad (high prediction error) and green/blue is good (low prediction error). The smooth color gradient shows a very gradual transition in the degree of linear separability (almost perfectly monotonic). # 5 Diagnostics for failing models # 5.1 Pathological behavior on skip connections In this section we show an example of a situation where we can use probes to diagnose a training problem as it is happening. We purposefully selected a model that was pathologically deep so that it would fail to train under normal circumstances. We used 128 fully-connected layers of 128 hidden units to classify MNIST, which is not at all a model that we would recommend. We thought that something interesting might happen if we added a very long skip connection that bypasses the first half of the model completely (Figure 6a). With that skip connection, the model became trainable through the usual SGD. Intuitively, we thought that the latter portion of the model would see use at first, but then we did not know whether the first half of the model would then also become useful. Using probes we show that this solution was not working as intended, because half of the model stays unused. The weights are not zero, but there is no useful signal passing through that segment. The skip connection left a dead segment and skipped over it. The lesson that we want to show the reader is not that skip connections are bad.
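For readers who want to reproduce this kind of diagnosis, the sketch below builds a deliberately unreasonable stack of fully-connected layers with one long skip connection from the input to the middle of the stack. It is written in PyTorch rather than the authors' original code; the depth and width follow the description in the text, and everything else (including the learned projection used for the skip) is our own choice. Probes would then be attached to detached copies of each hidden layer, as in the earlier sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PathologicalSkipNet(nn.Module):
    """128 fully-connected layers of 128 units, with a skip connection
    that jumps from the input straight to the middle of the stack."""
    def __init__(self, depth=128, width=128, n_in=784, n_out=10):
        super().__init__()
        self.inp = nn.Linear(n_in, width)
        self.layers = nn.ModuleList([nn.Linear(width, width) for _ in range(depth)])
        self.skip = nn.Linear(n_in, width)   # long skip: input -> middle of the stack
        self.out = nn.Linear(width, n_out)
        self.middle = depth // 2

    def forward(self, x):
        h = F.relu(self.inp(x))
        hidden = []
        for i, layer in enumerate(self.layers):
            if i == self.middle:
                # The skip connection merges here, bypassing the first half.
                h = h + self.skip(x)
            h = F.relu(layer(h))
            hidden.append(h)   # probes would read detached copies of these
        return self.out(h), hidden

model = PathologicalSkipNet()
logits, hidden = model(torch.randn(4, 784))
print(logits.shape, len(hidden))   # torch.Size([4, 10]) 128
```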
1610.01644#27
1610.01644#29
1610.01644
[ "1706.05806" ]
1610.01644#29
Understanding intermediate layers using linear classifier probes
Our goal here is to show that linear classification probes are a tool to understand what is happening internally in such situations. Sometimes the successful minimization of a loss fails to capture important details. # 6 Discussion and future work We have presented results for both a small convnet on MNIST and the larger popular convnets Inception v3 and ResNet-50. It would be nice to continue this work and look at ResNet-101, ResNet-151, VGG-16 and VGG-19. A similar thing could also be done with popular RNNs.
1610.01644#28
1610.01644#30
1610.01644
[ "1706.05806" ]
1610.01644#30
Understanding intermediate layers using linear classifier probes
To apply linear classifier probes in a different context, we could also try any setting where either Generative Adversarial Networks (Goodfellow et al., 2014) or adversarial examples (Szegedy et al., 2013) are used. [Figure 6 panels omitted.] (a) Model with 128 layers. A skip connection goes from the beginning straight to the middle of the graph. (b) Probes after 500 minibatches. (c) Probes after 2000 minibatches. Figure 6: Pathological skip connection being diagnosed. Refer to Appendix Section A for explanations about the special notation for probes using the "diode" symbol.
1610.01644#29
1610.01644#31
1610.01644
[ "1706.05806" ]
1610.01644#31
Understanding intermediate layers using linear classifier probes
The idea of multi-layer probes has been suggested to us on multiple occasions. This could be seen as a natural extension of the linear classifier probes. One downside to this idea is that we lose the convexity property of the probes. It might be worth pursuing in a particular setting, but as of now we feel that it is premature to start using multi-layer probes. This also leads to the convoluted idea of having a regular probe inside a multi-layer probe. One completely new direction would be to train a model in a way that actively discourages certain internal layers from being useful to linear classifi
1610.01644#30
1610.01644#32
1610.01644
[ "1706.05806" ]
1610.01644#32
Understanding intermediate layers using linear classifier probes
ers. What would be the consequences of this constraint? Would it handicap a given model or would the model simply adjust without any trouble? At that point, we are no longer dealing with non-invasive probes, but we are feeding a strange kind of signal back to the model. Finally, we think that it is rather interesting that the probe prediction errors are almost perfectly monotonically decreasing. We suspect that this warrants a deeper investigation into the reasons why that happens, and it may lead to the discovery of fundamental concepts for better understanding deep neural networks (in relation to their optimization). This is connected to the work done by Jastrzebski et al. (2017). # 7 Conclusion In this paper we introduced the concept of the linear classifier probe as a conceptual tool to better understand the dynamics inside a neural network and the role played by the individual intermediate layers. We have observed experimentally that an interesting property holds: the level of linear separability increases monotonically as we go to deeper layers. This is purely an indirect consequence of enforcing this constraint on the last layer. We have demonstrated how these probes can be used to identify certain problematic behaviors in models that might not be apparent when we traditionally have access to only the prediction loss and error. We are now able to ask new questions and explore new areas. We hope that the notions presented in this paper can contribute to the understanding of deep neural networks and guide the intuition of researchers who design them. # Acknowledgments Yoshua Bengio is a senior CIFAR Fellow. The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, FQRNT, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Thanks to Nicolas Ballas for fruitful discussions, to Reyhane Askari and Mohammad Pezeshki for proofreading and comments, and to all the reviewers for their comments.
1610.01644#31
1610.01644#33
1610.01644
[ "1706.05806" ]
1610.01644#33
Understanding intermediate layers using linear classifier probes
# References Alain, G. and Bengio, Y. (2016). Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644. Arras, L., Montavon, G., Müller, K.-R., and Samek, W. (2017). Explaining recurrent neural network predictions in sentiment analysis. arXiv preprint arXiv:1706.07206. Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7), e0130140. Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., and Roli, F. (2013).
1610.01644#32
1610.01644#34
1610.01644
[ "1706.05806" ]
1610.01644#34
Understanding intermediate layers using linear classifier probes
Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 387–402. Springer. Binder, A., Montavon, G., Lapuschkin, S., Müller, K.-R., and Samek, W. (2016). Layer-wise relevance propagation for neural networks with local renormalization layers. In International Conference on Artificial Neural Networks, pages 63–71. Springer. Chollet, F. et al. (2015). Keras. https://github.com/fchollet/keras. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., and Darrell, T. (2014).
1610.01644#33
1610.01644#35
1610.01644
[ "1706.05806" ]
1610.01644#35
Understanding intermediate layers using linear classifier probes
Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, pages 647–655. Dosovitskiy, A. and Brox, T. (2016). Inverting visual representations with convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4829–4837. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680. He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–
1610.01644#34
1610.01644#36
1610.01644
[ "1706.05806" ]
1610.01644#36
Understanding intermediate layers using linear classifier probes
778. Jarrett, K., Kavukcuoglu, K., Lecun, Y., et al. (2009). What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision, pages 2146–2153. IEEE. Jastrzebski, S., Arpit, D., Ballas, N., Verma, V., Che, T., and Bengio, Y. (2017). Residual connections encourage iterative inference. arXiv preprint arXiv:1710.04773. Lapuschkin, S., Binder, A., Montavon, G., Müller, K.-R., and Samek, W. (2016). Analyzing classifi
1610.01644#35
1610.01644#37
1610.01644
[ "1706.05806" ]
1610.01644#37
Understanding intermediate layers using linear classifier probes
ers: Fisher vectors and deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2912–2920. Larsson, G., Maire, M., and Shakhnarovich, G. (2016). Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648. Mahendran, A. and Vedaldi, A. (2015). Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5188–
1610.01644#36
1610.01644#38
1610.01644
[ "1706.05806" ]
1610.01644#38
Understanding intermediate layers using linear classifier probes
5196. Mahendran, A. and Vedaldi, A. (2016). Visualizing deep convolutional neural networks using natural pre-images. International Journal of Computer Vision, 120(3), 233–255. Montavon, G., Braun, M. L., and Müller, K.-R. (2011). Kernel analysis of deep networks. Journal of Machine Learning Research, 12(Sep), 2563–2581. Raghu, M., Yosinski, J., and Sohl-Dickstein, J. (2017a). Bottom up or top down? Dynamics of deep representations via canonical correlation analysis. arXiv.
1610.01644#37
1610.01644#39
1610.01644
[ "1706.05806" ]
1610.01644#39
Understanding intermediate layers using linear classifier probes
Raghu, M., Gilmer, J., Yosinski, J., and Sohl-Dickstein, J. (2017b). SVCCA: Singular vector canonical correlation analysis for deep understanding and improvement. arXiv preprint arXiv:1706.05806. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3), 211–
1610.01644#38
1610.01644#40
1610.01644
[ "1706.05806" ]
1610.01644#40
Understanding intermediate layers using linear classifier probes
252. Singh, S., Hoiem, D., and Forsyth, D. (2016). Swapout: Learning an ensemble of deep architectures. In Advances in Neural Information Processing Systems, pages 28–36. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–
1610.01644#39
1610.01644#41
1610.01644
[ "1706.05806" ]
1610.01644#41
Understanding intermediate layers using linear classifier probes
9. Veit, A., Wilber, M. J., and Belongie, S. (2016). Residual networks behave like ensembles of relatively shallow networks. In Advances in Neural Information Processing Systems, pages 550–558. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057. Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. (2014).
1610.01644#40
1610.01644#42
1610.01644
[ "1706.05806" ]
1610.01644#42
Understanding intermediate layers using linear classifier probes
How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320–3328. Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer. Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530.
1610.01644#41
1610.01644#43
1610.01644
[ "1706.05806" ]
1610.01644#43
Understanding intermediate layers using linear classifier probes
# A Diode notation We have the following suggestion for extending traditional graphical models to describe where probes are being inserted in a model. See Figure 7. Due to the fact that probes do not contribute to backpropagation, but still consume the features during the feed-forward step, we thought that borrowing the diode symbol from electrical engineering might be a good idea. A diode is a one-way valve for electrical current. This notation could also be useful outside of the context of probes, whenever we want to sketch a graphical model and highlight the fact that the gradient backpropagation signal is being blocked.
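In code, the diode is simply a stop-gradient. Below is a short PyTorch illustration of this idea (our own snippet; TensorFlow's tf.stop_gradient plays the same role):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

h = torch.randn(16, 64, requires_grad=True)   # stand-in for an intermediate feature
probe = nn.Linear(64, 10)                     # the linear classifier probe
labels = torch.randint(0, 10, (16,))

# detach() is the "diode": the probe sees h in the forward pass,
# but the probe loss sends no gradient back into the main model.
probe_loss = F.cross_entropy(probe(h.detach()), labels)
probe_loss.backward()
print(h.grad)   # None: nothing flowed back through the diode
```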
1610.01644#42
1610.01644#44
1610.01644
[ "1706.05806" ]
1610.01644#44
Understanding intermediate layers using linear classifier probes
Figure 7: Probes being added to every layer of a model. These additional probes are not supposed to change the training of the model, so we add a little diode symbol through the arrows to indicate that the gradients will not backpropagate through those connections. # B Training probes with a finished model Sometimes we do not care about measuring the probe losses/accuracy during training, but we have a model that is already trained and we want to report the measurements on that static model. In that case, it is worth considering whether we really want to augment the model by adding the probes and training the probes by iterating through the training set. Sometimes the model itself is computationally expensive to run and we can only process 150 images per second. If we have to do multiple passes over the training set in order to train probes, then it might be more efficient to run through the whole training set once and extract the features to the local hard drive. Experimentally, in the case of the pre-trained ResNet-50 model (Section 4.1) we found that we could process approximately 100 training samples per second when doing forward propagation, but we could run through 6000 training samples per second when reading from the local hard drive. This makes it a lot easier to do multiple passes over the training set.
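A minimal sketch of this cache-then-train workflow (our own code, with hypothetical file names): run the expensive forward pass once, dump the features to disk, then make many cheap passes over the saved arrays to train the probe.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def cache_features(extract_fn, images, labels, path="features.npz", batch=64):
    """Run the expensive forward pass once and store the features on disk."""
    feats = np.concatenate(
        [extract_fn(images[i:i + batch]) for i in range(0, len(images), batch)])
    np.savez(path, feats=feats, labels=labels)

def train_probe_from_cache(path="features.npz", epochs=10):
    data = np.load(path)
    feats, labels = data["feats"], data["labels"]
    classes = np.unique(labels)
    probe = SGDClassifier(loss="log_loss")   # logistic regression trained by SGD
    for _ in range(epochs):                  # cheap passes over the cached features
        order = np.random.permutation(len(feats))
        probe.partial_fit(feats[order], labels[order], classes=classes)
    return probe
```

Note that older scikit-learn versions name the logistic loss "log" rather than "log_loss".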
1610.01644#43
1610.01644#45
1610.01644
[ "1706.05806" ]
1610.01644#45
Understanding intermediate layers using linear classifier probes
# C Inception v3 In Section 3.4 we showed results from an experiment using the Inception v3 model on the ImageNet dataset (Szegedy et al., 2015; Russakovsky et al., 2015). The results shown were taken from the last training step only. Here we provide in Figure 8 a sketch of the original Inception v3 model, and in Figure 9 we show results from 4 particular moments during training. These are spread over the 2 weeks of training so that we can get a sense of progression. Figure 8: Sketch of the Inception v3 model. Note the structure with the "auxiliary head" at the bottom, and the "inception modules" with a common topology represented as blocks that have 3 or 4 sub-branches. As discussed in Section 3.4, we had to resort to a technique to limit the number of features used by the linear classifier probes. In this particular experiment, we have had the most success by taking 1000 random features for each probe. This gives certain layers an unfair advantage if they start with 4000 features and we keep 1000, whereas in other cases the probe insertion point has 426,320 features and we keep 1000.
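A sketch of this subsampling step (our own code): fix a random subset of coordinates once per probe insertion point, and train the probe only on those coordinates.

```python
import numpy as np

def make_feature_subsampler(n_features, n_keep=1000, seed=0):
    """Choose a fixed random subset of coordinates for one probe location."""
    rng = np.random.RandomState(seed)
    idx = rng.choice(n_features, size=min(n_keep, n_features), replace=False)
    def subsample(feats):
        # feats: (batch, n_features) flattened activations at this probe point
        return feats[:, idx]
    return subsample

subsample = make_feature_subsampler(n_features=426_320, n_keep=1000)
print(subsample(np.random.randn(8, 426_320)).shape)   # (8, 1000)
```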
1610.01644#44
1610.01644#46
1610.01644
[ "1706.05806" ]
1610.01644#46
Understanding intermediate layers using linear classifier probes
There was no simple "fair" solution. That being said, 13 out of the 17 probes have more than 100,000 features, and 11 of those probes have more than 200,000 features, so things were relatively comparable. [Figure 9 panels omitted: probe training prediction error across the Inception v3 graph at four moments of training, including minibatches 050389, 100876 and 308230, with the main head and the auxiliary head labelled.] Figure 9: Inserting probes at multiple moments during training of the Inception v3 model on the ImageNet dataset. We represent here the prediction error evaluated on a random subset of 1000 features.
1610.01644#45
1610.01644#47
1610.01644
[ "1706.05806" ]
1610.01644#47
Understanding intermediate layers using linear classifier probes
As expected, at first all the probes have a 100% prediction error, but as training progresses we see that the model is getting better. Note that there are 1000 classes, so a prediction error of 50% is much better than a random guess. The auxiliary head, shown under the model, was observed to have a prediction error that was slightly better than the main head. This is not necessarily a condition that will hold at the end of training, but merely an observation. Red is bad (high prediction error) and green/blue is good (low prediction error).
1610.01644#46
1610.01644#48
1610.01644
[ "1706.05806" ]
1610.01644#48
Understanding intermediate layers using linear classifier probes
1610.01644#47
1610.01644
[ "1706.05806" ]
1609.09106#0
HyperNetworks
# HYPERNETWORKS David Ha, Andrew Dai, Quoc V. Le Google Brain {hadavid, adai, qvl1}@google.com # ABSTRACT This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype (the hypernetwork) and a phenotype (the main network). Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as a relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.
1609.09106#1
1609.09106
[ "1603.09025" ]
1609.09106#1
HyperNetworks
# 1 INTRODUCTION In this work, we consider an approach of using a small network (called a "hypernetwork") to generate the weights for a larger network (called a main network). The behavior of the main network is the same as that of any usual neural network: it learns to map some raw inputs to their desired targets; whereas the hypernetwork takes a set of inputs that contain information about the structure of the weights and generates the weights for that layer (see Figure 1). [Figure 1 diagram omitted.] Figure 1: A hypernetwork generates the weights for a feedforward network. Black connections and parameters are associated with the main network whereas orange connections and parameters are associated with the hypernetwork. HyperNEAT (Stanley et al., 2009) is an example of hypernetworks where the inputs are a set of virtual coordinates for each weight in the main network. In this work, we will focus on a more powerful approach where the input is an embedding vector that describes the entire weights of a given layer. Our embedding vectors can be fixed parameters that are also learned during end-to-end training, allowing approximate weight-sharing within a layer and across layers of the main network.
1609.09106#0
1609.09106#2
1609.09106
[ "1603.09025" ]
1609.09106#2
HyperNetworks
(*Work done as a member of the Google Brain Residency program (g.co/brainresidency).) In addition, our embedding vectors can also be generated dynamically by our hypernetwork, allowing the weights of a recurrent network to change over timesteps and also adapt to the input sequence. We perform experiments to investigate the behaviors of hypernetworks in a range of contexts and find that hypernetworks mix well with other techniques such as batch normalization and layer normalization. Our main result is that hypernetworks can generate non-shared weights for LSTM that work better than the standard version of LSTM (Hochreiter & Schmidhuber, 1997). On language modelling tasks with the Character Penn Treebank and Hutter Prize Wikipedia datasets, hypernetworks for LSTM achieve near state-of-the-art results. On a handwriting generation task with the IAM handwriting dataset, hypernetworks for LSTM achieve high quantitative and qualitative results. On image classification with CIFAR-10, hypernetworks, when being used to generate weights for a deep convnet (LeCun et al., 1990), obtain respectable results compared to state-of-the-art models while having fewer learnable parameters. In addition to simple tasks, we show that hypernetworks for LSTM offer an increase in performance for large, production-level neural machine translation models.
1609.09106#1
1609.09106#3
1609.09106
[ "1603.09025" ]
1609.09106#3
HyperNetworks
# 2 MOTIVATION AND RELATED WORK Our approach is inspired by methods in evolutionary computing, where it is difficult to directly operate in large search spaces consisting of millions of weight parameters. A more efficient method is to evolve a smaller network to generate the structure of weights for a larger network, so that the search is constrained within the much smaller weight space. An instance of this approach is the work on the HyperNEAT framework (Stanley et al., 2009). In the HyperNEAT framework, Compositional Pattern-Producing Networks (CPPNs) are evolved to define the weight structure of a much larger main network. Closely related to our approach is a simplified variation of HyperNEAT, where the structure is fixed and the weights are evolved through the Discrete Cosine Transform (DCT); this is called Compressed Weight Search (Koutnik et al., 2010). Even more closely related to our approach are Differentiable Pattern Producing Networks (DPPNs), where the structure is evolved but the weights are learned (Fernando et al., 2016), and ACDC-Networks (Moczulski et al., 2015), where linear layers are compressed with DCT and the parameters are learned. Most reported results using these methods, however, are at small scales, perhaps because they are both slow to train and require heuristics to be efficient. The main difference between our approach and HyperNEAT is that hypernetworks in our approach are trained end-to-end with gradient descent together with the main network, and therefore are more efficient. In addition to end-to-end learning with gradient descent, our approach strikes a good balance between Compressed Weight Search and HyperNEAT in terms of model flexibility and training simplicity. First, it can be argued that the Discrete Cosine Transform used in Compressed Weight Search may be too simple, and using the DCT prior may not be suitable for many problems. Second, even though HyperNEAT is more flexible, evolving both the architecture and the weights in HyperNEAT is often an overkill for most practical problems. Even before the work on HyperNEAT and DCT, Schmidhuber (1992; 1993) had suggested the concept of fast weights, in which one network can produce context-dependent weight changes for a second network.
1609.09106#2
1609.09106#4
1609.09106
[ "1603.09025" ]
1609.09106#4
HyperNetworks
Small-scale experiments were conducted to demonstrate fast weights for feedforward networks at the time, but perhaps due to the lack of modern computational tools, the recurrent network version was mentioned mainly as a thought experiment (Schmidhuber, 1993). A subsequent work demonstrated practical applications of fast weights (Gomez & Schmidhuber, 2005), where a generator network is learnt through evolution to solve an artificial control problem. The concept of a network interacting with another network is central to the work of (Jaderberg et al., 2016; Andrychowicz et al., 2016), and especially (Denil et al., 2013; Yang et al., 2015; Bertinetto et al., 2016; De Brabandere et al., 2016), where certain parameters in a convolutional network are predicted by another network. These studies, however, did not explore the use of this approach for recurrent networks, which is a main contribution of our work. The focus of this work is to generate weights for practical architectures, such as convolutional networks and recurrent networks, by taking layer embedding vectors as inputs. However, our hypernetworks can also be utilized to generate weights for a fully connected network by taking coordinate information as inputs, similar to DPPNs. Using this setting, hypernetworks can approximately recover the convolutional architecture without explicitly being told to do so, a result similar to that obtained by "Convolution by Evolution" (Fernando et al., 2016). This result is described in Appendix A.1. # 3 METHODS
1609.09106#3
1609.09106#5
1609.09106
[ "1603.09025" ]
1609.09106#5
HyperNetworks
In this paper, we view convolutional networks and recurrent networks as two ends of a spectrum. On one end, recurrent networks can be seen as imposing weight-sharing across layers, which makes them inflexible and difficult to learn due to vanishing gradients. On the other end, convolutional networks enjoy the flexibility of not having weight-sharing, at the expense of having redundant parameters when the networks are deep. Hypernetworks can be seen as a form of relaxed weight-sharing, and therefore strike a balance between the two ends. See Appendix A.2 for conceptual diagrams of Static and Dynamic Hypernetworks. 3.1 STATIC HYPERNETWORK: A WEIGHT FACTORIZATION APPROACH FOR DEEP CONVOLUTIONAL NETWORKS First we will describe how we construct a hypernetwork for the purpose of generating the weights of a feedforward convolutional network. In a typical deep convolutional network, the majority of model parameters are in the kernels of convolutional layers. Each kernel contains $N_{in} \times N_{out}$ filters and each filter has dimensions $f_{size} \times f_{size}$. Let's suppose that these parameters are stored in a matrix $K^j \in \mathbb{R}^{N_{in} f_{size} \times N_{out} f_{size}}$ for each layer $j = 1, \dots, D$, where $D$ is the depth of the main convolutional network. For each layer $j$, the hypernetwork receives a layer embedding $z^j \in \mathbb{R}^{N_z}$ as input and predicts $K^j$, which can be generally written as follows: $$K^j = g(z^j), \quad \forall j = 1, \dots, D \qquad (1)$$
1609.09106#4
1609.09106#6
1609.09106
[ "1603.09025" ]
1609.09106#6
HyperNetworks
Letâ s suppose that these parameters are stored in a matrix KJ ¢ RNinfsizexNourSsize for each layer 7 = 1,..,D, where D is the depth of the main convolutional network. For each layer j, the hypernetwork receives a layer embedding z/ â ¬ RY* as input and predicts Aâ , which can be generally written as follows: Ki =g(zâ ), Vj=1,..,D (dy We note that this matrix Aâ can be broken down as N;,, slices of a smaller matrix with dimensions fsize X Nout fsize, each Slice of the kernel is denoted as kK} â ¬ RfsizexNoutfsize Therefore, in our ap- proach, the hypernetwork is a two-layer linear network. The first layer of the hypernetwork takes the input vector z/ and linearly projects it into the N;,, inputs, with N;,, different matrices W; â ¬ RIXNz and bias vectors B; â ¬ IR¢, where d is the size of the hidden layer in the hypernetwork. For our pur- pose, we fix d to be equal to N, although they can be different. The final layer of the hypernetwork is a linear operation which takes an input vector a; of size d and linearly projects that into A; using acommon tensor Woy, â ¬ Rfsize*NoutSsizeX@ and bias matrix Bou, â ¬ Rfeize*Nouefsize, The final kernel K will be a concatenation of every K?. Thus g(z/) can be written as follows: a} = W;2z) + B;, Vi =1,.., Nin, Vj = 1,...,D K} = (Wow,a}) | + Bout, Vi=1,..,Nin, Vj =1,..,D (2) Ki=(K] Kho. K} . Kk,,), Vj =1,.,D In our formulation, the learnable parameters are W;, Bj, Wout, Bout together with all zâ â s.
1609.09106#5
1609.09106#7
1609.09106
[ "1603.09025" ]
1609.09106#7
HyperNetworks
During inference, the model simply takes the layer embeddings $z^j$ learned during training to reproduce the kernel weights for layer $j$ in the main convolutional network. As a side effect, the number of learnable parameters in the hypernetwork will be much lower than in the main convolutional network. In fact, the total number of learnable parameters in the hypernetwork is $N_z \times D + d \times (N_z + 1) \times N_{in} + f_{size} \times N_{out} \times f_{size} \times (d + 1)$, compared to the $D \times N_{in} \times f_{size} \times N_{out} \times f_{size}$ parameters for the kernels of the main convolutional network. Our approach of constructing $g(\cdot)$ is similar to the hierarchically semiseparable matrix approach proposed by Xia et al. (2010). Note that even though it seems redundant to have a two-layered linear hypernetwork, as that is equivalent to a one-layered hypernetwork, the fact that $W_{out}$ and $B_{out}$ are shared makes our two-layered hypernetwork more compact than a one-layered hypernetwork. More concretely, a one-layered hypernetwork would have $N_z \times N_{in} \times f_{size} \times N_{out} \times f_{size}$ learnable parameters, which is usually much bigger than for a two-layered hypernetwork.
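The parameter-count comparison is easy to check numerically. The helper below (our own, for a hypothetical 36-layer stack of 64-channel, 3x3 kernels with N_z = d = 64) just evaluates the two expressions from the text.

```python
def hypernet_param_count(n_z, d, n_in, n_out, f_size, depth):
    # embeddings + per-slice input projections + shared output projection
    return (n_z * depth
            + d * (n_z + 1) * n_in
            + f_size * n_out * f_size * (d + 1))

def conv_param_count(n_in, n_out, f_size, depth):
    return depth * n_in * f_size * n_out * f_size

args = dict(n_in=64, n_out=64, f_size=3, depth=36)
print(hypernet_param_count(n_z=64, d=64, **args))   # 305984
print(conv_param_count(**args))                     # 1327104
```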
1609.09106#6
1609.09106#8
1609.09106
[ "1603.09025" ]
1609.09106#8
HyperNetworks
(Footnote: tensor dot product between $W \in \mathbb{R}^{m \times n \times d}$ and $a \in \mathbb{R}^{d}$; the result is $\langle W, a \rangle \in \mathbb{R}^{m \times n}$.) The above formulation assumes that the network architecture consists of kernels with the same dimensions. In practice, deep convolutional network architectures consist of kernels of varying dimensions. Typically, in many designs, the kernel dimensions are integer multiples of a basic size. This is indeed the case for the residual network family of architectures (He et al., 2016a) that we will be experimenting with later, which is an example of such a design. In our experiments, although the kernels of a residual network do not share the same dimensions, the $N_{in}$ and $N_{out}$ dimensions for each kernel are integer multiples of 16. To modify our approach to work with this architecture, we have our hypernetwork generate kernels for this basic size of 16, and if we require a larger kernel for a certain layer, we will concatenate multiple basic kernels together to form the larger kernel. $$K_{32 \times 64} = \begin{pmatrix} K_1 & K_2 & K_3 & K_4 \\ K_5 & K_6 & K_7 & K_8 \end{pmatrix} \qquad (3)$$ For example, if we need to generate a kernel with $N_{in} = 32$ and $N_{out} = 64$, we will tile eight basic kernels together. Each basic kernel is generated by a unique $z$ embedding, hence the larger kernel will be expressed with eight embeddings. Therefore, kernels that are larger in size will require a proportionally larger number of embedding vectors. For visualizations of concatenated kernels, please see Appendix A.2.1. Figure 2 shows the similarity between kernels learned by a ConvNet to classify MNIST digits and those learned by a hypernetwork generating weights for a ConvNet. Figure 2: Kernels learned by a ConvNet to classify MNIST digits (left). Kernels learned by a hypernetwork generating weights for the ConvNet (right). 3.2 DYNAMIC HYPERNETWORK: ADAPTIVE WEIGHT GENERATION FOR RECURRENT NETWORKS In the previous section, we outlined a procedure for using a hypernetwork to generate the weights for a deep convolutional network. In this section, we will use a recurrent network to dynamically generate weights for another recurrent network, such that the weights can vary across many timesteps.
1609.09106#7
1609.09106#9
1609.09106
[ "1603.09025" ]
1609.09106#9
HyperNetworks
In this context, hypernetworks are called dynamic hypernetworks, and can be seen as a form of relaxed weight-sharing, a compromise between the hard weight-sharing of traditional recurrent networks and the absence of weight-sharing in convolutional networks. This relaxed weight-sharing approach allows us to control the trade-off between the number of model parameters and model expressiveness. Our dynamic hypernetworks can be used to generate weights for an RNN or LSTM. When a hypernetwork is used to generate the weights for an RNN, it is called a HyperRNN. At every time step $t$, a HyperRNN takes as input the concatenated vector of the input $x_t$ and the hidden state of the main RNN $h_{t-1}$, and it then generates as output the vector $\hat{h}_t$. This vector is then used to generate the weights for the main RNN at the same timestep. Both the HyperRNN and the main RNN are trained jointly with backpropagation and gradient descent. In the following, we will give a more formal description of the model. The standard formulation of a Basic RNN is given by: $$h_t = \phi(W_h h_{t-1} + W_x x_t + b) \qquad (4)$$ where $h_t$ is the hidden state, $\phi$ is a non-linear operation such as $\tanh$ or relu, and the weight matrices and bias $W_h \in \mathbb{R}^{N_h \times N_h}$, $W_x \in \mathbb{R}^{N_h \times N_x}$, $b \in \mathbb{R}^{N_h}$ are fixed at each timestep for an input sequence $X = (x_1, x_2, \dots, x_T)$.
1609.09106#8
1609.09106#10
1609.09106
[ "1603.09025" ]
1609.09106#10
HyperNetworks
¬ RNaxNeXNe Wy, â ¬ RNaXN2 by © RNo and 2p, Zn, 22 â ¬ RY. We use a recurrent hypernetwork to compute z),, z,, and z, as a function of x; and hy_,: 5 (he a= ( 2) hy = b(Wyhe_1 + Wet + b) zn = Wj, lu-1 +b tn = Wy, hi-1 +6 ha 2 = Wayht-1 (6) huh he Where Wi, â ¬ RNA*Na, We â ¬ RNAX(Nn+N2) b © RNA, and Wj,,,Wi,,Wiy, â ¬ RN?*N* and ban» Ong â ¬ R=. This HyperRNN Cell has Nj, hidden units. Typically Nj, is much smaller than Nj. hh?
1609.09106#9
1609.09106#11
1609.09106
[ "1603.09025" ]
1609.09106#11
HyperNetworks
Pha As the embeddings z;,, z,, and z, are of dimensions N,, which is typically smaller than the hidden state size Nj; of the HyperRNN cell, a linear network is used to project the output of the HyperRNN cell into the embeddings in Equation 6. After the embeddings are computed, they will be used to generate the full weight matrix of the main RNN. The above is a general formulation of a /inear dynamic hypernetwork applied to RNNs. However, we found that in practice, Equation 5 is often not practical because the memory usage becomes too large for real problems. The amount of memory required in the system described in Equation 5 will be N, times the memory of a Basic RNN, which limits the number of hidden units we can use in many practical applications. We can modify the dynamic hypernetwork system described in Equation 5 so that it can be much more scalable and memory efficient. Our approach borrows from the static hypernetwork section and we will use an intermediate hidden vector d(z) â ¬ R%* to parametrize a weight matrix, where d(z) will be a linear projection of z. To dynamically modify a weight matrix W, we will allow each
1609.09106#10
1609.09106#12
1609.09106
[ "1603.09025" ]
1609.09106#12
HyperNetworks
(5) row of this weight matrix to be scaled linearly by an element in vector d. We refer d as a weight scaling vector. Below is the modification to W (z): do(z) Wo Wie)=w(a)) =| BOM (7) dy, (2)Wn, While we sacrifice the ability to construct an entire weight matrix from a linear combination of N, matrices of the same size, we are able to linearly scale the rows of a single matrix with N, degrees of freedom. We find this to be a good trade off, as this formulation of converting W(z) into W (d(z)) decreases the amount of memory required by the dynamic hypernetwork. Rather than requiring Nz times the memory of a Basic RNN, we will only be using memory in the order NV, times the number of hidden units, which is an acceptable amount of extra memory usage that is often available in many applications. In addition, the row-level operation in Equation 7 can be shown to be equivalent to an element-wise multiplication operator and hence computationally much more efficient in practice. Below is the more memory efficient version of the setup of Equation 5: hy = (dn (Zn) © Wrhe-1 + de(Zx) © Weve + b(z0)), where dn(2n) = Whz2h dy (22) = Waz%x b(zp) = Woz2n + bo (8) This formulation of the HyperRNN has some similarities to Recurrent Batch Normalization (Cooij- mans et al., 2016) and Layer Normalization (Ba et al., 2016). The central idea for the normalization techniques is to calculate the first two statistical moments of the inputs to the activation function, and to linearly scale the inputs to have zero mean and unit variance. An additional set of fixed parameters are learned to unscale the activations if required. This element-wise operation also has similarities to the Multiplicative RNN (Sutskever et al., 2011) and Multiplicative Integration RNN (Wu et al., 2016) where it was demonstrated that the multiplication-operation encouraged better gradient flow. Since the HyperRNN cell can indirectly modify the rows of each weight matrix and also the bias of the main RNN, it is implicitly also performing a linear scaling to the inputs of the activation function.
1609.09106#11
1609.09106#13
1609.09106
[ "1603.09025" ]
1609.09106#13
HyperNetworks
The difference here is that the linear scaling parameters can be different for each timestep and also for for each input sample. It will be interesting to compare the scaling policy that the HyperRNN cell comes up with, to the hand engineered statistical-moments based scaling approaches. In addition, we note that the existing normalization approaches can work together with the HyperRNN approach, where the HyperRNN cell will be tasked with discovering a better dynamical scaling policy to complement normalization. We will also explore this combination in our experiments. The Long Short-Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) is usually better than the Basic RNN at storing and retrieving information over longer time steps. In our ex- periments, we will focus on this LSTM version of the HyperRNN, called the HyperLSTM. The details of the HyperLSTM architecture is described in Appendix A.2.2, along with specific imple- mentation details in Appendix A.2.3. We want to know whether the HyperLSTM cell can learn a weight adjustment policy that can rival statistical moments-based normalization methods, hence Layer Normalization will be one of our baseline methods. We will therefore conduct experiments on two versions of HyperLSTM, one with and one without the application of Layer Normalization. # 4 EXPERIMENTS In the following experiments, we will benchmark the performance of static hypernetworks on im- age recognition with MNIST and CIFAR-10, and the performance of dynamic hypernetworks on language modelling with Penn Treebank and Hutter Prize Wikipedia (enwik8) datasets and hand- writing generation. 4.1 USING STATIC HYPERNETWORKS TO GENERATE FILTERS FOR CONVOLUTIONAL NETWORKS AND MNIST We start by applying a hypernetwork to generate the filters for a convolutional network on MNIST. Our main convolutional network is a small two layer network and the hypernetwork is used to gener- ate the kernel for the second layer (7x7x 16x16), which contains the bulk of the trainable parameters in the system. Our weight matrix will be summarized by an embedding of size N, = 4. See Appendix A.3.1 for further experimental setup details. For this task, the hypernetwork achieved a test accuracy of 99.24%, comparable to the 99.28% for the conventional method.
1609.09106#12
1609.09106#14
1609.09106
[ "1603.09025" ]
1609.09106#14
HyperNetworks
In this example, a kernel consisting of 12,544 weights is represented by an embedding vector of only 4 parameters, generated by a hypernetwork that has 4240 parameters. We can see the weight matrix this network produced by the hypernetwork in Figure 2. Now the question is whether we can also train a deep convolutional network, using a single hypernetwork generating a set of weights for each layer, on a dataset more challenging than MNIST. 4.2 STATIC HYPERNETWORKS FOR RESIDUAL NETWORK ARCHITECTURE AND CIFAR-10 The residual network architectures (He et al., 2016a; Zagoruyko & Komodakis, 2016) are popular for image recognition tasks, as they can accommodate very deep networks while maintaining effective gradient flow across layers using skip connections. The original resnet and subsequent derivatives (Zhang et al., 2016; Huang et al., 2016a) achieved state-of-the-art image recognition performance on a variety of public datasets. While residual networks can be be very deep, and in some experi- ments as deep as 1001 layers ((He et al., 2016b), it is important to understand whether some these layers share common properties and can be reduced effectively by introducing weight sharing. If we enforce weight-sharing across many layers of a deep feed forward network, the network may share many properties to that of a recurrent network. In this experiment, we want to explore this idea of enforcing relaxed weight sharing across all of the layers of a deep residual network. We will take a simple version of residual network, use a single hypernetwork to generate the weights of all of its layers for image classification task on the CIFAR-10 dataset. group name | output size block type conv 1 32 x 32 [3x3, 16] 3x3, 16xk conv2 32x32 3x3, 16xk N 3x3, 32xk conv3 16x16 3x3, 32xk N 3x3, 64xk conv4 8x8 3x3, 64xk Jos avg-pool 1x1 [8 x 8] Table 1: Structure of Wide Residual Networks in Zagoruyko & Komodakis (2016). N determines the number of residual blocks in each group.
1609.09106#13
1609.09106#15
1609.09106
[ "1603.09025" ]
1609.09106#15
HyperNetworks
Network width is determined by factor k. Our experiment will use a version of the wide residual network (Zagoruyko & Komodakis, 2016), described in Table 1, a popular and simple variant of the family of residual network architectures, and we will focus configurations (NV = 6, = 1) and(N = 6, K = 2), referred to as WRN 40-1 and WRN 40-2 respectively. In this setup, we will use a hypernetwork to generate all of the kernels in conv2, conv3, and conv4, so we will generate 36 layers of kernels in total. The WRN architecture uses a filter size of 3 for every kernel. We use the method outlined in the Methods section to deal with kernels of varying sizes, and use the an embedding size of N, = 64 in our experiments. See Appendix A.3.2 for further experimental setup details. We obtained similar classification accuracy numbers as reported in (Zagoruyko & Komodakis, 2016) with our own implementation. We also note that the weights generated by the hypernetwork are used in a batch normalization setting without modification to the original model. In principle, hypernet- works can also be applied to the newer variants of residual networks with more skip connections, such as DenseNets and ResNets of Resnets. From the results, we see that enforcing a relaxed weight sharing constraint to the deep residual network cost us ~ 1.25-1.5% in classification accuracy, while drastically reducing the number of
1609.09106#14
1609.09106#16
1609.09106
[ "1603.09025" ]
1609.09106#16
HyperNetworks
Model Test Error Param Count Network in Network (Lin et al., 2014) 8.81% FitNet (Romero et al., 2014) 8.39% Deeply Supervised Nets (Lee et al., 2015) 8.22% Highway Networks (Srivastava et al., 2015) 7.12% ELU (Clevert et al., 2015) 6.55% Original Resnet-110 (He et al., 2016a) 6.43% 17M Stochastic Depth Resnet-110 (Huang et al., 2016b) 5.23% 17M Wide Residual Network 40-1 (Zagoruyko & Komodakis, 2016) 6.85% 0.6M Wide Residual Network 40-2 (Zagoruyko & Komodakis, 2016) 5.33% 2.2M Wide Residual Network 28-10 (Zagoruyko & Komodakis, 2016) 4.17% 36.5 M ResNet of ResNet 58-4 (Zhang et al., 2016) 3.77% 13.3M DenseNet (Huang et al., 2016a) 3.74% 27.2M Wide Residual Network 40-1? 6.73% 0.563 M Hyper Residual Network 40-1 (ours) 8.02% 0.097 M Wide Residual Network 40-2? 5.66% 2.236 M Hyper Residual Network 40-2 (ours) 7.23% 0.148 M Table 2: CIFAR-10 Classification with hypernetwork generated weights. parameters in the model as a trade off. One reason for this reduction in accuracy is because different layers of a deep network is trained to extract different levels of features, and require different kinds of filters to perform optimally. The hypernetwork enforces some commonality between every layer, but offers each layer 64 degrees of freedom to distinguish itself from the other layers.
1609.09106#15
1609.09106#17
1609.09106
[ "1603.09025" ]