doi (string, 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, nullable) | journal_ref (string, 8–194, nullable) | primary_category (string, 5–17) | published (string, 8) | updated (string, 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1701.05517 | 2 | # 1 INTRODUCTION
The PixelCNN, introduced by van den Oord et al. (2016b), is a generative model of images with a tractable likelihood. The model fully factorizes the probability density function on an image x over all its sub-pixels (color channels in a pixel) as $p(\mathbf{x}) = \prod_i p(x_i \mid x_{<i})$. The conditional distributions $p(x_i \mid x_{<i})$ are parameterized by convolutional neural networks and all share parameters. The PixelCNN is a powerful model as the functional form of these conditionals is very flexible. In addition, it is computationally efficient, as all conditionals can be evaluated in parallel on a GPU for an observed image x. Thanks to these properties, the PixelCNN represents the current state-of-the-art in generative modeling when evaluated in terms of log-likelihood. Besides being used for modeling images, the PixelCNN model was recently extended to model audio (van den Oord et al., 2016a), video (Kalchbrenner et al., 2016b) and text (Kalchbrenner et al., 2016a).
| 1701.05517#2 | PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications | PixelCNNs are a recently proposed class of powerful generative models with
tractable likelihood. Here we discuss our implementation of PixelCNNs which we
make available at https://github.com/openai/pixel-cnn. Our implementation
contains a number of modifications to the original model that both simplify its
structure and improve its performance. 1) We use a discretized logistic mixture
likelihood on the pixels, rather than a 256-way softmax, which we find to speed
up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels,
simplifying the model structure. 3) We use downsampling to efficiently capture
structure at multiple resolutions. 4) We introduce additional short-cut
connections to further speed up optimization. 5) We regularize the model using
dropout. Finally, we present state-of-the-art log likelihood results on
CIFAR-10 to demonstrate the usefulness of these modifications. | http://arxiv.org/pdf/1701.05517 | Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma | cs.LG, stat.ML | null | null | cs.LG | 20170119 | 20170119 | [] |
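To make the factorization in the introduction concrete, here is a minimal sketch (our own illustration, not the released implementation) of how a tractable autoregressive likelihood is evaluated and sampled; `model` is a hypothetical callable that returns, for every position, a 256-way conditional distribution that respects the raster-scan ordering.

```python
# A minimal sketch (not the released implementation) of the autoregressive
# factorization p(x) = prod_i p(x_i | x_{<i}).  `model` is a hypothetical callable
# returning, for every position, a 256-way conditional that only uses pixels
# above and to the left (the raster-scan ordering).
import numpy as np

def log_likelihood(model, x):
    """x: (H, W) array of observed integer values; model(x) -> (H, W, 256) probabilities."""
    probs = model(x)                      # all conditionals in one parallel forward pass
    h, w = x.shape
    return float(sum(np.log(probs[i, j, x[i, j]] + 1e-12)
                     for i in range(h) for j in range(w)))

def sample(model, h, w, rng=np.random.default_rng(0)):
    x = np.zeros((h, w), dtype=np.int64)
    for i in range(h):                    # sampling, in contrast, is inherently sequential
        for j in range(w):
            p = model(x)[i, j]            # conditional given the pixels generated so far
            x[i, j] = rng.choice(256, p=p)
    return x
```

Training-time likelihood evaluation needs only one call to `model`, while sampling requires one call per pixel; this asymmetry is what makes training efficient but generation slow.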
1701.05517 | 3 |
For use in our research, we developed our own internal implementation of PixelCNN and made a number of modifications to the base model to simplify its structure and improve its performance. We now release our implementation at https://github.com/openai/pixel-cnn, hoping that it will be useful to the broader community. Our modifications are discussed in Section 2, and evaluated experimentally in Section 3. State-of-the-art log-likelihood results confirm their usefulness.
# 2 MODIFICATIONS TO PIXELCNN
We now describe the most important modifications we have made to the PixelCNN model architecture as described by van den Oord et al. (2016c). For complete details see our code release at https://github.com/openai/pixel-cnn.
2.1 DISCRETIZED LOGISTIC MIXTURE LIKELIHOOD
The standard PixelCNN model specifies the conditional distribution of a sub-pixel, or color channel of a pixel, as a full 256-way softmax. This gives the model a lot of flexibility, but it is also very costly in terms of memory. Moreover, it can make the gradients with respect to the network parameters
1701.05517 | 4 | very sparse, especially early in training. With the standard parameterization, the model does not know that a value of 128 is close to a value of 127 or 129, and this relationship first has to be learned before the model can move on to higher level structures. In the extreme case where a particular sub-pixel value is never observed, the model will learn to assign it zero probability. This would be especially problematic for data with higher accuracy on the observed pixels than the usual 8 bits: in the extreme case where very high precision values are observed, the PixelCNN, in its current form, would require a prohibitive amount of memory and computation, while learning very slowly. We therefore propose a different mechanism for computing the conditional probability of the observed discretized pixel values. In our model, like in the VAE of Kingma et al. (2016), we assume there is a latent color intensity v with a continuous distribution, which is then rounded to its nearest 8-bit representation to give the observed sub-pixel value x. By choosing a simple
1701.05517 | 5 | continuous distribution for modeling v (like the logistic distribution as done by Kingma et al. (2016)) we obtain a smooth and memory efficient predictive distribution for x. Here, we take this continuous univariate distribution to be a mixture of logistic distributions, which allows us to easily calculate the probability on the observed discretized value x, as shown in equation (2). For all sub-pixel values x excepting the edge cases 0 and 255 we have:

$$\nu \sim \sum_{i=1}^{K} \pi_i \,\mathrm{logistic}(\mu_i, s_i) \quad (1)$$

$$P(x \mid \pi, \mu, s) = \sum_{i=1}^{K} \pi_i \left[\sigma\big((x + 0.5 - \mu_i)/s_i\big) - \sigma\big((x - 0.5 - \mu_i)/s_i\big)\right], \quad (2)$$

where $\sigma(\cdot)$ is the logistic sigmoid function. For the edge case of 0, replace $x - 0.5$ by $-\infty$, and for 255 replace $x + 0.5$ by $+\infty$. Our provided code contains a numerically stable implementation for calculating the log of the probability in equation (2).
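The following is a small NumPy sketch of equation (2) — not the numerically stable released implementation — showing how the mixture CDF differences and the edge cases at 0 and 255 produce a proper distribution over the 256 discrete values; all parameter values below are made up for illustration.

```python
# A NumPy sketch of equation (2) (not the numerically stable released code):
# the probability of an observed 8-bit value is a difference of logistic CDFs,
# with the edge cases 0 and 255 absorbing the remaining tail mass.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_mixture_prob(x, pi, mu, s):
    """x: integer in [0, 255]; pi, mu, s: length-K mixture weights, means, scales."""
    cdf_plus = np.ones_like(mu) if x == 255 else sigmoid((x + 0.5 - mu) / s)
    cdf_minus = np.zeros_like(mu) if x == 0 else sigmoid((x - 0.5 - mu) / s)
    return float(np.sum(pi * (cdf_plus - cdf_minus)))

# Illustrative parameters only: a 5-component mixture, as suggested in the text.
pi = np.full(5, 0.2)
mu = np.array([30.0, 80.0, 128.0, 180.0, 240.0])
s = np.full(5, 10.0)
print(sum(discretized_logistic_mixture_prob(v, pi, mu, s) for v in range(256)))  # ~1.0
```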
1701.05517 | 6 | Our approach follows earlier work using continuous mixture models (Domke et al., 2008; Theis et al., 2012; Uria et al., 2013; Theis & Bethge, 2015), but avoids allocating probability mass to values outside the valid range of [0,255] by explicitly modeling the rounding of v to x. In addition, we naturally assign higher probability to the edge values 0 and 255 than to their neighboring values, which corresponds well with the observed data distribution as shown in Figure 1. Experimentally, we find that only a relatively small number of mixture components, say 5, is needed to accurately model the conditional distributions of the pixels. The output of our network is thus of much lower dimension, yielding much denser gradients of the loss with respect to our parameters. In our experiments this greatly sped up convergence during optimization, especially early on in training. However, due to the other changes in our architecture compared to that of van den Oord et al. (2016c) we cannot say with certainty that this would also apply to the original PixelCNN model.
Figure 1: Marginal distribution of all sub-pixel values in CIFAR-10.
1701.05517 | 9 | The pixels in a color image consist of three real numbers, giving the intensities of the red, blue and green colors. The original PixelCNN factorizes the generative model over these 3 sub-pixels. This allows for very general dependency structure, but it also complicates the model: besides keeping track of the spatial location of feature maps, we now have to separate out all feature maps in 3 groups depending on whether or not they can see the R/G/B sub-pixel of the current location. This added complexity seems to be unnecessary as the dependencies between the color channels of a pixel are likely to be relatively simple and do not require a deep network to model. Therefore, we instead condition only on whole pixels up and to the left in an image, and output joint predictive distributions over all 3 channels of a predicted pixel. The predictive distribution on a pixel itself can be interpreted as a simple factorized model: We first predict the red channel using a discretized mixture of logistics as described in section 2.1. Next, we predict the green channel using a predictive distribution of the same
1701.05517 | 10 | form. Here we allow the means of the mixture components to linearly depend on the value of the red sub-pixel. Finally, we model the blue channel in the same way, where we again only allow linear dependency on the red and green channels. For the pixel $(r_{i,j}, g_{i,j}, b_{i,j})$ at location $(i,j)$ in our image, the distribution conditional on the context $C_{i,j}$, consisting of the mixture indicator and the previous pixels, is thus
1701.05517 | 12 | with $\alpha, \beta, \gamma$ scalar coefficients depending on the mixture component and previous pixels.
The mixture indicator is shared across all 3 channels; i.e. our generative model first samples a mixture indicator for a pixel, and then samples the color channels one-by-one from the corresponding mixture component. Had we used a discretized mixture of univariate Gaussians for the sub-pixels, instead of logistics, this would have been exactly equivalent to predicting the complete pixel using a (discretized) mixture of 3-dimensional Gaussians with full covariance. The logistic and Gaussian distributions are very similar, so this is indeed very close to what we end up doing. For full implementation details we refer to our code at https://github.com/openai/pixel-cnn.
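As a sketch of the sampling procedure just described (parameter names are ours, and the network outputs that would provide them are omitted), one pixel is drawn by first picking the shared mixture indicator and then sampling red, green and blue from logistics whose means are shifted linearly by the previously drawn channels:

```python
# A sketch of sampling one pixel under the whole-pixel predictive distribution:
# a shared mixture indicator, then red, green and blue drawn from logistics whose
# means are shifted linearly by the channels sampled before them.  All parameter
# arrays here are hypothetical stand-ins for network outputs.
import numpy as np

def sample_logistic(mu, s, rng):
    u = rng.uniform(1e-5, 1.0 - 1e-5)
    return mu + s * (np.log(u) - np.log(1.0 - u))    # inverse CDF of the logistic

def sample_pixel(pi, mu, scales, alpha, beta, gamma, rng=np.random.default_rng(0)):
    """pi: (K,) weights; mu, scales: (K, 3) per-channel means/scales; alpha, beta, gamma: (K,)."""
    k = rng.choice(len(pi), p=pi)                    # mixture indicator shared by all channels
    r = sample_logistic(mu[k, 0], scales[k, 0], rng)
    g = sample_logistic(mu[k, 1] + alpha[k] * r, scales[k, 1], rng)
    b = sample_logistic(mu[k, 2] + beta[k] * r + gamma[k] * g, scales[k, 2], rng)
    # round to the nearest valid 8-bit value to obtain the observed sub-pixels
    return tuple(int(np.clip(np.round(v), 0, 255)) for v in (r, g, b))
```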
2.3 DOWNSAMPLING VERSUS DILATED CONVOLUTION
1701.05517 | 13 | The original PixelCNN only uses convolutions with small receptive field. Such convolutions are good at capturing local dependencies, but not necessarily at modeling long range structure. Although we find that capturing these short range dependencies is often enough for obtaining very good log-likelihood scores (see Table 2), explicitly encouraging the model to capture long range dependencies can improve the perceptual quality of generated images (compare Figure 3 and Figure 5). One way of allowing the network to model structure at multiple resolutions is to introduce dilated convolutions into the model, as proposed by van den Oord et al. (2016a) and Kalchbrenner et al. (2016b). Here, we instead propose to use downsampling by using convolutions of stride 2. Downsampling accomplishes the same multi-resolution processing afforded by dilated convolutions, but at a reduced computational cost: where dilated convolutions operate on input of ever increasing size (due to zero padding), downsampling reduces the input size by a factor of 4 (for stride of 2 in 2 dimensions) at every downsampling. The downside of using downsampling is that it loses information, but we can compensate for this by introducing additional short-cut connections into the network as explained in the next section. With these additional short-cut connections, we found the performance of downsampling to be the same as for dilated convolution.
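A rough back-of-the-envelope comparison (our own arithmetic, not from the paper) illustrates the cost argument: dilated convolutions keep the full 32x32 grid at every layer, while each stride-2 downsampling divides the number of spatial positions, and hence the per-layer cost, by 4.

```python
# A back-of-the-envelope cost comparison (our own arithmetic): dilated convolutions
# keep the 32x32 grid at every layer, while stride-2 downsampling divides the number
# of spatial positions, and hence the per-layer cost, by 4.
def conv_cost(h, w, channels=192, kernel=3):
    return h * w * channels * channels * kernel * kernel      # multiply-adds per layer

dilated = sum(conv_cost(32, 32) for _ in range(3))             # resolution never shrinks
downsampled = sum(conv_cost(s, s) for s in (32, 16, 8))        # 32 -> 16 -> 8 via stride 2
print(dilated / downsampled)                                   # roughly 2.3x cheaper here
```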
1701.05517 | 14 | 2.4 ADDING SHORT-CUT CONNECTIONS
For input of size 32 x 32 our suggested model consists of 6 blocks of 5 ResNet layers. In between the first and second block, as well as the second and third block, we perform subsampling by strided convolution. In between the fourth and fifth block, as well as the fifth and sixth block, we perform upsampling by transposed strided convolution. This subsampling and upsampling process loses information, and we therefore introduce additional short-cut connections into the model to recover
1701.05517 | 15 | this information from lower layers in the model. The short-cut connections run from the ResNet layers in the first block to the corresponding layers in the sixth block, and similarly between blocks two and five, and blocks three and four. This structure resembles the VAE model with top-down inference used by Kingma et al. (2016), as well as the U-net used by Ronneberger et al. (2015) for image segmentation. Figure 2 shows our model structure graphically. (Figure 2 diagram: spatial resolutions 32x32, 16x16, 8x8, 8x8, 16x16, 32x32; each box is a sequence of 6 layers; downward and downward+rightward streams; identity (skip) connections and convolutional connections.)
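The block layout and the long-range short-cuts can be summarized with the following structural sketch (names and the `combine` operation are ours; the released code differs in detail): every ResNet layer output in the first three blocks is remembered and fed to the mirror-image layer in the last three blocks.

```python
# A structural sketch of the 6-block layout with long-range short-cuts (the layer,
# downsample, upsample and combine callables are placeholders, not the released code):
# every ResNet layer output in blocks 1-3 is stored and fed to the mirror-image
# layer in blocks 4-6, so the k-th layer feeds the (K - k)-th layer.
def forward(x, blocks_down, blocks_up, downsample, upsample, combine):
    skips = []
    for i, block in enumerate(blocks_down):          # blocks 1-3
        for layer in block:
            x = layer(x)
            skips.append(x)
        if i < len(blocks_down) - 1:
            x = downsample(x)                        # stride-2 convolution
    for i, block in enumerate(blocks_up):            # blocks 4-6
        for layer in block:
            x = layer(combine(x, skips.pop()))       # short-cut from the matching layer
        if i < len(blocks_up) - 1:
            x = upsample(x)                          # transposed strided convolution
    return x
```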
1701.05517 | 16 | Figure 2: Like van den Oord et al. (2016c), our model follows a two-stream (downward, and downward+rightward) convolutional architecture with residual connections; however, there are two significant differences in connectivity. First, our architecture incorporates downsampling and upsampling, such that the inner parts of the network operate over a larger spatial scale, increasing computational efficiency. Second, we employ long-range skip-connections, such that each k-th layer provides a direct input to the (K − k)-th layer, where K is the total number of layers in the network. The network is grouped into sequences of six layers, where most sequences are separated by downsampling or upsampling.
2.5 REGULARIZATION USING DROPOUT
The PixelCNN model is powerful enough to overfit on training data. Moreover, rather than just reproducing the training images, we find that overfitted models generate images of low perceptual quality, as shown in Figure 8. One effective way of regularizing neural networks is dropout (Srivastava et al., 2014). For our model, we apply standard binary dropout on the residual path after the first convolution. This is similar to how dropout is applied in the wide residual networks of Zagoruyko & Komodakis (2016).
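A minimal sketch of such a residual layer (assuming a gated residual update and a plain ReLU where the released code's nonlinearity and masking details may differ), with binary dropout applied on the residual branch right after its first convolution:

```python
# A minimal sketch of a residual layer with dropout on the residual branch after its
# first convolution (gated update assumed; not the released implementation).
# conv1 and conv2 are placeholder callables.
import numpy as np

def gated_resnet_layer(x, conv1, conv2, drop_rate=0.5, training=True,
                       rng=np.random.default_rng(0)):
    h = np.maximum(conv1(x), 0.0)                    # first convolution + nonlinearity
    if training:
        mask = rng.random(h.shape) > drop_rate       # standard binary dropout ...
        h = h * mask / (1.0 - drop_rate)             # ... on the residual path only
    a, b = np.split(conv2(h), 2, axis=-1)            # second convolution, split for gating
    return x + a / (1.0 + np.exp(-b))                # gated residual update
```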
1701.05517 | 17 | Using dropout allows us to successfully train high capacity models while avoiding overfitting and producing high quality generations (compare Figure 8 and Figure 3).
3 EXPERIMENTS
We apply our model to modeling natural images in the CIFAR-10 data set. We achieve state-of-the-art results in terms of log-likelihood, and generate images with coherent global structure.
3.1 UNCONDITIONAL GENERATION ON CIFAR-10
We apply our PixelCNN model, with the modifications as described above, to generative modeling of the images in the CIFAR-10 data set. For the encoding part of the PixelCNN, the model uses 3 ResNet blocks consisting of 5 residual layers, with 2x2 downsampling in between. The same architecture is used for the decoding part of the model, but with upsampling instead of downsampling in between blocks. All residual layers use 192 feature maps and a dropout rate of 0.5. Table 1 shows the state-of-the-art test log-likelihood obtained by our model. Figure 3 shows some samples generated by the model.
1701.05517 | 18 | Figure 3: Samples from our PixelCNN model trained on CIFAR-10.
Model | Bits per sub-pixel
---|---
Deep Diffusion (Sohl-Dickstein et al., 2015) | 5.40
NICE (Dinh et al., 2014) | 4.48
DRAW (Gregor et al., 2015) | 4.13
Deep GMMs (van den Oord & Dambre, 2015) | 4.00
Conv DRAW (Gregor et al., 2016) | 3.58
Real NVP (Dinh et al., 2016) | 3.49
PixelCNN (van den Oord et al., 2016b) | 3.14
VAE with IAF (Kingma et al., 2016) | 3.11
Gated PixelCNN (van den Oord et al., 2016c) | 3.03
PixelRNN (van den Oord et al., 2016b) | 3.00
PixelCNN++ | 2.92
Table 1: Negative log-likelihood for generative models on CIFAR-10 expressed as bits per sub-pixel.
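As a reading aid for Table 1 (our own arithmetic, not from the paper), bits per sub-pixel is just the negative log-likelihood in nats converted to bits and divided by the 32x32x3 sub-pixels of a CIFAR-10 image:

```python
# A reading aid for Table 1 (our own arithmetic): bits per sub-pixel is the negative
# log-likelihood in nats converted to bits and divided by the 32x32x3 sub-pixels of
# a CIFAR-10 image.
import math

def bits_per_subpixel(nll_nats_per_image, h=32, w=32, c=3):
    return nll_nats_per_image / (h * w * c * math.log(2))

print(round(bits_per_subpixel(6216.0), 2))  # an NLL of ~6216 nats/image is ~2.92 bits/sub-pixel
```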
1701.05517 | 19 | 3.2 CLASS-CONDITIONAL GENERATION
Next, we follow van den Oord et al. (2016c) in making our generative model conditional on the class-label of the CIFAR-10 images. This is done by linearly projecting a one-hot encoding of the class-label into a separate class-dependent bias vector for each convolutional unit in our network. We find that making the model class-conditional makes it harder to avoid overfitting on the training data: our best test log-likelihood is 2.94 in this case. Figure 4 shows samples from the class-conditional model, with columns 1-10 corresponding to the 10 classes in CIFAR-10. The images clearly look qualitatively different across the columns, and for a number of them we can clearly identify their class label.
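A sketch of this conditioning mechanism (shapes and the projection matrix are hypothetical): the one-hot class label is linearly projected to a bias vector that is added, broadcast over space, to a layer's feature maps.

```python
# A sketch of the class conditioning (shapes and the projection matrix are hypothetical):
# the one-hot label is linearly projected to a per-layer bias that is broadcast over
# the spatial dimensions of that layer's feature maps.
import numpy as np

def class_bias(label, proj):
    """proj: (num_classes, channels) projection matrix; returns a (channels,) bias."""
    onehot = np.zeros(proj.shape[0])
    onehot[label] = 1.0
    return onehot @ proj

def conditioned_features(features, label, proj):
    """features: (H, W, channels); the class-dependent bias is added at every position."""
    return features + class_bias(label, proj)
```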
1701.05517 | 21 | Figure 4: Class-conditional samples from our PixelCNN for CIFAR-10 (left) and real CIFAR-10 images for comparison (right).
3.3 EXAMINING NETWORK DEPTH AND FIELD OF VIEW SIZE
It is hypothesized that the size of the receptive field and additionally the removal of blind spots in the receptive field are important for PixelCNN's performance (van den Oord et al., 2016b). Indeed, van den Oord et al. (2016c) specifically introduced an improvement over the previous PixelCNN model to remove the blind spot in the receptive field that was present in their earlier model. Here we present the surprising finding that in fact a PixelCNN with a rather small receptive field can attain competitive generative modelling performance on CIFAR-10 as long as it has enough capacity. Specifically, we experimented with our proposed PixelCNN++ model without downsampling blocks and reduced the number of layers to limit the receptive field size. We investigate two receptive field sizes:
1701.05517 | 22 | 11x5 and 15x8; a receptive field size of 11x5, for example, means that the conditional distribution of a pixel can depend on a rectangle above the pixel of size 11x5 as well as a 5x1 block to the left of the pixel. As we limit the size of the receptive field, the capacity of the network also drops significantly, since it contains many fewer layers than a normal PixelCNN. We call the type of PixelCNN that is simply limited in depth 'Plain' Small PixelCNN. Interestingly, this model already has better performance than the original PixelCNN in van den Oord et al. (2016b), which had a blind spot. To increase capacity, we introduced two simple variants that make Small PixelCNN more expressive without growing the receptive field:
- NIN (Network in Network): insert additional gated ResNet blocks with 1x1 convolution between regular convolution blocks that grow the receptive field. In this experiment, we inserted 3 NIN blocks between every other layer.
- Autoregressive Channel: skip connections between sets of channels via a 1x1-convolution gated ResNet block.
Both modifications increase the capacity of the network, resulting in improved log-likelihood as shown in Table 2. Although the model with small receptive field
1701.05517 | 24 | Table 2: CIFAR-10 bits per sub-pixel for Small PixelCNN
Model | Bits per sub-pixel
---|---
Field=11x5, Plain | 3.11
Field=11x5, NIN | 3.09
Field=11x5, Autoregressive Channel | 3.07
Field=15x8, Plain | 3.07
Field=15x8, NIN | 3.04
Field=15x8, Autoregressive Channel | 3.03
Figure 5: Samples from 3.03 bits/dim Small PixelCNN
3.4 ABLATION EXPERIMENTS
In order to test the effect of our modifications to PixelCNN, we run a number of ablation experiments where for each experiment we remove a specific modification.
3.4.1 SOFTMAX LIKELIHOOD INSTEAD OF DISCRETIZED LOGISTIC MIXTURE
In order to test the contribution of our logistic mixture likelihood, we re-run our CIFAR-10 experiment with the 256-way softmax as the output distribution instead. We allow the 256 logits for
1701.05517 | 25 | each sub-pixel to linearly depend on the observed value of previous sub-pixels, with coefficients that are given as output by the model. Our model with softmax likelihood is thus strictly more flexible than our model with logistic mixture likelihood, although the parameterization is quite different from that used by van den Oord et al. (2016c). The model now outputs 1536 numbers per pixel, describing the logits on the 256 potential values for each sub-pixel, as well as the coefficients for the dependencies between the sub-pixels. Figure 6 shows that this model trains more slowly than our original model. In addition, the running time per epoch is significantly longer for our TensorFlow implementation. For our architecture, the logistic mixture model thus clearly performs better. Since our architecture differs from that of van den Oord et al. (2016c) in other ways as well, we cannot say whether this would also apply to their model.
3.4.2 CONTINUOUS MIXTURE LIKELIHOOD INSTEAD OF DISCRETIZATION
Instead of directly modeling
1701.05517 | 26 | the discrete pixel values in an image, it is also possible to de-quantize them by adding noise from the standard uniform distribution, as used by Uria et al. (2013) and others, and modeling the data as being continuous. The resulting model can be interpreted as a variational autoencoder (Kingma & Welling, 2013; Rezende et al., 2014), where the dequantized pixels z form a latent code whose prior distribution is captured by our model. Since the original discrete pixels x can be perfectly reconstructed from z under this model, the usual reconstruction term vanishes from
1701.05517 | 28 | the variational lower bound. The entropy of the standard uniform distribution is zero, so the term that remains is the log likelihood of the dequantized pixels, which thus gives us a variational lower bound on the log likelihood of our original data.
Figure 6: Training curves for our model with logistic mixture likelihood versus our model with softmax likelihood.
We re-run our model for CIFAR-10 using the same model settings as those used for the 2.92 bits per dimension result in Table 1, but now we remove the discretization in our likelihood model and instead add standard uniform noise to the image data. The resulting model is a continuous mixture model in the same class as that used by Theis et al. (2012); Uria et al. (2013); Theis & Bethge (2015) and others. After optimization, this model gives a variational lower bound on the data log likelihood of 3.11 bits per dimension.
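For completeness, a small sketch (our own, assuming a generic continuous density model) of how this de-quantization baseline is evaluated: uniform noise is added to the integer pixels, and because the noise has zero entropy the continuous log-density directly gives the variational lower bound, reported in bits per sub-pixel.

```python
# A sketch (our own) of the de-quantization baseline: add standard-uniform noise to
# the integer pixels and report the continuous log-density as a variational lower
# bound, in bits per sub-pixel.  `log_density` is a placeholder for the trained model.
import numpy as np

def dequantized_bound_bits(log_density, x_int, rng=np.random.default_rng(0)):
    z = x_int.astype(np.float64) + rng.uniform(0.0, 1.0, size=x_int.shape)  # dequantize
    nats = log_density(z)          # uniform noise has zero entropy, so this is the whole bound
    return -nats / (np.log(2.0) * x_int.size)
```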
1701.05517 | 29 | The difference with the reported 2.92 bits per dimension shows the benefit of using discretization in the likelihood model.
3.4.3 NO SHORT-CUT CONNECTIONS
Next, we test the importance of the additional parallel short-cut connections in our model, indicated by the dotted lines in Figure 2. We re-run our unconditional CIFAR-10 experiment, but remove the short-cut connections from the model. As seen in Figure 7, the model fails to train without these connections. The reason for needing these extra short-cuts is likely to be our use of sub-sampling, which discards information that otherwise cannot easily be recovered.
Figure 7: Training curves for our model with and without short-cut connections.
3.4.4 NO DROPOUT
We re-run our CIFAR-10 model without dropout regularization. The log-likelihood we achieve on the training set is below 2.0 bits per sub-pixel, but the final test log-likelihood is above 6.0 bits per
1701.05517 | 30 | sub-pixel. At no point during training does the unregularized model get a test-set log-likelihood below 3.0 bits per sub-pixel. Contrary to what we might naively expect, the perceptual quality of the images generated by the overfitted model is not great, as shown in Figure 8.
Figure 8: Samples from an intentionally overfitted PixelCNN model trained on CIFAR-10, with train log-likelihood of 2.0 bits per dimension: overfitting does not result in great perceptual quality.
4 CONCLUSION
We presented PixelCNN++, a modification of PixelCNN using a discretized logistic mixture likelihood on the pixels, among other modifications. We demonstrated the usefulness of these modifications with state-of-the-art results on CIFAR-10. Our code is made available at https://github.com/openai/pixel-cnn and can easily be adapted for use on other data sets.
REFERENCES
Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.
Justin Domke, Alap Karapurkar, and Yiannis Aloimonos. Who killed the directed model? In Computer Vision and Pattern Recognition (CVPR 2008), pp. 1-8. IEEE, 2008.
Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. arXiv preprint arXiv:1604.08772, 2016.
1701.05517 | 32 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016a.
Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016b.
Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. Proceedings of the 2nd International Conference on Learning Representations, 2013.
1701.05517 | 33 | Under review as a conference paper at ICLR 2017
Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving variational inference with inverse autoregressive flow. In Advances in Neural Informa- tion Processing Systems, 2016.
Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approxi- mate inference in deep generative models. In JCML, pp. 1278-1286, 2014.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomed- ical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. Springer, 2015.
Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsuper- vised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, 2015. | 1701.05517#33 | PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications | PixelCNNs are a recently proposed class of powerful generative models with
tractable likelihood. Here we discuss our implementation of PixelCNNs which we
make available at https://github.com/openai/pixel-cnn. Our implementation
contains a number of modifications to the original model that both simplify its
structure and improve its performance. 1) We use a discretized logistic mixture
likelihood on the pixels, rather than a 256-way softmax, which we find to speed
up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels,
simplifying the model structure. 3) We use downsampling to efficiently capture
structure at multiple resolutions. 4) We introduce additional short-cut
connections to further speed up optimization. 5) We regularize the model using
dropout. Finally, we present state-of-the-art log likelihood results on
CIFAR-10 to demonstrate the usefulness of these modifications. | http://arxiv.org/pdf/1701.05517 | Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma | cs.LG, stat.ML | null | null | cs.LG | 20170119 | 20170119 | [] |
1701.05517 | 34 | Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.
Lucas Theis and Matthias Bethge. Generative image modeling using spatial Istms. In Advances in Neural Information Processing Systems, pp. 1927-1935, 2015.
Lucas Theis, Reshad Hosseini, and Matthias Bethge. Mixtures of conditional gaussian scale mix- tures applied to multiscale image representations. PloS one, 7(7):e39857, 2012.
Benigno Uria, Iain Murray, and Hugo Larochelle. Rnade: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, pp. 2175-2183, 2013.
Aaron van den Oord and Joni Dambre. Locally-connected transformations for deep gmms. In International Conference on Machine Learning (ICML) : Deep learning Workshop, 2015. | 1701.05517#34 | PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications | PixelCNNs are a recently proposed class of powerful generative models with
tractable likelihood. Here we discuss our implementation of PixelCNNs which we
make available at https://github.com/openai/pixel-cnn. Our implementation
contains a number of modifications to the original model that both simplify its
structure and improve its performance. 1) We use a discretized logistic mixture
likelihood on the pixels, rather than a 256-way softmax, which we find to speed
up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels,
simplifying the model structure. 3) We use downsampling to efficiently capture
structure at multiple resolutions. 4) We introduce additional short-cut
connections to further speed up optimization. 5) We regularize the model using
dropout. Finally, we present state-of-the-art log likelihood results on
CIFAR-10 to demonstrate the usefulness of these modifications. | http://arxiv.org/pdf/1701.05517 | Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma | cs.LG, stat.ML | null | null | cs.LG | 20170119 | 20170119 | [] |
1701.05517 | 35 | Aaron van den Oord and Joni Dambre. Locally-connected transformations for deep gmms. In International Conference on Machine Learning (ICML) : Deep learning Workshop, 2015.
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv: 1609.03499, 2016a.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016b.
Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Ko- ray Kavukcuoglu. Conditional image generation with pixelenn decoders. arXiv preprint arXiv: 1606.05328, 2016c.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv: 1605.07146, 2016.
10 | 1701.05517#35 | PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications | PixelCNNs are a recently proposed class of powerful generative models with
tractable likelihood. Here we discuss our implementation of PixelCNNs which we
make available at https://github.com/openai/pixel-cnn. Our implementation
contains a number of modifications to the original model that both simplify its
structure and improve its performance. 1) We use a discretized logistic mixture
likelihood on the pixels, rather than a 256-way softmax, which we find to speed
up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels,
simplifying the model structure. 3) We use downsampling to efficiently capture
structure at multiple resolutions. 4) We introduce additional short-cut
connections to further speed up optimization. 5) We regularize the model using
dropout. Finally, we present state-of-the-art log likelihood results on
CIFAR-10 to demonstrate the usefulness of these modifications. | http://arxiv.org/pdf/1701.05517 | Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma | cs.LG, stat.ML | null | null | cs.LG | 20170119 | 20170119 | [] |
# Demystifying Neural Style Transfer

Yanghao Li, Naiyan Wang, Jiaying Liu, Xiaodi Hou
[email protected] [email protected] [email protected] [email protected]
# Abstract
Neural Style Transfer [Gatys et al., 2016] has recently demonstrated very exciting results which catch eyes in both academia and industry. Despite the amazing results, the principle of neural style transfer, especially why the Gram matrices could represent style, remains unclear. In this paper, we propose a novel interpretation of neural style transfer by treating it as a domain adaptation problem. Specifically, we theoretically show that matching the Gram matrices of feature maps is equivalent to minimizing the Maximum Mean Discrepancy (MMD) with the second order polynomial kernel. Thus, we argue that the essence of neural style transfer is to match the feature distributions between the style images and the generated images. To further support our standpoint, we experiment with several other distribution alignment methods, and achieve appealing results. We believe this novel interpretation connects these two important research fields, and could enlighten future research.
# 1 Introduction

Transferring the style from one image to another image is an interesting yet difficult problem. There have been many efforts to develop efficient methods for automatic style transfer [Hertzmann et al., 2001; Efros and Freeman, 2001; Efros and Leung, 1999; Shih et al., 2014; Kwatra et al., 2005]. Recently, Gatys et al. proposed a seminal work [Gatys et al., 2016]: it captures the style of artistic images and transfers it to other images using Convolutional Neural Networks (CNN). This work formulated the problem as finding an image that matches both the content and style statistics based on the neural activations of each layer in a CNN. It achieved impressive results, and several follow-up works improved upon this innovative approach [Johnson et al., 2016; Ulyanov et al., 2016; Ruder et al., 2016; Ledig et al., 2016]. Despite the fact that this work has drawn lots of attention, the fundamental element of style representation, the Gram matrix in [Gatys et al., 2016], is not fully explained. The reason why the Gram matrix can represent artistic style still remains a mystery.

In this paper, we propose a novel interpretation of neural style transfer by casting it as a special domain adaptation [Beijbom, 2012; Patel et al., 2015] problem. We theoretically prove that matching the Gram matrices of the neural activations can be seen as minimizing a specific Maximum Mean Discrepancy (MMD) [Gretton et al., 2012a]. This reveals that neural style transfer is intrinsically a process of distribution alignment of the neural activations between images. Based on this illuminating analysis, we also experiment with other distribution alignment methods, including MMD with different kernels and a simplified moment matching method. These methods achieve diverse but all reasonable style transfer results. Specifically, a transfer method using MMD with a linear kernel achieves comparable visual results yet with a lower complexity. Thus, the second order interaction in the Gram matrix is not a must for style transfer. Our interpretation provides a promising direction to design style transfer methods with different visual results. To summarize, our contributions are as follows:
1. First, we demonstrate that matching Gram matrices in neural style transfer [Gatys et al., 2016] can be reformulated as minimizing MMD with the second order polynomial kernel.
2. Second, we extend the original neural style transfer with different distribution alignment methods based on our novel interpretation.

# 2 Related Work

In this section, we briefly review some closely related works and the key concept MMD used in our interpretation.

Style Transfer. Style transfer is an active topic in both academia and industry. Traditional methods mainly focus on non-parametric patch-based texture synthesis and transfer, which resample pixels or patches from the original source texture images [Hertzmann et al., 2001; Efros and Freeman, 2001; Efros and Leung, 1999; Liang et al., 2001]. Different methods were proposed to improve the quality of the patch-based synthesis and constrain the structure of the target image. For example, the image quilting algorithm based on dynamic programming was proposed to find optimal texture boundaries in [Efros and Freeman, 2001]. A Markov Random Field (MRF) was exploited to preserve global texture structures in [Frigo et al., 2016]. However, these non-parametric methods suffer from a fundamental limitation: they only use the low-level features of the images for transfer.

∗Corresponding author
Recently, neural style transfer [Gatys et al., 2016] has demonstrated remarkable results for image stylization. It fully takes advantage of the powerful representation of Deep Convolutional Neural Networks (CNN). This method uses Gram matrices of the neural activations from different layers of a CNN to represent the artistic style of an image. It then uses an iterative optimization method to generate a new image from white noise by matching the neural activations with the content image and the Gram matrices with the style image. This novel technique has attracted many follow-up works for different aspects of improvement and application. To speed up the iterative optimization process in [Gatys et al., 2016], Johnson et al. [Johnson et al., 2016] and Ulyanov et al. [Ulyanov et al., 2016] trained a feed-forward generative network for fast neural style transfer. To improve the transfer results in [Gatys et al., 2016], different complementary schemes have been proposed, including spatial constraints [Selim et al., 2016], semantic
guidance [Champandard, 2016] and a Markov Random Field (MRF) prior [Li and Wand, 2016]. There are also extension works applying neural style transfer to other applications. Ruder et al. [Ruder et al., 2016] incorporated temporal consistency terms by penalizing deviations between frames for video style transfer. Selim et al. [Selim et al., 2016] proposed novel spatial constraints through a gain map for portrait painting transfer. Although these methods further improve over the original neural style transfer, they all ignore the fundamental question in neural style transfer: why could the Gram matrices represent the artistic style? This vagueness of the understanding limits further research on neural style transfer.
Domain Adaptation. Domain adaptation belongs to the area of transfer learning [Pan and Yang, 2010]. It aims to transfer a model that is learned on the source domain to the unlabeled target domain. The key component of domain adaptation is to measure and minimize the difference between the source and target distributions. The most common discrepancy metric is Maximum Mean Discrepancy (MMD) [Gretton et al., 2012a], which measures the difference of sample means in a Reproducing Kernel Hilbert Space. It is a popular choice in domain adaptation works [Tzeng et al., 2014; Long et al., 2015; Long et al., 2016]. Besides MMD, Sun et al. [Sun et al., 2016] aligned the second order statistics by whitening the data in the source domain and then re-correlating it to the target domain. In [Li et al., 2017], Li et al. proposed a parameter-free deep adaptation method by simply modulating the statistics in all Batch Normalization (BN) layers.
Maximum Mean Discrepancy. Suppose there are two sets of samples X = {x_i}_{i=1}^{n} and Y = {y_j}_{j=1}^{m}, where x_i and y_j are generated from distributions p and q, respectively. Maximum Mean Discrepancy (MMD) is a popular test statistic for the two-sample testing problem, where acceptance or rejection decisions are made for a null hypothesis p = q [Gretton et al., 2012a]. Since the population MMD vanishes if and only if p = q, the MMD statistic can be used to measure the difference between two distributions. Specifically, MMD is defined by the difference between the mean embeddings of the two sets of samples. Formally, the squared MMD is defined as:

\[
\mathrm{MMD}^2[X, Y] = \Big\| \mathrm{E}_x[\phi(x)] - \mathrm{E}_y[\phi(y)] \Big\|^2
= \Big\| \frac{1}{n}\sum_{i=1}^{n}\phi(x_i) - \frac{1}{m}\sum_{j=1}^{m}\phi(y_j) \Big\|^2
= \frac{1}{n^2}\sum_{i=1}^{n}\sum_{i'=1}^{n}\phi(x_i)^\top\phi(x_{i'})
+ \frac{1}{m^2}\sum_{j=1}^{m}\sum_{j'=1}^{m}\phi(y_j)^\top\phi(y_{j'})
- \frac{2}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}\phi(x_i)^\top\phi(y_j), \quad (1)
\]
where φ(·) is the explicit feature mapping function of MMD. Applying the associated kernel function k(x, y) = ⟨φ(x), φ(y)⟩, Eq. 1 can be expressed in kernel form:

\[
\mathrm{MMD}^2[X, Y] = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{i'=1}^{n} k(x_i, x_{i'})
+ \frac{1}{m^2}\sum_{j=1}^{m}\sum_{j'=1}^{m} k(y_j, y_{j'})
- \frac{2}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m} k(x_i, y_j). \quad (2)
\]

The kernel function k(·, ·) implicitly defines a mapping to a higher dimensional feature space.
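To make Eq. 2 concrete, the following is a minimal NumPy sketch of the (biased) squared-MMD estimator. The function and variable names are ours for illustration (not from any released implementation), and X and Y are assumed to be arrays of shape (n, d) and (m, d) whose rows are samples.

```python
import numpy as np

def mmd2(X, Y, kernel):
    """Biased estimator of squared MMD (Eq. 2):
    mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)."""
    Kxx = kernel(X, X)   # (n, n) pairwise kernel values within X
    Kyy = kernel(Y, Y)   # (m, m) pairwise kernel values within Y
    Kxy = kernel(X, Y)   # (n, m) pairwise kernel values between X and Y
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def poly2_kernel(A, B):
    """Second order polynomial kernel k(x, y) = (x^T y)^2, evaluated pairwise."""
    return (A @ B.T) ** 2
```

For example, `mmd2(X, Y, poly2_kernel)` is the statistic that Section 3 relates to the Gram-matrix style loss.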
# 3 Understanding Neural Style Transfer

In this section, we first theoretically demonstrate that matching Gram matrices is equivalent to minimizing a specific form of MMD. Then, based on this interpretation, we extend the original neural style transfer with different distribution alignment methods.
Before explaining our observation, we first briefly review the original neural style transfer approach [Gatys et al., 2016]. The goal of style transfer is to generate a stylized image x* given a content image x_c and a reference style image x_s. The feature maps of x*, x_c and x_s in layer l of a CNN are denoted by F^l ∈ R^{N_l×M_l}, P^l ∈ R^{N_l×M_l} and S^l ∈ R^{N_l×M_l} respectively, where N_l is the number of feature maps in layer l and M_l is the height times the width of the feature map.

In [Gatys et al., 2016], neural style transfer iteratively generates x* by optimizing a content loss and a style loss:

\[ \mathcal{L} = \alpha \mathcal{L}_{\mathrm{content}} + \beta \mathcal{L}_{\mathrm{style}}, \quad (3) \]

where α and β are the weights of the content and style losses. The content loss is defined by the squared error between the feature maps of a specific layer l for x* and x_c:

\[ \mathcal{L}_{\mathrm{content}} = \frac{1}{2} \sum_{i=1}^{N_l} \sum_{j=1}^{M_l} \big(F^l_{ij} - P^l_{ij}\big)^2, \quad (4) \]

and the style loss is the sum of per-layer style losses L^l_style over different layers:

\[ \mathcal{L}_{\mathrm{style}} = \sum_{l} w_l \, \mathcal{L}^l_{\mathrm{style}}, \quad (5) \]
where w_l is the weight of the loss in layer l, and L^l_style is defined by the squared error between the feature correlations expressed by the Gram matrices of x* and x_s:

\[ \mathcal{L}^l_{\mathrm{style}} = \frac{1}{4 N_l^2 M_l^2} \sum_{i=1}^{N_l} \sum_{j=1}^{N_l} \big(G^l_{ij} - A^l_{ij}\big)^2, \quad (6) \]

where the Gram matrix G^l ∈ R^{N_l×N_l} is the inner product between the vectorized feature maps of x* in layer l:

\[ G^l_{ij} = \sum_{k=1}^{M_l} F^l_{ik} F^l_{jk}, \quad (7) \]

and similarly A^l is the Gram matrix corresponding to S^l.
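As a reference point for the reformulation below, here is a small NumPy sketch of Eq. 6 and Eq. 7. It assumes the layer's feature maps have already been reshaped to arrays of shape (N_l, M_l); names are illustrative only.

```python
import numpy as np

def gram(F):
    """Gram matrix of Eq. 7: G = F F^T for a feature map F of shape (N_l, M_l)."""
    return F @ F.T

def gram_style_loss(F, S):
    """Per-layer style loss of Eq. 6 between generated features F and style features S."""
    N, M = F.shape
    G, A = gram(F), gram(S)
    return ((G - A) ** 2).sum() / (4.0 * N**2 * M**2)
```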
# 3.1 Reformulation of the Style Loss

In this section, we reformulate the style loss L_style of Eq. 6. By expanding the Gram matrix in Eq. 6, we obtain the formulation of Eq. 8, where f^l_{·k} is the k-th column of F^l and s^l_{·k} is the k-th column of S^l:

\[
\mathcal{L}^l_{\mathrm{style}}
= \frac{1}{4 N_l^2 M_l^2} \sum_{i=1}^{N_l} \sum_{j=1}^{N_l}
\Big( \sum_{k=1}^{M_l} F^l_{ik} F^l_{jk} - \sum_{k=1}^{M_l} S^l_{ik} S^l_{jk} \Big)^2
= \frac{1}{4 N_l^2 M_l^2} \sum_{k_1=1}^{M_l} \sum_{k_2=1}^{M_l}
\Big( \big(f^{l\top}_{\cdot k_1} f^l_{\cdot k_2}\big)^2
+ \big(s^{l\top}_{\cdot k_1} s^l_{\cdot k_2}\big)^2
- 2 \big(f^{l\top}_{\cdot k_1} s^l_{\cdot k_2}\big)^2 \Big). \quad (8)
\]

By using the second order polynomial kernel k(x, y) = (x^T y)^2, Eq. 8 can be represented as:
\[
\mathcal{L}^l_{\mathrm{style}}
= \frac{1}{4 N_l^2 M_l^2} \sum_{k_1=1}^{M_l} \sum_{k_2=1}^{M_l}
\Big( k\big(f^l_{\cdot k_1}, f^l_{\cdot k_2}\big) + k\big(s^l_{\cdot k_1}, s^l_{\cdot k_2}\big)
- 2\, k\big(f^l_{\cdot k_1}, s^l_{\cdot k_2}\big) \Big)
= \frac{1}{4 N_l^2} \,\mathrm{MMD}^2\big[\mathcal{F}^l, \mathcal{S}^l\big], \quad (9)
\]

where the set F^l is the feature set of x*, in which each sample is a column of F^l, and S^l is defined analogously for the style image x_s. In this way, the activations at each position of the feature map are treated as individual samples. Consequently, the style loss ignores the positions of the features, which is desired for style transfer. (A small numerical sanity check of this equivalence is sketched after the list below.) In conclusion, the above reformulation suggests two important findings:

1. The style of an image can be intrinsically represented by feature distributions in different layers of a CNN.

2. The style transfer can be seen as a distribution alignment process from the content image to the style image.
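The equivalence in Eq. 9 can be checked numerically. Below is a small sketch with toy random feature maps (sizes chosen arbitrarily) verifying that the Gram-matrix loss equals the squared MMD with the second order polynomial kernel, up to the 1/(4 N_l^2) factor.

```python
import numpy as np

F = np.random.randn(16, 64)   # toy "generated" feature map: N_l=16 channels, M_l=64 positions
S = np.random.randn(16, 64)   # toy "style" feature map of the same shape

N, M = F.shape
gram_loss = ((F @ F.T - S @ S.T) ** 2).sum() / (4.0 * N**2 * M**2)   # Eq. 6

# Squared MMD with k(x, y) = (x^T y)^2, treating the M columns as samples (Eq. 2/9).
Kff = (F.T @ F) ** 2
Kss = (S.T @ S) ** 2
Kfs = (F.T @ S) ** 2
mmd2_poly = Kff.mean() + Kss.mean() - 2.0 * Kfs.mean()

assert np.allclose(gram_loss, mmd2_poly / (4.0 * N**2))   # Eq. 9 holds numerically
```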
# 3.2 Different Adaptation Methods for Neural Style Transfer
Our interpretation reveals that neural style transfer can be seen as a problem of distribution alignment, which is also at the core of domain adaptation. If we consider the style of one image in a certain layer of a CNN as a "domain", style transfer can also be seen as a special domain adaptation problem. The specialty of this problem lies in that we treat the feature at each position of the feature map as one individual data sample, instead of treating each image as one data sample as in the traditional domain adaptation setting. (E.g., the feature map of the last convolutional layer in the VGG-19 model is of size 14 × 14, so we have 196 samples in this "domain" in total.)
Inspired by the studies of domain adaptation, we extend neural style transfer with different adaptation methods in this subsection.
MMD with Different Kernel Functions. As shown in Eq. 9, matching Gram matrices in neural style transfer can be seen as an MMD process with the second order polynomial kernel. It is therefore natural to apply other kernel functions for MMD in style transfer. First, if using the MMD statistic to measure the style discrepancy, the style loss can be defined as:
\[
\mathcal{L}^l_{\mathrm{style}} = \frac{1}{Z^l_k} \,\mathrm{MMD}^2\big[\mathcal{F}^l, \mathcal{S}^l\big], \quad (10)
\]

with MMD² computed as in Eq. 2 over the columns of F^l and S^l,
where Z^l_k is a normalization term corresponding to the scale of the feature map in layer l and the choice of kernel function. Theoretically, different kernel functions implicitly map features to different higher dimensional spaces, so we believe that different kernel functions capture different aspects of a style. We adopt the following three popular kernel functions in our experiments: (1) Linear kernel: k(x, y) = x^T y; (2) Polynomial kernel: k(x, y) = (x^T y + c)^d; (3) Gaussian kernel: k(x, y) = exp(−||x − y||² / (2σ²)). For the polynomial kernel, we only use the version with d = 2. Note that matching Gram matrices is equivalent to the polynomial kernel with c = 0 and d = 2. For the Gaussian kernel, we adopt the unbiased estimation of MMD [Gretton et al., 2012b], which samples M_l pairs in Eq. 10 and thus can be computed with linear complexity.
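The sketch below illustrates the three kernel choices and the resulting style loss of Eq. 10, treating columns of the feature maps as samples. It is only a sketch: the normalization Z is passed in as a constant because its exact value depends on the layer and kernel, and the naive O(M_l²) Gaussian kernel here ignores the linear-complexity unbiased estimator mentioned above.

```python
import numpy as np

def linear_kernel(A, B):
    return A @ B.T

def poly_kernel(A, B, c=0.0, d=2):
    return (A @ B.T + c) ** d

def gaussian_kernel(A, B, sigma2=1.0):
    # Pairwise squared distances, then the RBF kernel (naive quadratic version).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma2))

def mmd_style_loss(F, S, kernel, Z=1.0):
    """Eq. 10: squared MMD between the column sets of F and S (shape (N_l, M_l)),
    divided by a layer/kernel dependent normalization Z (assumed given)."""
    X, Y = F.T, S.T   # M_l samples of dimension N_l each
    val = kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()
    return val / Z
```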
BN Statistics Matching. In [Li et al., 2017], the authors found that the statistics (i.e. mean and variance) of Batch Normalization (BN) layers contain the traits of different domains. Inspired by this observation, they utilized separate BN statistics for different domains; this simple operation aligns the different domain distributions effectively. Viewing style transfer as a special domain adaptation problem, we believe that the BN statistics of a certain layer can also represent the style. Thus, we construct another style loss by aligning the BN statistics (mean and standard deviation) of the feature maps of the two images:

\[
\mathcal{L}^l_{\mathrm{style}} = \frac{1}{N_l} \sum_{i=1}^{N_l}
\Big( \big(\mu^i_{F^l} - \mu^i_{S^l}\big)^2 + \big(\sigma^i_{F^l} - \sigma^i_{S^l}\big)^2 \Big), \quad (11)
\]

where μ^i_{F^l} and σ^i_{F^l} are the mean and standard deviation of the i-th feature channel among all positions of the feature map in layer l for image x*:
\[
\mu^i_{F^l} = \frac{1}{M_l} \sum_{j=1}^{M_l} F^l_{ij}, \qquad
\big(\sigma^i_{F^l}\big)^2 = \frac{1}{M_l} \sum_{j=1}^{M_l} \big(F^l_{ij} - \mu^i_{F^l}\big)^2, \quad (12)
\]

and μ^i_{S^l}, σ^i_{S^l} correspond to the style image x_s. The aforementioned style loss functions are all differentiable, and thus the style matching problem can be solved by back propagation iteratively.
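For completeness, a minimal sketch of the BN-statistics loss of Eq. 11–12, again assuming (N_l, M_l)-shaped feature maps; the function name is illustrative.

```python
import numpy as np

def bn_style_loss(F, S):
    """Eq. 11: match per-channel mean and standard deviation of F and S,
    both of shape (N_l, M_l); positions are treated as samples."""
    mu_f, mu_s = F.mean(axis=1), S.mean(axis=1)
    sd_f, sd_s = F.std(axis=1), S.std(axis=1)
    N = F.shape[0]
    return ((mu_f - mu_s) ** 2 + (sd_f - sd_s) ** 2).sum() / N
```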
# 4 Results
In this section, we briefly introduce the implementation details and present results from our extended neural style transfer methods. Furthermore, we also show the results of fusing different neural style transfer methods, which combine different style losses. In the following, we refer to the four extended style transfer methods introduced in Sec. 3.2 as linear, poly, Gaussian and BN, respectively. The images in the experiments are collected from the public implementations of neural style transfer.1,2,3
Implementation Details. In the implementation, we use the VGG-19 network [Simonyan and Zisserman, 2015], following the choice in [Gatys et al., 2016]. We also adopt the relu4_2 layer for the content loss, and relu1_1, relu2_1, relu3_1, relu4_1, relu5_1 for the style loss. The default weight factor w_l is set to 1.0 if not specified otherwise. The target image x* is initialized randomly and optimized iteratively until the relative change between successive iterations is under 0.5%. The maximum number of iterations is set to 1000. For the method with Gaussian kernel MMD, the kernel bandwidth σ² is fixed as the mean of the squared l2 distances of the sampled pairs, since it does not affect the visual results much. Our implementation is based on the MXNet [Chen et al., 2016] implementation,1 which reproduces the results of the original neural style transfer [Gatys et al., 2016].

1 https://github.com/dmlc/mxnet/tree/master/example/neural-style
Since the scales of the gradients of the style loss differ between methods, and the weights α and β in Eq. 3 affect the results of style transfer, we fix some factors to make a fair comparison. Specifically, we set α = 1 because the content losses are the same among the different methods. Then, for each method, we first manually select a proper β' such that the gradients on x* from the style loss are of the same order of magnitude as those from the content loss. Thus, we can manipulate a balance factor γ (β = γβ') to make a trade-off between content and style matching.
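To show how the pieces fit together, here is a hedged sketch of the overall optimization loop in PyTorch-style code. The paper's implementation is in MXNet; the optimizer, learning rate, step count, and the `features` extractor (assumed to return a dict of activations reshaped to (N_l, M_l), keyed by layer names such as "relu4_2") are assumptions made only for illustration.

```python
import torch

def gram_style_loss(F, S):
    # Torch version of Eq. 6; F and S have shape (N_l, M_l). Any of the other
    # per-layer losses (kernel MMD, BN statistics) could be substituted here.
    N, M = F.shape
    return ((F @ F.t() - S @ S.t()) ** 2).sum() / (4.0 * N**2 * M**2)

def transfer(x, features, content_feats, style_feats, alpha=1.0, beta=1e3, steps=1000):
    """x: image tensor (leaf, requires_grad=True); features(x) -> dict of activations."""
    opt = torch.optim.Adam([x], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        feats = features(x)
        l_content = 0.5 * ((feats["relu4_2"] - content_feats["relu4_2"]) ** 2).sum()  # Eq. 4
        l_style = sum(gram_style_loss(feats[k], style_feats[k]) for k in style_feats)  # Eq. 5
        (alpha * l_content + beta * l_style).backward()   # Eq. 3
        opt.step()
    return x.detach()
```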
# 4.1 Different Style Representations
Figure 1: Style reconstructions of the different methods in five layers, respectively. Each row corresponds to one method, and the reconstruction results are obtained by only using the style loss L_style with α = 0. We also reconstruct different style representations from different subsets of layers of the VGG network. For example, "layer 3" contains the style loss of the first 3 layers (w1 = w2 = w3 = 1.0 and w4 = w5 = 0.0).

2 https://github.com/jcjohnson/neural-style
3 https://github.com/jcjohnson/fast-neural-style
Figure 2: Results of the four methods (linear, poly, Gaussian and BN) with different balance factors γ. Larger γ means more emphasis on the style loss.
We first visualize the style reconstruction results of the different methods using only the style loss, in Fig. 1. Moreover, Fig. 1 also compares the style representations of different layers. On one hand, for a specific method (one row), the results show that different layers capture different levels of style: the textures in the top layers usually have larger granularity than those in the bottom layers. This is reasonable because each neuron in the top layers has a larger receptive field and thus the ability to capture more global textures. On the other hand, for a specific layer, Fig. 1 also demonstrates that the style captured by different methods differs. For example, in the top layers, the textures captured by MMD with a linear kernel are composed of thick strokes, while the textures captured by MMD with a polynomial kernel are more fine grained.
# 4.2 Result Comparisons
Effect of the Balance Factor. We first explore the effect of the balance factor between the content loss and the style loss by varying the weight γ. Fig. 2 shows the results of the four transfer methods with various γ from 0.1 to 10.0. As intended, the global color information in the style image is successfully transferred to the content image, and the results with smaller γ preserve more content details, as shown in Fig. 2(b) and Fig. 2(c). When γ becomes larger, more stylized textures are incorporated into the results. For example, Fig. 2(e) and Fig. 2(f) have much more similar illumination and textures to the style image, while Fig. 2(d) shows a balanced result between content and style. Thus, users can make a trade-off between content and style by varying γ.

(a) Content / Style (b) linear (c) poly (d) Gaussian (e) BN
Figure 4: Results of two fusion methods: BN + poly and linear + Gaussian. The top two rows are the results of the first fusion method and the bottom two rows correspond to the second one. Each column shows the results for a particular balance weight between the two methods. γ is set to 5.0.

Comparisons of Different Transfer Methods. Fig. 3 presents the results of various pairs of content and style images with the different transfer methods.4 Similar to matching Gram matrices, which is equivalent to the poly method, the other three methods can also transfer satisfying styles from the specified style images. This empirically supports our interpretation of neural style transfer: style transfer is essentially a domain adaptation problem, which aligns the feature distributions. In particular, when the weight on the style loss becomes higher (namely, larger γ), the differences among the four methods become larger, which indicates that these methods implicitly capture different aspects of style, as also shown in Fig. 1. Since these methods have their own unique properties, they provide more choices for users to stylize the content image. For example, linear achieves comparable results with the other methods, yet requires lower computation complexity.
Fusion of Different Neural Style Transfer Methods. Since we have several different neural style transfer methods, we propose to combine them to produce new transfer results. Fig. 4 demonstrates the fusion results of two combinations (linear + Gaussian and poly + BN). Each row presents the results with a different balance between the two methods. For example, Fig. 4(b) in the first two rows emphasizes more on BN, and Fig. 4(f) emphasizes more on poly. The results in the middle columns show the interpolation between the two methods. We can see that the styles of the different methods are blended well using our method.
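A fused style loss can be written as a convex combination of two of the per-layer losses defined above; `loss_a`, `loss_b` and `w` below are illustrative names, and the weight w plays the role of the balance between the two methods in Fig. 4.

```python
def fused_style_loss(F, S, loss_a, loss_b, w=0.5):
    """Blend two per-layer style losses (e.g. bn_style_loss and a kernel MMD loss)
    with a balance weight w in [0, 1]."""
    return w * loss_a(F, S) + (1.0 - w) * loss_b(F, S)
```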
# 5 Conclusion
Despite the great success of neural style transfer, the rationale behind it was far from clear. The vital "trick" for style transfer is to match the Gram matrices of the features in a layer of a CNN. Nevertheless, subsequent work on neural style transfer has directly built upon this trick without investigating it in depth. In this paper, we present a timely explanation and interpretation of it. First, we theoretically prove that matching the Gram matrices is equivalent to a specific Maximum Mean Discrepancy (MMD) process. Thus, the style information in neural style transfer is intrinsically represented by the distributions of activations in a CNN, and style transfer can be achieved by distribution alignment. Moreover, we exploit several other distribution alignment methods and find that they all yield promising transfer results. Thus, we justify the claim that neural style transfer is essentially a special domain adaptation problem, both theoretically and empirically. We believe this interpretation provides a new lens through which to re-examine the style transfer problem, and that it will inspire more exciting work in this research area.
4 More results can be found at | 1701.01036#26 | Demystifying Neural Style Transfer |
1701.01036 | 27 | 4 More results can be found at
http://www.icst.pku.edu.cn/struct/Projects/mmdstyle/result-1000/show-full.html
Acknowledgement This work was supported by the National Natural Science Foundation of China under Contract 61472011.
# References
[Beijbom, 2012] Oscar Beijbom. Domain adaptations for computer vision applications. arXiv preprint arXiv:1211.4860, 2012.
[Champandard, 2016] Alex J Champandard. Semantic style transfer and turning two-bit doodles into ï¬ne artworks. arXiv preprint arXiv:1603.01768, 2016.
[Chen et al., 2016] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A ï¬exible and efï¬cient machine learning library for heterogeneous distributed systems. NIPS Workshop on Machine Learn- ing Systems, 2016.
[Efros and Freeman, 2001] Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In SIGGRAPH, 2001. | 1701.01036#27 | Demystifying Neural Style Transfer | Neural Style Transfer has recently demonstrated very exciting results which
1701.01036 | 28 | [Efros and Freeman, 2001] Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In SIGGRAPH, 2001.
[Efros and Leung, 1999] Alexei A Efros and Thomas K Le- ung. Texture synthesis by non-parametric sampling. In ICCV, 1999.
[Frigo et al., 2016] Oriel Frigo, Neus Sabater, Julie Delon, and Pierre Hellier. Split and match: Example-based adap- tive patch sampling for unsupervised style transfer. In CVPR, 2016.
[Gatys et al., 2016] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
[Gretton et al., 2012a] Arthur Gretton, Karsten M Borg- wardt, Malte J Rasch, Bernhard Sch¨olkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723â773, 2012. | 1701.01036#28 | Demystifying Neural Style Transfer | Neural Style Transfer has recently demonstrated very exciting results which
1701.01036 | 29 | [Gretton et al., 2012b] Arthur Gretton, Dino Sejdinovic, Heiko Strathmann, Sivaraman Balakrishnan, Massimil- iano Pontil, Kenji Fukumizu, and Bharath K Sriperum- budur. Optimal kernel choice for large-scale two-sample tests. In NIPS, 2012.
[Hertzmann et al., 2001] Aaron Hertzmann, Charles E Ja- cobs, Nuria Oliver, Brian Curless, and David H Salesin. Image analogies. In SIGGRAPH, 2001.
[Johnson et al., 2016] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[Kwatra et al., 2005] Vivek Kwatra, Irfan Essa, Aaron Bo- bick, and Nipun Kwatra. Texture optimization for example-based synthesis. ACM Transactions on Graph- ics, 24(3):795â802, 2005. | 1701.01036#29 | Demystifying Neural Style Transfer | Neural Style Transfer has recently demonstrated very exciting results which
1701.01036 | 30 | [Ledig et al., 2016] Christian Ledig, Lucas Theis, Ferenc Husz´ar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single im- age super-resolution using a generative adversarial net- work. arXiv preprint arXiv:1609.04802, 2016.
[Li and Wand, 2016] Chuan Li and Michael Wand. Combin- ing Markov random ï¬elds and convolutional neural net- works for image synthesis. In CVPR, 2016.
[Li et al., 2017] Yanghao Li, Naiyan Wang, Jianping Shi, Ji- aying Liu, and Xiaodi Hou. Revisiting batch normalization for practical domain adaptation. ICLRW, 2017.
[Liang et al., 2001] Lin Liang, Ce Liu, Ying-Qing Xu, Bain- ing Guo, and Heung-Yeung Shum. Real-time texture syn- thesis by patch-based sampling. ACM Transactions on Graphics, 20(3):127â150, 2001. | 1701.01036#30 | Demystifying Neural Style Transfer | Neural Style Transfer has recently demonstrated very exciting results which
1701.01036 | 31 | Jianmin Wang, and Michael I Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015. [Long et al., 2016] Mingsheng Long, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. In NIPS, 2016.
[Pan and Yang, 2010] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowl- edge and Data Engineering, 22(10):1345â1359, 2010. [Patel et al., 2015] Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adapta- tion: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53â69, 2015.
[Ruder et al., 2016] Manuel Ruder, Alexey Dosovitskiy, and Thomas Brox. Artistic style transfer for videos. In GCPR, 2016.
[Selim et al., 2016] Ahmed Selim, Mohamed Elgharib, and Linda Doyle. Painting style transfer for head portraits us- ing convolutional neural networks. ACM Transactions on Graphics, 35(4):129, 2016. | 1701.01036#31 | Demystifying Neural Style Transfer | Neural Style Transfer has recently demonstrated very exciting results which
1701.01036 | 32 | [Shih et al., 2014] YiChang Shih, Sylvain Paris, Connelly Barnes, William T Freeman, and Fr´edo Durand. Style transfer for headshot portraits. ACM Transactions on Graphics, 33(4):148, 2014.
[Simonyan and Zisserman, 2015] Karen Simonyan and An- drew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. AAAI, 2016.
[Tzeng et al., 2014] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confu- sion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
[Ulyanov et al., 2016] Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016. | 1701.01036#32 | Demystifying Neural Style Transfer | Neural Style Transfer has recently demonstrated very exciting results which
1701.00299 | 1 | [email protected]
# University of Michigan 2260 Hayward St, Ann Arbor, MI, 48105, USA
# Abstract
We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward deep neural network that allows selective execution. Given an input, only a subset of D2NN neurons are executed, and the particular subset is deter- mined by the D2NN itself. By pruning unnecessary com- putation depending on input, D2NNs provide a way to im- prove computational efï¬ciency. To achieve dynamic selec- tive execution, a D2NN augments a feed-forward deep neu- ral network (directed acyclic graph of differentiable mod- ules) with controller modules. Each controller module is a sub-network whose output is a decision that controls whether other modules can execute. A D2NN is trained end to end. Both regular and controller modules in a D2NN are learnable and are jointly trained to optimize both ac- curacy and efï¬ciency. Such training is achieved by inte- grating backpropagation with reinforcement learning. With extensive experiments of various D2NN architectures on im- age classiï¬cation tasks, we demonstrate that D2NNs are general and ï¬exible, and can effectively optimize accuracy- efï¬ciency trade-offs. | 1701.00299#1 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
deep neural network that allows selective execution. Given an input, only a
subset of D2NN neurons are executed, and the particular subset is determined by
the D2NN itself. By pruning unnecessary computation depending on input, D2NNs
provide a way to improve computational efficiency. To achieve dynamic selective
execution, a D2NN augments a feed-forward deep neural network (directed acyclic
graph of differentiable modules) with controller modules. Each controller
module is a sub-network whose output is a decision that controls whether other
modules can execute. A D2NN is trained end to end. Both regular and controller
modules in a D2NN are learnable and are jointly trained to optimize both
accuracy and efficiency. Such training is achieved by integrating
backpropagation with reinforcement learning. With extensive experiments of
various D2NN architectures on image classification tasks, we demonstrate that
D2NNs are general and flexible, and can effectively optimize
accuracy-efficiency trade-offs. | http://arxiv.org/pdf/1701.00299 | Lanlan Liu, Jia Deng | cs.LG, stat.ML | fixed typos; updated CIFAR-10 results and added more details;
corrected the cascade D2NN configuration details | null | cs.LG | 20170102 | 20180305 | [
{
"id": "1511.06297"
},
{
"id": "1701.06538"
}
] |
1701.00299 | 2 | network whose output is a decision that controls whether other modules can execute. Fig. 1 (left) illustrates a simple D2NN with one control module (Q) and two regular modules (N1, N2), where the controller Q outputs a binary decision on whether module N2 executes. For certain inputs, the controller may decide that N2 is unnecessary and instead execute a dummy node D to save on computation. As an example application, this D2NN can be used for binary classification of images, where some images can be rapidly classified as negative after only a small amount of computation.
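A toy PyTorch rendering of the Fig. 1 (left) control flow may help fix ideas; the module sizes, names, and the single-example forward pass are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class TinyD2NN(nn.Module):
    """Fig. 1 (left) as code: controller Q decides whether N2 runs or dummy D is used."""
    def __init__(self, d_in=64, d_hid=32):
        super().__init__()
        self.n1 = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())    # cheap module N1
        self.q = nn.Linear(d_hid, 2)                                  # controller Q: one score per control edge
        self.n2 = nn.Sequential(nn.Linear(d_hid, d_hid), nn.ReLU(),
                                nn.Linear(d_hid, 2))                  # expensive module N2
        # Dummy node D: a constant output (here, confident "negative" logits).
        self.register_buffer("dummy_out", torch.tensor([[4.0, -4.0]]))

    def forward(self, x):                     # one example at a time, for clarity
        h = self.n1(x)
        scores = self.q(h)                    # control scores for the two outgoing control edges
        if scores.argmax(dim=-1).item() == 1:
            return self.n2(h)                 # edge to N2 activated: run the expensive module
        return self.dummy_out                 # N2 skipped: dummy node D supplies its constant value
```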
D2NNs are motivated by the need for computational ef- ï¬ciency, in particular, by the need to deploy deep networks on mobile devices and data centers. Mobile devices are con- strained by energy and power, limiting the amount of com- putation that can be executed. Data centers need energy efï¬ciency to scale to higher throughput and to save operat- ing cost. D2NNs provide a way to improve computational efï¬ciency by selective execution, pruning unnecessary com- putation depending on input. D2NNs also make it possible to use a bigger network under a computation budget by ex- ecuting only a subset of the neurons each time.
# 1. Introduction | 1701.00299#2 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
1701.00299 | 3 | # 1. Introduction
This paper introduces Dynamic Deep Neural Networks (D2NN), a new type of feed-forward deep neural network (DNN) that allows selective execution. That is, given an input, only a subset of neurons are executed, and the partic- ular subset is determined by the network itself based on the particular input. In other words, the amount of computa- tion and computation sequence are dynamic based on input. This is different from standard feed-forward networks that always execute the same computation sequence regardless of input.
A D2NN is a feed-forward deep neural network (directed acyclic graph of differentiable modules) augmented with one or more control modules. A control module is a sub-network whose output controls whether other modules can execute. A D2NN is trained end to end. That is, regular modules and control modules are jointly trained to optimize both accuracy and efficiency. We achieve such training by integrating backpropagation with reinforcement learning, necessitated by the non-differentiability of control modules. | 1701.00299#3 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution |
1701.00299 | 4 | Compared to prior work that optimizes computational ef- ï¬ciency in computer vision and machine learning, our work is distinctive in four aspects: (1) the decisions on selective execution are part of the network inference and are learned end to end together with the rest of the network, as op- posed to hand-designed or separately learned [23, 29, 2]; (2) D2NNs allow more ï¬exible network architectures and execution sequences including parallel paths, as opposed to architectures with less variance [12, 27]; (3) our D2NNs di- rectly optimize arbitrary efï¬ciency metric that is deï¬ned by the user, while previous work has no such ï¬exibility be- cause they improve efï¬ciency indirectly through sparsity
Figure 1. Two D2NN examples. Input and output nodes are drawn as circles, with the output nodes shaded. Function nodes are drawn as rectangles (regular nodes) or diamonds (control nodes). Dummy nodes are shaded. Data edges are drawn as solid arrows and control edges as dashed arrows. A data edge with a user-defined default value is decorated with a circle. | 1701.00299#4 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution |
1701.00299 | 5 | constraints[5, 7, 27]. (4) our method optimizes metrics such as the F-score that does not decompose over individual ex- amples. This is an issue not addressed in prior work. We will elaborate on these differences in the Related Work sec- tion of this paper.
We perform extensive experiments to validate our D2NNs algorithms. We evaluate various D2NN architec- tures on several tasks. They demonstrate that D2NNs are general, ï¬exible, and can effectively improve computational efï¬ciency.
Our main contribution is the D2NN framework that al- lows a user to augment a static feed-forward network with control modules to achieve dynamic selective execution. We show that D2NNs allow a wide variety of topologies while sharing a uniï¬ed training algorithm. To our knowl- edge, D2NN is the ï¬rst single framework that can support various qualitatively different efï¬cient network designs, in- cluding cascade designs and coarse-to-ï¬ne designs. Our D2NN framework thus provides a new tool for designing and training computationally efï¬cient neural network mod- els.
# 2. Related work | 1701.00299#5 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
1701.00299 | 6 | # 2. Related work
Input-dependent execution has been widely used in com- puter vision, from cascaded detectors [31, 15] to hierarchi- cal classiï¬cation [10, 6]. The key difference of our work from prior work is that we jointly learn both visual features and control decisions end to end, whereas prior work either hand-designs features and control decisions (e.g. threshold- ing), or learns them separately.
In the context of deep networks, two lines of prior work have attempted to improve computational efï¬ciency. One line of work tries to eliminate redundancy in data or com- putation in a way that is input-independent. The methods include pruning networks [18, 32, 3], approximating layers with simpler functions [13, 33], and using number represen- tations of limited precision [8, 17]. The other line of work exploits the fact that not all inputs require the same amount of computation, and explores input-dependent execution of DNNs. Our work belongs to the second line, and we will In fact, our input- contrast our work mainly with them. dependent D2NN can be combined with input-independent methods to achieve even better efï¬ciency. | 1701.00299#6 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
1701.00299 | 7 | Among methods leveraging input-dependent execution, some use pre-deï¬ned execution-control policies. For ex- ample, cascade methods [23, 29] rely on manually-selected thresholds to control execution; Dynamic Capacity Net- work [2] designs a way to directly calculate a saliency map for execution control. Our D2NNs, instead, are fully learn- able; the execution-control policies of D2NNs do not re- quire manual design and are learned together with the rest of the network. | 1701.00299#7 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
1701.00299 | 8 | Our work is closely related to conditional computation methods [5, 7, 27], which activate part of a network de- pending on input. They learn policies to encourage sparse neural activations[5] or sparse expert networks[27]. Our work differs from these methods in several ways. First, our control policies are learned to directly optimize arbitrary user-deï¬ned global performance metrics, whereas condi- tional computation methods have only learned policies that encourage sparsity. In addition, D2NNs allow more ï¬exi- ble control topologies. For example, in [5], a neuron (or block of neurons) is the unit controllee of their control poli- cies; in [27], an expert is the unit controllee. Compared to their ï¬xed types of controllees, our control modules can be added in any point of the network and control arbitrary sub- networks. Also, various policy parametrization can be used in the same D2NN framework. We show a variety of param- eterizations (as different controller networks) in our D2NN examples, whereas previous conditional computation works have used some ï¬xed formats: For example, control poli- cies are parametrized as the sigmoid or softmax of an afï¬ne transformation of neurons or inputs [5, 27]. | 1701.00299#8 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
1701.00299 | 9 | Our work is also related to attention models [11, 25, 16]. Note that attention models can be categorized as hard at- tention [25, 4, 2] versus soft [16, 28]. Hard attention mod- els only process the salient parts and discard others (e.g. processing only a subset of image subwindows); in con- trast, soft attention models process all parts but up-weight the salient parts. Thus only hard attention models perform input-dependent execution as D2NNs do. However, hard attention models differ from D2NNs because hard atten- tion models have typically involved only one attention mod- ule whereas D2NNs can have multiple attention (controller) modules â conventional hard attention models are âsingle- threadedâ whereas D2NN can be âmulti-threadedâ. In addi- tion, prior work in hard attention models have not directly
optimized for accuracy-efï¬ciency trade-offs. It is also worth noting that many mixture-of-experts methods [20, 21, 14] also involve soft attention by soft gating experts: they pro- cess all experts but only up-weight useful experts, thus sav- ing no computation. | 1701.00299#9 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
1701.00299 | 10 | D2NNs also bear some similarity to Deep Sequential Neural Networks (DSNN) [12] in terms of input-dependent execution. However, it is important to note that although DSNNsâ structures can in principle be used to optimize accuracy-efï¬ciency trade-offs, DSNNs are not for the task of improving efï¬ciency and have no learning method pro- posed to optimize efï¬ciency. And the method to effectively optimize for efï¬ciency-accuracy trade-off is non-trivial as is shown in the following sections. Also, DSNNs are single- threaded: it always activates exactly one path in the com- putation graph, whereas for D2NNs it is possible to have multiple paths or even the entire graph activated.
# 3. Deï¬nition and Semantics of D2NNs | 1701.00299#10 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
1701.00299 | 11 | # 3. Deï¬nition and Semantics of D2NNs
Here we precisely define a D2NN and describe its semantics, i.e. how a D2NN performs inference. D2NN definition: A D2NN is defined as a directed acyclic graph (DAG) without duplicated edges. Each node can be one of three types: input nodes, output nodes, and function nodes. An input or output node represents an input or output of the network (e.g. a vector). A function node represents a (differentiable) function that maps a vector to another vector. Each edge can be one of two types: data edges and control edges. A data edge represents a vector sent from one node to another, the same as in a conventional DNN. A control edge represents a control signal, a scalar, sent from one node to another. A data edge can optionally have a user-defined "default value", representing the output that will still be sent even if the function node does not execute. | 1701.00299#11 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution |
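This definition maps naturally onto a small data structure. The following sketch uses our own naming (it is not taken from the paper's code) and is reused by the later snippets in this section:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional

@dataclass
class Node:
    name: str
    kind: str                       # "input", "output", or "function"
    fn: Optional[Callable] = None   # the differentiable function, for function nodes

@dataclass
class Edge:
    src: str
    dst: str
    kind: str                       # "data" or "control"
    default: Any = None             # optional default value (data edges only)

@dataclass
class D2NNGraph:                    # a DAG without duplicated edges
    nodes: List[Node] = field(default_factory=list)
    edges: List[Edge] = field(default_factory=list)
```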
1701.00299 | 12 | For simplicity, we have a few restrictions on valid D2NNs: (1) the outgoing edges from a node are either all data edges or all control edges (i.e. cannot be a mix of data edges and control edges); (2) if a node has an incoming con- trol edge, it cannot have an outgoing control edge. Note that these two simplicity constraints do not in any way restrict the expressiveness of a D2NN. For example, to achieve the effect of a node with a mix of outgoing data edges and con- trol edges, we can just feed its data output to a new node with outgoing control edges and let the new node be an identity function.
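Continuing the graph sketch above, the two restrictions can be checked mechanically; this is a rough validator under our own representation, not part of the paper:

```python
def check_restrictions(graph):
    """Verify restrictions (1) and (2) on a D2NNGraph from the earlier sketch."""
    for node in graph.nodes:
        out_kinds = {e.kind for e in graph.edges if e.src == node.name}
        in_kinds = {e.kind for e in graph.edges if e.dst == node.name}
        # (1) outgoing edges are all data edges or all control edges, never a mix
        if len(out_kinds) > 1:
            raise ValueError(f"{node.name}: mixed outgoing edge types")
        # (2) a node with an incoming control edge has no outgoing control edge
        if "control" in in_kinds and "control" in out_kinds:
            raise ValueError(f"{node.name}: both receives and emits control edges")
```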
We call a function node a control node if its outgoing edges are control edges. We call a function node a regular node if its outgoing edges are data edges. Note that it is possible for a function node to take no data input and output a constant value; we call such nodes "dummy" nodes. We will see that the "default values" and "dummy" nodes can significantly extend the flexibility of D2NNs. Hereafter we | 1701.00299#12 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution |
1701.00299 | 13 | may also call function nodes âsubnetworkâ, or âmodulesâ and will use these terms interchangeably. Fig. 1 illustrates simple D2NNs with all kinds of nodes and edges. D2NN Semantics Given a D2NN, we perform inference by traversing the graph starting from the input nodes. Because a D2NN is a DAG, we can execute each node in a topolog- ical order (the parents of a node are ordered before it; we take both data edges and control edges in consideration), same as conventional DNNs except that the control nodes can cause the computation of some nodes to be skipped.
After we execute a control node, it outputs a set of control scores, one for each of its outgoing control edges. The control edge with the highest score is "activated", meaning that the node being controlled is allowed to execute. The rest of the control edges are not activated, and their controllees are not allowed to execute. For example, in Fig. 1 (right), the node Q controls N2 and N3. Either N2 or N3 will execute depending on which has the higher control score. | 1701.00299#13 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution |
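In code, activating the highest-scoring control edge is a one-line argmax; the helper below continues the toy graph representation introduced earlier (the names are ours):

```python
def activate(control_name, graph, scores):
    """Return the (controller, controllee) pair for the single activated control edge."""
    out_edges = [e for e in graph.edges
                 if e.src == control_name and e.kind == "control"]
    best = max(range(len(out_edges)), key=lambda i: scores[i])  # highest control score wins
    return (control_name, out_edges[best].dst)
```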
1701.00299 | 14 | Although the main idea of the inference (skipping nodes) seems simple, due to D2NNsâ ï¬exibility, the inference topology can be far more complicated. For example, in the case of a node with multiple incoming control edges (i.e. controlled by multiple controllers), it should execute if any of the control edges are activated. Also, when the execution of a node is skipped, its output will be either the default value or null. If the output is the default value, subsequent execution will continue as usual. If the output is null, any downstream nodes that depend on this output will in turn skip execution and have a null output unless a default value has been set. This ânullâ effect will propagate to the rest of the graph. Fig. 1 (right) shows a slightly more complicated example with default values: if N2 skips execution and out- puts null, so will N4 and N6. But N8 will execute regardless because its input data edge has a default value. In our Ex- periments Section, we will demonstrate more sophisticated D2NNs. | 1701.00299#14 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
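One way to encode these rules (any active incoming control edge permits execution; a null input with no default causes a skip; a skipped node outputs null) is sketched below, again on the toy representation and with our own function names:

```python
def node_output(node, graph, values, active, run_fn):
    """Return the node's output, or None (null) if it is skipped.

    values: dict of earlier outputs (None means null); active: set of activated
    (controller, controllee) pairs; run_fn(node, inputs) runs the module itself.
    """
    ctrl_in = [e for e in graph.edges if e.dst == node.name and e.kind == "control"]
    allowed = (not ctrl_in) or any((e.src, e.dst) in active for e in ctrl_in)
    data_in = [e for e in graph.edges if e.dst == node.name and e.kind == "data"]
    # A null upstream output is replaced by the edge's default value, if one is set.
    inputs = [values[e.src] if values[e.src] is not None else e.default for e in data_in]
    if not allowed or any(v is None for v in inputs):
        return None                     # skipped: output null, which propagates downstream
    return run_fn(node, inputs)
```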
1701.00299 | 15 | We can summarize the semantics of D2NNs as follows: a D2NN executes the same way as a conventional DNN except that there are control edges that can cause some nodes to be skipped. A control edge is active if and only if it has the highest score among all outgoing control edges from a node. A node is skipped if it has incoming control edges and none of them is active, or if one of its inputs is null. If a node is skipped, its output will be either null or a user-defined default value. A null will cause downstream nodes to be skipped whereas a default value will not.
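Putting the pieces together, the summarized semantics amount to the short traversal below (a sketch built on the earlier helpers; control_fn is assumed to return one score per outgoing control edge):

```python
def run_d2nn(graph, topo_order, inputs, run_fn, control_fn):
    """Execute a D2NN once: topological traversal with control-driven skipping."""
    values = dict(inputs)      # node name -> output; None represents null
    active = set()             # activated control edges as (controller, controllee) pairs
    for node in topo_order:    # parents (data and control) come before children
        if node.kind == "output":       # an output node just reads its incoming data edge
            src = next(e for e in graph.edges if e.dst == node.name and e.kind == "data")
            values[node.name] = values.get(src.src)
            continue
        if node.kind != "function":
            continue
        out = node_output(node, graph, values, active, run_fn)
        values[node.name] = out
        ctrl_out = [e for e in graph.edges if e.src == node.name and e.kind == "control"]
        if ctrl_out and out is not None:               # a control node that actually ran
            scores = control_fn(node.name, out)
            active.add(activate(node.name, graph, scores))
    return values
```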
A D2NN can also be thought of as a program with condi- tional statements. Each data edge is equivalent to a variable that is initialized to either a default value or null. Execut- ing a function node is equivalent to executing a command assigning the output of the function to the variable. A con- trol edge is equivalent to a boolean variable initialized to | 1701.00299#15 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
1701.00299 | 16 | False. A control node is equivalent to a "switch-case" statement that computes a score for each of the boolean variables and sets the one with the largest score to True. Checking the conditions to determine whether to execute a function is equivalent to enclosing the function with an "if-then" statement. A conventional DNN is a program with only function calls and variable assignments, without any conditional statements, whereas a D2NN introduces conditional statements with the conditions themselves generated by learnable functions.
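Written out as such a program, the Fig. 1 (left) example looks roughly like this (the module objects N1, N2, D, Q are hypothetical callables; this is only a paraphrase of the analogy, not the authors' code):

```python
def d2nn_as_program(x, N1, N2, D, Q):
    y_n2, y_d = None, None            # data variables start as null
    h = N1(x)                         # plain function call + assignment
    scores = Q(h)                     # "switch-case": the largest score wins
    take_n2 = scores[1] >= scores[0]
    if take_n2:                       # "if-then" guard generated by a learnable function
        y_n2 = N2(h)
    else:
        y_d = D()                     # dummy node: takes no data input
    return y_n2 if y_n2 is not None else y_d
```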
# 4. D2NN Learning | 1701.00299#16 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
1701.00299 | 17 | Due to the control nodes, a D2NN cannot be trained the same way as a conventional DNN. The output of the net- work cannot be expressed as a differentiable function of all trainable parameters, especially those in the control nodes. As a result, backpropagation cannot be directly applied. The main difï¬culty lies in the control nodes, whose out- puts are discretized into control decisions. This is similar to the situation with hard attention models [25, 4], which use reinforcement learning. Here we adopt the same general strategy. Learning a Single Control Node For simplicity of expo- sition we start with a special case where there is only one control node. We further assume that all parameters except those of this control node have been learned and ï¬xed. That is, the goal is to learn the parameters of the control node to maximize a user-deï¬ned reward, which in our case is a combination of accuracy and efï¬ciency. This results in a classical reinforcement learning setting: learning a control policy to take actions so as to maximize reward. We base our learning method on Q-learning [26, 30]. We let each outgoing control edge represent an action, and let the con- trol node approximate the action-value (Q) function, which is the expected return of an action given the current state (the input to the control node). | 1701.00299#17 | Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution | We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
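Concretely, the control node can be any small network with one output per outgoing control edge, interpreted as Q-values; the two-edge example below is an illustrative assumption about sizes, not the architecture used in the paper:

```python
import torch
import torch.nn as nn

# Controller approximating Q(s, a): input s (here 128-d features), one output
# per outgoing control edge (here 2 actions).
q_controller = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

s = torch.randn(16, 128)                  # a batch of controller inputs
q_values = q_controller(s)                # shape (16, 2): Q(s, a) for each action
greedy_actions = q_values.argmax(dim=1)   # at inference, the chosen control edge
```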
It is worth noting that unlike many prior works that use deep reinforcement learning, a D2NN is not recurrent. For each input to the network (e.g. an image), each control node only executes once, and the decisions of a control node depend entirely on the current input. As a result, an action taken on one input has no effect on another input. That is, our reinforcement learning task consists of only one time step. Our one time-step reinforcement learning task can also be seen as a contextual bandit problem, where the context vector is the input to the control module, and the arms are the possible action outputs of the module. The one time-step setting simplifies our Q-learning objective to that of the following regression task:
L = (Q(s, a) − r)²,   (1)
where r is a user-defined reward, a is an action, s is the input to the control node, and Q is computed by the control node.
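As a sketch, the update for a single control node is a plain regression step; `rest_of_network` and `reward_fn` below are hypothetical stand-ins for executing the selected branch and computing the user-defined reward.

```python
import torch

def control_node_step(control_node, rest_of_network, reward_fn, x, optimizer):
    """One training example: pick the argmax action, run the chosen branch,
    observe a reward, and backpropagate the L2 loss of Eqn. 1."""
    q = control_node(x)              # one Q value per outgoing control edge
    a = int(q.argmax())              # greedy action
    output = rest_of_network(x, a)   # execute only the selected branch
    r = reward_fn(output)            # user-defined reward (accuracy/efficiency mix)
    loss = (q[a] - r) ** 2           # L = (Q(s, a) - r)^2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```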
As we can see, training a control node here is the same as training a network to predict the reward for each action under an L2 loss. We use mini-batch gradient descent; for each training example in a mini-batch, we pick the action with the largest Q, execute the rest of the network, observe a reward, and perform backpropagation using the L2 loss in Eqn. 1.
During training we also perform ε-greedy exploration: instead of always choosing the action with the best Q value, we choose a random action with probability ε. The hyperparameter ε is initialized to 1 and decreases over time. The reward r is user-defined. Since our goal is to optimize the trade-off between accuracy and efficiency, in our experiments we define the reward as a combination of an accuracy metric A (for example, F-score) and an efficiency metric E (for example, the inverse of the number of multiplications), that is, λA + (1 − λ)E, where λ balances the trade-off.
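A minimal sketch of the exploration and reward computation; the decay schedule for ε below is an assumption (the paper only states that ε starts at 1 and decreases over time).

```python
import random

def select_action(q_values, epsilon):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(q_values.argmax())

def reward(accuracy, efficiency, lam):
    """r = lambda * A + (1 - lambda) * E, with lambda in [0, 1]."""
    return lam * accuracy + (1 - lam) * efficiency

epsilon = 1.0
for epoch in range(50):
    # ... run one training epoch, selecting actions with select_action(q, epsilon) ...
    epsilon = max(0.05, epsilon * 0.95)   # assumed multiplicative decay
```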
Consider precision in the context of binary classification. Given predictions on a set of examples and the ground truth, precision is defined as the proportion of true positives among the predicted positives. Although precision can be defined on a single example, precision on a set of examples does not generally equal the average of the precisions of individual examples. In other words, precision as a metric does not decompose over individual examples and can only be computed using a set of examples jointly. This is different from decomposable metrics such as error rate, which can be computed as the average of the error rates of individual examples. If we use precision as our accuracy metric, it is not clear how to define a reward independently for each example such that maximizing this reward independently for each example would optimize the overall precision. In general, for many metrics, including precision and F-score, we cannot compute them on individual examples and average the results. Instead, we must compute them using a set of examples as a whole. We call such metrics "set-based metrics". Our learning setup so far is ill-equipped for such metrics because a reward is defined on each example independently.
To address this issue we generalize the definition of a state from a single input to a set of inputs. We define such a set of inputs as a mini-bag. With a mini-bag of images, any set-based metric can be computed and can be used to directly define a reward. Note that a mini-bag is different from a mini-batch, which is commonly used for batch updates in gradient descent methods; in our training we in fact calculate gradients using a mini-batch of mini-bags. An action on a mini-bag s = (s_1, ..., s_m) is now a joint action a = (a_1, ..., a_m) consisting of individual actions a_i on examples s_i. Let Q(s, a) be the joint action-value function on the mini-bag s and the joint action a. We constrain the parametric form of Q to decompose over individual examples:
Q(s, a) = Σ_{i=1}^{m} Q(s_i, a_i),   (2)
where Q(s_i, a_i) is a score given by the control node when choosing the action a_i for example s_i. We then define our new learning objective on a mini-bag of size m as
L = (r − Q(s, a))² = (r − Σ_{i=1}^{m} Q(s_i, a_i))²,   (3)
where r is the reward observed by choosing the joint action a on mini-bag s. That is, the control node predicts an action-value for each example such that their sum approximates the reward defined on the whole mini-bag.
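A sketch of the mini-bag objective under the sum decomposition of Eqns. 2-3; `control_node` is assumed to return one Q value per action, and `joint_reward` is any set-based metric computed on the whole mini-bag.

```python
import torch

def minibag_loss(control_node, minibag, joint_reward):
    """L = (r - sum_i Q(s_i, a_i))^2 over a mini-bag s = (s_1, ..., s_m)."""
    chosen_q = []
    for s_i in minibag:
        q_i = control_node(s_i)      # Q values for all actions on example i
        a_i = int(q_i.argmax())      # action taken for this example
        chosen_q.append(q_i[a_i])
    joint_q = torch.stack(chosen_q).sum()   # Q(s, a) = sum_i Q(s_i, a_i)
    return (joint_reward - joint_q) ** 2
```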
It is worth noting that the decomposition of Q into sums (Eqn. 2) enjoys a nice property: the best joint action a* under the joint action-value Q(s, a) is simply the concatenation of the best actions for individual examples, because maximizing
a* = argmax_a Q(s, a) = argmax_a Σ_{i=1}^{m} Q(s_i, a_i)   (4)
is equivalent to maximizing the individual summands:
a_i* = argmax_{a_i} Q(s_i, a_i),   i = 1, 2, ..., m.   (5)
That is, during test time we still perform inference on each example independently.
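In code, test-time routing therefore reduces to an independent argmax per example, for instance:

```python
def infer_actions(control_node, examples):
    # a* = argmax_a Q(s, a) decomposes into an argmax per example (Eqns. 4-5)
    return [int(control_node(s_i).argmax()) for s_i in examples]
```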
Another implication of the mini-bag formulation is:
∂L/∂x_i = −2 (r − Σ_{j=1}^{m} Q(s_j, a_j)) · ∂Q(s_i, a_i)/∂x_i,   (6)
where x_i is the output of any internal neuron for example i in the mini-bag. This shows that there is no change to the implementation of backpropagation except that we scale the gradient using the difference between the mini-bag Q-value Q and the reward r. **Joint Training of All Nodes** We have described how to train a single control node. We now describe how to extend this strategy to all nodes, including additional control nodes as well as regular nodes. If a D2NN has multiple control nodes, we simply train them together. For each mini-bag, we perform backpropagation for multiple losses together. Specifically, we perform inference using the current parameters, observe a reward for the whole network, and then use
the same reward (which is a result of the actions of all control nodes) to backpropagate for each control node.
For regular nodes, we can place losses on them the same as on conventional DNNs, and we perform backpropagation on these losses together with the control nodes. The implementation of backpropagation is the same as for conventional DNNs except that each training example has a different network topology (execution sequence), and if a node is skipped for a particular training example, then the node does not receive a gradient from that example.
It is worth noting that our D2NN framework allows arbitrary losses to be used for regular nodes. For example, for classification we can use the cross-entropy loss on a regular node. One important detail is that the losses on regular nodes need to be properly weighted against the losses on the control nodes; otherwise the regular losses may dominate, rendering the control nodes ineffective. One way to eliminate this issue is to use Q-learning losses on regular nodes as well, i.e. treating the outputs of a regular node as action-values. For example, instead of using the cross-entropy loss on the classification scores, we treat the classification scores as action-values, i.e. an estimated reward of each classification decision. This way Q-learning is applied to all nodes in a unified way and no additional hyperparameters are needed to balance different kinds of losses. In our experiments, unless otherwise noted, we adopt this unified approach.
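A sketch of this unified treatment for a regular classification node; the 0/1 reward for a classification decision is an assumption used here only for illustration.

```python
def regular_node_q_loss(class_scores, label):
    """Treat class scores as action-values: regress the score of the predicted
    class onto the observed reward of that classification decision."""
    predicted = int(class_scores.argmax())
    r = 1.0 if predicted == label else 0.0      # assumed reward definition
    return (class_scores[predicted] - r) ** 2
```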
# 5. Experiments
We here demonstrate four D2NN structures motivated by different demands of efficient network design, to show the framework's flexibility and effectiveness, and compare D2NNs' ability to optimize efficiency-accuracy trade-offs with prior work. We implement the D2NN framework in Torch. Torch provides functions to specify the subnetwork architecture inside a function node; our framework handles the high-level communication and loss propagation. **High-Low Capacity D2NN** Our first experiment is with a simple D2NN architecture that we call a "high-low capacity D2NN". It is motivated by the idea that we can save computation by choosing a low-capacity subnetwork for easy examples. It consists of a single control node (Q) and three regular nodes (N1-N3), as in Fig. 3a). The control node Q chooses between a high-capacity N2 and a low-capacity N3; N3 has fewer neurons and uses less computation. The control node itself requires orders of magnitude less computation than the regular nodes (this is true for all D2NNs demonstrated).
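A sketch of the high-low topology; the layer shapes below are illustrative assumptions, and the control node Q is deliberately tiny relative to N2/N3.

```python
import torch
import torch.nn as nn

N1 = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(), nn.Flatten())
feat = 8 * 56 * 56                    # for 112x112 inputs after one stride-2 conv
N2 = nn.Sequential(nn.Linear(feat, 512), nn.ReLU(), nn.Linear(512, 2))   # high capacity
N3 = nn.Sequential(nn.Linear(feat, 32), nn.ReLU(), nn.Linear(32, 2))     # low capacity
Q  = nn.Linear(feat, 2)               # control node: negligible cost

def high_low_forward(x):
    h = N1(x)
    use_high = int(Q(h).argmax(dim=1)[0]) == 0
    return N2(h) if use_high else N3(h)

logits = high_low_forward(torch.randn(1, 3, 112, 112))
```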
We test this hypothesis using a binary classification task in which the network classifies an input image as face or non-face. We use the Labeled Faces in the Wild [19, 22] dataset. Specifically, we use the 13k ground-truth face crops (112×112 pixels) as positive examples and 130k randomly sampled background crops (with an intersection over union less than 0.3) as negative examples. We hold out 11k
[Figure 2 plot residue removed; panels: a) High-Low (LFW-B), b) Cascade (LFW-B), c) Chain (LFW-B), d) Hierarchy (ILSVRC-10); axes: accuracy/fscore vs. cost]
Figure 2. The accuracy-cost or fscore-cost curves of various D2NN architectures, as well as conventional DNN baselines consisting of only regular nodes.
[Figure 3 diagram residue removed; panels: a) High-Low, b) Cascade, c) Chain, d) Hierarchy]
Figure 3. Four different D2NN architectures.
images for validation and 22k for testing. We refer to this dataset as LFW-B and use it as a testbed to validate the ef- fectiveness of our new D2NN framework.
To evaluate performance we measure accuracy using the F1 score, a better metric than the percentage of correct predictions for an unbalanced dataset. We measure computational cost using the number of multiplications, following prior work [2, 27] and for reproducibility. Specifically, we use the number of multiplications (control nodes included), normalized by that of a conventional DNN consisting of N1 and N2, that is, the high-capacity execution path. Note that our D2NNs also allow other efficiency measurements such as run time or latency.
During training we define the Q-learning reward as a linear combination of accuracy A and efficiency E (negative cost): r = λA + (1 − λ)E, where λ ∈ [0, 1]. We train instances of high-low capacity D2NNs using different λ's. As λ increases, the learned D2NN trades off efficiency for accuracy. Fig. 2a) plots the accuracy-cost curve on the test set; it also plots the accuracy and efficiency achieved by a conventional DNN with only the high-capacity path N1+N2 (High NN) and a conventional DNN with only the low-capacity path N1+N3 (Low NN).
As we can see, the D2NN achieves a trade-off curve close to the upperbound: there are points on the curve that are as fast as the low-capacity node and as accurate as the high-capacity node. Fig. 4(left) plots the distribution of ex- amples going through different execution paths. It shows that as λ increases, accuracy becomes more important and more examples go through the high-capacity node. These
results suggest that our learning algorithm is effective for networks with a single control node.
With inference efficiency improved, we also observe that for training, a D2NN typically takes 2-4 times more iterations to converge than a DNN, depending on the particular model capacities, configurations and trade-offs. **Cascade D2NN** We next experiment with a more sophisticated design that we call a "cascade D2NN" (Fig. 3b). It is inspired by the standard cascade design commonly used in computer vision. The intuition is that many negative examples may be rejected early using simple features. The cascade D2NN consists of seven regular nodes (N1-N7) and three control nodes (Q1-Q3). N1-N7 form 4 cascade stages (i.e. 4 conventional DNNs, from small to large) of the cascade: N1+N2, N3+N4, N5+N6, N7. Each control node decides whether to execute the next cascade stage or not.
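The cascade behaviour can be sketched as an early-exit loop; the node groupings follow the description above, while the convention that action 0 means "stop" is an assumption.

```python
def cascade_forward(x, stages, controls):
    """stages: the 4 cascade stages (N1+N2, N3+N4, N5+N6, N7), small to large.
    controls: Q1-Q3; after each stage the matching control node decides
    whether the next, more expensive stage is executed."""
    score = stages[0](x)
    for control, next_stage in zip(controls, stages[1:]):
        if int(control(x).argmax()) == 0:    # assumed: action 0 = reject/stop early
            break
        score = next_stage(x)
    return score
```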
We evaluate the network on the same LFW-B face classification task using the same evaluation protocol as for the high-low capacity D2NN. Fig. 2b) plots the accuracy-cost trade-off curve for the D2NN. Also included is the accuracy-cost curve ("static NNs") achieved by the four conventional DNNs as baselines, each trained with a cross-entropy loss. We can see that the cascade D2NN can achieve a close to optimal trade-off, reducing computation significantly with negligible loss of accuracy. In addition, we can see that our D2NN curve outperforms the trade-off curve achieved by varying the design and capacity of static conventional networks. This result demonstrates that our algorithm is successful for jointly training multiple control nodes.
For a cascade, wall time of inference is often an important consideration, so we also measure the inference wall time (excluding data loading, averaged over 5 runs) for this cascade D2NN. We find that an 82% wall-time cost corresponds to a 53% number-of-multiplication cost, and a 95% wall-time cost corresponds to a 70% multiplication cost. Defining the reward directly using wall time can further reduce the gap. **Chain D2NN** Our third design is a "Chain D2NN" (Fig. 3c). The network is shaped as a chain, where each link consists of a control node selecting between two (or more) regular nodes. In other words, we perform a sequence of vector-to-vector transforms; for each transform we choose between several subnetworks. One scenario where this D2NN is useful is when the configuration of a conventional DNN (e.g. number of layers, filter sizes) cannot be fully decided in advance. It can also simulate shortcuts between any two layers by using an identity function as one of the transforms. This chain D2NN is qualitatively different from other D2NNs with a tree-shaped data graph because it allows two divergent data paths to merge again. That is, the number of possible execution paths can be exponential in the number of nodes.
In Fig. 3c), the first link is that Q1 chooses between a low-capacity N2 and a high-capacity N3. If one of them is chosen, the other will output a default value of zero. The node N4 adds the outputs of N2 and N3 together. Fig. 2c) plots the accuracy-cost curve on the LFW-B task. The two baselines are: a conventional DNN with the lowest-capacity path (N1-N2-N5-N8-N10), and a conventional DNN with the highest-capacity path (N1-N3-N6-N9-N10). The cost is measured as the number of multiplications, normalized by the cost of the high-capacity baseline.
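One chain link can be sketched as follows; the zero default for the unexecuted node and the additive merge mirror the description of N2/N3/N4 above.

```python
import torch

def chain_link(x, control, low_node, high_node):
    """Q picks one transform; the skipped node outputs a default zero and the
    merge node adds the two, so downstream nodes always see a single tensor."""
    out = low_node(x) if int(control(x).argmax()) == 0 else high_node(x)
    return out + torch.zeros_like(out)   # zeros stand in for the skipped branch
```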
Fig. 2c) shows that the chain D2NN achieves a trade-off curve close to optimal and can speed up computation significantly with little accuracy loss. This shows that our learning algorithm is effective for a D2NN whose data graph is a general DAG instead of a tree. **Hierarchical D2NN** In this experiment we design a D2NN for hierarchical multiclass classification. The idea is to first classify images into coarse categories and then into fine categories. This idea has been explored by numerous prior works [24, 6, 10], but here we show that the same idea can be implemented via a D2NN trained end to end.
1701.00299 | 36 | We use ILSVRC-10, a subset of the ILSVRC-65 [9]. In ILSVRC-10, 10 classes are organized into a 3-layer hierar- chy: 2 superclasses, 5 coarse classes and 10 leaf classes. Each class has 500 training images, 50 validation images, and 150 test images. As in Fig. 3d), the hierarchy in this D2NN mirrors the semantic hierarchy in ILSVRC-10. An image ï¬rst goes through the root N1. Then Q1 decides whether to descend the left branch (N2 and its children), and Q2 decides whether to descend the right branch (N3 and its children). The leaf nodes N4-N8 are each responsible for classifying two ï¬ne-grained leaf classes. It is important to
note that an input image can go down parallel paths in the hierarchy, e.g. descending both the left branch and the right branch, because Q1 and Q2 make separate decisions. This "multi-threading" allows the network to avoid committing to a single path prematurely if an input image is ambiguous.
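A sketch of this routing; because the two control decisions are independent, zero, one, or both subtrees may run for a given image.

```python
def hierarchy_forward(x, root, q_left, q_right, left_subtree, right_subtree):
    h = root(x)                               # N1
    outputs = []
    if int(q_left(h).argmax()) == 1:          # Q1: descend left branch (N2 and children)
        outputs.append(left_subtree(h))
    if int(q_right(h).argmax()) == 1:         # Q2: descend right branch (N3 and children)
        outputs.append(right_subtree(h))
    return outputs                            # may contain 0, 1, or 2 branch outputs
```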
Fig. 2d) plots the accuracy-cost curve of our hierarchical D2NN. The accuracy is measured as the proportion of correctly classified test examples. The cost is measured as the number of multiplications, normalized by the cost of a conventional DNN consisting only of the regular nodes (denoted as NN in the figure). We can see that the hierarchical D2NN can match the accuracy of the full network with about half of the computational cost.
Fig. 4(right) plots for the hierarchical D2NN the distribution of examples going through execution sequences with different numbers of nodes activated. Due to the parallelism of the D2NN, there can be many different execution sequences. We also see that as λ increases, accuracy is given more weight and more nodes are activated. **Comparison with Dynamic Capacity Networks** In this experiment we empirically compare our approach to closely related prior work. Here we compare D2NNs with Dynamic Capacity Networks (DCN) [2], for which the efficiency measurement is the absolute number of multiplications. Given an image, a DCN applies an additional high-capacity subnetwork to a set of image patches, selected using a hand-designed saliency-based policy. The idea is that more intensive processing is only necessary for certain image regions. To compare, we evaluate on the same multiclass classification task on Cluttered MNIST [25], which consists of MNIST digits randomly placed on a background clut-
tered with fragments of other digits. We train a chain D2NN of length 4, which implements the same idea of choosing a high-capacity alternative subnetwork for certain inputs. Fig. 6 plots the accuracy-cost curve of our D2NN as well as the accuracy-cost point achieved by the DCN in [2]: an accuracy of 0.9861 and a cost of 2.77×10^7. The closest point on our curve has a slightly lower accuracy of 0.9698 but slightly better efficiency (a cost of 2.66×10^7). Note that although our accuracy of 0.9698 is lower, it compares favorably to those of other state-of-the-art methods such as DRAW [16]: 0.9664 and RAM [25]: 0.9189. **Visualization of Examples in Different Paths** In Fig. 5 (left), we show face examples in the high-low D2NN for λ=0.4. Examples in the low-capacity path are generally easier (e.g. more frontal) than examples in the high-capacity
path. In Fig. 5 (right), we show car examples in the hierarchical D2NN with 1) a single path executed and 2) the full graph executed (for λ=1). They match our intuition that examples with a single path executed should be easier (e.g. less occlusion) to classify than examples with the full graph executed. **CIFAR-10 Results** We train a Cascade D2NN on CIFAR-

[Figure 4 plot residue removed: distribution of examples across execution paths for different λ]
1701.00299 | 42 | Figure 5. Examples with different paths in a high-low D2NN (left) and a hierarchical D2NN (right).
[Figure 6 plot residue removed: accuracy vs. #multiplications (×10^7), curves for D2NN and DCN]
# 7. Acknowledgments
This work is partially supported by the National Science Foundation under Grant No. 1539011 and gifts from Intel.
# Appendix
# A. Implementation Details
Figure 6. Accuracy-cost curve for a chain D2NN on the CMNIST task compared to DCN [2].
10, where the corresponding DNN baseline is ResNet-110. We initialize this D2NN with pre-trained ResNet-110 weights, apply cross-entropy losses on regular nodes, and tune the mixed-loss weight as explained in Sec. 4. We see a 30% reduction of cost with a 2% (relative) loss in accuracy, and a 62% reduction of cost with a 7% (relative) loss in accuracy. The D2NN's ability to improve efficiency relies on the assumption that not all inputs require the same amount of computation. In CIFAR-10, all images are low resolution (32×32), and it is likely that few images are significantly easier to classify than others. As a result, the efficiency improvement is modest compared to other datasets.
We implement the D2NN framework in Torch [1]. Torch already provides implementations of conventional neural network modules (nodes), so a user can specify the subnetwork architecture inside a control node or a regular node using existing Torch functionality. Our framework then handles the communication between the user-defined nodes in the forward and backward pass.