1701.06538#33
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Andrew Davis and Itamar Arel. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013.

Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian processes. In ICML, 2015.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization, 2010.

Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield.
1701.06538#32
1701.06538#34
1701.06538
[ "1502.03167" ]
1701.06538#34
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation, 2014.

David Eigen, Marc'Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013.

Ekaterina Garmash and Christof Monz. Ensemble learning for multi-source neural machine translation. In staff.science.uva.nl/c.monz, 2016.
1701.06538#33
1701.06538#35
1701.06538
[ "1502.03167" ]
1701.06538#35
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Felix A. Gers, Jürgen A. Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 2000.

Audrunas Gruslys, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. Memory-efficient backpropagation through time. CoRR, abs/1606.03401, 2016. URL http://arxiv.org/abs/1606.03401.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2015.

Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al.
1701.06538#34
1701.06538#36
1701.06538
[ "1502.03167" ]
1701.06538#36
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
1701.06538#35
1701.06538#37
1701.06538
[ "1502.03167" ]
1701.06538#37
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive mixtures of local experts. Neural Computing, 1991.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google'
1701.06538#36
1701.06538#38
1701.06538
[ "1502.03167" ]
1701.06538#38
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
s multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558, 2016. URL http://arxiv.org/abs/1611.04558. Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computing, 1994. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016. Diederik Kingma and Jimmy Ba. Adam:
1701.06538#37
1701.06538#39
1701.06538
[ "1502.03167" ]
1701.06538#39
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
A method for stochastic optimization. In ICLR, 2015.

Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling, 1995.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng.
1701.06538#38
1701.06538#40
1701.06538
[ "1502.03167" ]
1701.06538#40
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Building high-level features using large scale unsupervised learning. In ICML, 2012.

Ludovic Denoyer and Patrick Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. EMNLP, 2015a.

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. ACL, 2015b.

Carl Edward Rasmussen and Zoubin Ghahramani.
1701.06538#39
1701.06538#41
1701.06538
[ "1502.03167" ]
1701.06538#41
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Infinite mixtures of Gaussian process experts. NIPS, 2002.

Hasim Sak, Andrew W. Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338-342, 2014.

Mike Schuster and Kaisuke Nakajima. Japanese and Korean voice search. ICASSP, 2012.

Babak Shahbaba and Radford Neal.
1701.06538#40
1701.06538#42
1701.06538
[ "1502.03167" ]
1701.06538#42
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Nonlinear models using Dirichlet process mixtures. JMLR, 2009.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In NIPS, 2015.

Volker Tresp. Mixtures of Gaussian processes. In NIPS, 2001.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean.
1701.06538#41
1701.06538#43
1701.06538
[ "1502.03167" ]
1701.06538#43
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Bangpeng Yao, Dirk Walther, Diane Beck, and Li Fei-Fei. Hierarchical mixture of classification experts uncovers interactions between brain regions. In NIPS, 2009.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199, 2016.
1701.06538#42
1701.06538#44
1701.06538
[ "1502.03167" ]
1701.06538#44
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
# APPENDICES

A LOAD-BALANCING LOSS

As discussed in Section 4, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it cannot be used in back-propagation. Instead, we define a smooth estimator Load(X) of the number of examples assigned to each expert for a batch X of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function.
1701.06538#43
1701.06538#45
1701.06538
[ "1502.03167" ]
1701.06538#45
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
We define P(x, i) as the probability that G(x)_i is nonzero, given a new random choice of noise on element i, but keeping the already-sampled choices of noise on the other elements. To compute P(x, i), we note that G(x)_i is nonzero if and only if H(x)_i is greater than the kth-greatest element of H(x) excluding itself. The probability works out to be:

P(x, i) = Pr\big( (x \cdot W_g)_i + StandardNormal() \cdot Softplus((x \cdot W_{noise})_i) > kth\_excluding(H(x), k, i) \big)   (8)

where kth_excluding(v, k, i) means the kth highest component of v, excluding component i. Simplifying, we get:

P(x, i) = \Phi\left( \frac{(x \cdot W_g)_i - kth\_excluding(H(x), k, i)}{Softplus((x \cdot W_{noise})_i)} \right)   (9)

where \Phi is the CDF of the standard normal distribution.

Load(X)_i = \sum_{x \in X} P(x, i)   (10)

We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor w_load:

L_{load}(X) = w_{load} \cdot CV(Load(X))^2   (11)

Initial Load Imbalance: To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices W_g and W_{noise} to all zeros, which yields no signal and some noise.
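As a concrete illustration, the estimator and loss above can be written in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: the array shapes, the `noise` argument holding the already-sampled standard-normal draws, and the function names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def softplus(x):
    return np.log1p(np.exp(x))

def load_loss(X, W_g, W_noise, noise, k, w_load):
    """Smooth load estimator and CV^2 load loss (Eqs. 8-11).

    X: [batch, d] inputs; W_g, W_noise: [d, n] gating weights;
    noise: [batch, n] standard-normal draws already sampled for this batch.
    """
    clean = X @ W_g                                   # (x . W_g)
    noisy_std = softplus(X @ W_noise)                 # Softplus((x . W_noise))
    H = clean + noise * noisy_std                     # noisy gate logits H(x)

    batch, n = H.shape
    P = np.empty_like(H)
    for i in range(n):
        # kth-greatest element of H(x) excluding component i
        others = np.delete(H, i, axis=1)
        kth_excl = -np.sort(-others, axis=1)[:, k - 1]
        # Eq. 9: probability that expert i stays in the top k under fresh noise
        P[:, i] = norm.cdf((clean[:, i] - kth_excl) / noisy_std[:, i])

    load = P.sum(axis=0)                              # Eq. 10
    cv2 = load.var() / (load.mean() ** 2)             # squared coefficient of variation
    return w_load * cv2                               # Eq. 11
```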
1701.06538#44
1701.06538#46
1701.06538
[ "1502.03167" ]
1701.06538#46
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Experiments: We trained a set of models with identical architecture (the MoE-256 model described in Appendix C), using different values of w_importance and w_load. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in Importance and Load, as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load-balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.

Table 6: Experiments with different combinations of losses. The reported metrics are w_importance, w_load, Test Perplexity, CV(Importance(X)), CV(Load(X)), and max(Load(X)) / mean(Load(X)); the max-to-mean load ratios for the six loss settings are 17.80, 1.47, 1.15, 1.14, 1.37, and 1.07.
1701.06538#45
1701.06538#47
1701.06538
[ "1502.03167" ]
1701.06538#47
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Results: Results are reported in Table 6. All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of w_load had lower loads on the most overloaded expert.

B HIERARCHICAL MIXTURE OF EXPERTS

If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network.3 If the hierarchical MoE consists of a groups of b experts each, we denote the primary gating network by G_primary, the secondary gating networks by (G_1, G_2, ..., G_a), and the expert networks by (E_{0,0}, E_{0,1}, ..., E_{a,b}). The output of the MoE is given by:

y_H = \sum_{i=1}^{a} \sum_{j=1}^{b} G_{primary}(x)_i \cdot G_i(x)_j \cdot E_{i,j}(x)   (12)

Our metrics of expert utilization change to the following:

Importance_H(X)_{i,j} = \sum_{x \in X} G_{primary}(x)_i \cdot G_i(x)_j   (13)

Load_H(X)_{i,j} = \frac{Load_{primary}(X)_i \cdot Load_i(X^{(i)})_j}{|X^{(i)}|}   (14)

Load_primary and Load_i denote the Load functions for the primary gating network and the ith secondary gating network respectively. X^{(i)} denotes the subset of X for which G_primary(x)_i > 0. It would seem simpler to let Load_H(X)_{i,j} = Load_i(X^{(i)})_j, but this would not have a gradient with respect to the primary gating network, so we use the formulation above.
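A schematic sketch of the hierarchical combination in Eq. 12 is shown below; the data structures for the gate values and experts are illustrative assumptions, not the paper's API.

```python
import numpy as np

def hierarchical_moe(x, G_primary, G_secondary, experts):
    """Two-level MoE output (Eq. 12).

    G_primary: length-a vector of (sparse) primary gate values for input x.
    G_secondary: list of a vectors, each of length b, from the secondary gates.
    experts: a x b grid of callables, experts[i][j](x) -> output vector.
    """
    y = 0.0
    for i, g_i in enumerate(G_primary):
        if g_i == 0.0:                 # skip unused expert groups (sparsity)
            continue
        for j, g_ij in enumerate(G_secondary[i]):
            if g_ij != 0.0:
                y = y + g_i * g_ij * experts[i][j](x)
    return y
```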
1701.06538#46
1701.06538#48
1701.06538
[ "1502.03167" ]
1701.06538#48
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
C 1 BILLION WORD LANGUAGE MODELING BENCHMARK - EXPERIMENTAL DETAILS

C.1 8-MILLION-OPERATIONS-PER-TIMESTEP MODELS

Model Architecture: Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout (Zaremba et al., 2014) to the layer output, dropping each activation with probability DropProb, otherwise dividing by (1 - DropProb). After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow
1701.06538#47
1701.06538#49
1701.06538
[ "1502.03167" ]
1701.06538#49
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
(He et al., 2015). MoE Layer Architecture: Each expert in the MoE layer is a feed-forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains [512 × 1024] + [1024 × 512] = 1M parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first-level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section 2.1) with k = 4 for the ordinary MoE layers and k = 2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.
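For concreteness, a schematic NumPy sketch of one such expert and the sparse expert combination follows; the class and function names, initialization scale, and the dictionary format for the gate values are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class Expert:
    """One expert: 512 -> 1024 (ReLU) -> 512, i.e. roughly 1M parameters."""
    def __init__(self, d_model=512, d_hidden=1024, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.02, size=(d_model, d_hidden))
        self.w2 = rng.normal(scale=0.02, size=(d_hidden, d_model))

    def __call__(self, x):
        return relu(x @ self.w1) @ self.w2

def moe_output(x, experts, gates):
    """Sparse combination of the k selected experts for one input x.

    gates: dict {expert_index: gate_value} with k nonzero entries, as produced
    by noisy-top-k gating (Section 2.1). The combined output is passed through
    a sigmoid, as described above.
    """
    y = sum(g * experts[i](x) for i, g in gates.items())
    return 1.0 / (1.0 + np.exp(-y))
```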
1701.06538#48
1701.06538#50
1701.06538
[ "1502.03167" ]
1701.06538#50
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
3 We have not found the need for deeper hierarchies.

Computationally-Matched Baselines: The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:

• MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.
1701.06538#49
1701.06538#51
1701.06538
[ "1502.03167" ]
1701.06538#51
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
• MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.

• 4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.

• LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions (Sak et al., 2014). The next timestep of the LSTM receives the projected output. This is identical to one of the models published in (Jozefowicz et al., 2016). We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.

Training: The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section 3. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number (a sketch of this schedule follows below). The Softmax output layer was trained efficiently using importance sampling similarly to the models in (Jozefowicz et al., 2016). For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1. To ensure balanced expert utilization we set w_importance = 0.1 and w_load = 0.1, as described in Section 4 and Appendix A.

Results: We evaluate our model using perplexity on the holdout dataset, used by (Chelba et al., 2013; Jozefowicz et al., 2016). We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table 7.
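The warm-up-then-decay schedule mentioned in the Training paragraph can be sketched as follows; the base rate value and the exact proportionality constant are assumptions for illustration.

```python
def learning_rate(step, base_lr=0.1, warmup_steps=1000):
    """Linear warm-up for the first warmup_steps, then inverse-square-root decay."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps          # linear increase
    return base_lr * (warmup_steps / step) ** 0.5     # proportional to 1/sqrt(step)
```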
1701.06538#50
1701.06538#52
1701.06538
[ "1502.03167" ]
1701.06538#52
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
For each model, we report the test perplexity, the computational budget, the parameter counts, the value of DropProb, and the computational efficiency.

Table 7: Model comparison on the 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016). Rows (models): Kneser-Ney 5-gram*, LSTM-512-512*, LSTM-1024-512*, LSTM-2048-512*, LSTM-2048-512, 4xLSTM-512, MoE-1-Wide, MoE-1-Deep, MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h, MoE-4096-h, 2xLSTM-8192-1024*, MoE-34M, MoE-143M. Columns: Test Perplexity (10 epochs), Test Perplexity (final), ops/timestep (millions), #Params excluding embedding & softmax (millions), Total #Params (billions), DropProb, and TFLOPS per GPU (observed).
1701.06538#51
1701.06538#53
1701.06538
[ "1502.03167" ]
1701.06538#53
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Test Perplexity (10 epochs / final): 67.6, 54.1, 48.2, 43.7, 45.0, 44.7, 46.0, 46.1, 45.7, 45.0, 39.7, 35.7, 36.0, 34.6, 34.1, 34.7, 31.3, 28.0, 30.6. Total #Params (billions): 1.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.9, 1.1, 1.1, 1.9, 5.1, 1.8, 6.0, 6.0. ops/timestep (millions): 0.00001, 2.4, 4.7, 9.4, 9.4, 8.4, 8.4, 8.4, 8.4, 8.4, 8.6, 8.4, 8.5, 8.9, 151.0, 33.8, 142.7. #Params excluding embedding & softmax (millions): 2.4, 4.7, 9.4, 9.4, 8.4, 8.4, 8.4, 8.4, 37.8, 272.9, 272.9, 1079.0, 4303.4, 151.0, 4313.9, 4371.1. DropProb: 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.2, 0.25, 0.3, 0.4. TFLOPS per GPU (observed): 0.61, 1.21, 1.07, 1.29, 1.29, 0.52, 0.87, 0.81, 0.89, 0.90, 0.74, 1.09, 1.22, 1.56.
1701.06538#52
1701.06538#54
1701.06538
[ "1502.03167" ]
1701.06538#54
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
C.2 MORE EXPENSIVE MODELS

We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer, are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 (Sak et al., 2014). MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best DropProb for each model, and trained each model for 10 epochs. The two models achieved test perplexity of 31.3 and 28.0 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table 7. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by 18%.

D 100 BILLION WORD GOOGLE NEWS CORPUS - EXPERIMENTAL DETAILS
1701.06538#53
1701.06538#55
1701.06538
[ "1502.03167" ]
1701.06538#55
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Model Architecture: The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first-level branching factors are 32, 32, 64, 128, 256 and 256, respectively.
1701.06538#54
1701.06538#56
1701.06538
[ "1502.03167" ]
1701.06538#56
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Training: Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words. We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage: The Adam optimizer (Kingma & Ba, 2015) keeps first and second moment estimates of the per-parameter gradients. This triples the required memory.
1701.06538#55
1701.06538#57
1701.06538
[ "1502.03167" ]
1701.06538#57
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
To avoid keeping a first-moment estimator, we set β1 = 0. To reduce the size of the second-moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad (Duchi et al., 2010).

Table 8: Model comparison on the 100 Billion Word Google News Dataset.

| Model | Test Perplexity (.1 epochs) | Test Perplexity (1 epoch) | ops/timestep (millions) | #Params excluding embed. & softmax (millions) | Total #Params (billions) | TFLOPS per GPU (observed) |
|---|---|---|---|---|---|---|
| Kneser-Ney 5-gram | 67.1 | 45.3 | 0.00001 | - | 76.0 | - |
| 4xLSTM-512 | 54.5 | 47.0 | 8.4 | 8.4 | 0.1 | 1.23 |
| MoE-32 | 48.5 | 40.4 | 8.4 | 37.8 | 0.1 | 0.83 |
| MoE-256-h | 42.8 | 35.3 | 8.4 | 272.9 | 0.4 | 1.11 |
| MoE-1024-h | 40.3 | 32.7 | 8.5 | 1079.0 | 1.2 | 1.14 |
| MoE-4096-h | 38.9 | 30.9 | 8.6 | 4303.4 | 4.4 | 1.07 |
| MoE-16384-h | 38.2 | 29.7 | 8.8 | 17201.0 | 17.3 | 0.96 |
| MoE-65536-h | 38.2 | 28.9 | 9.2 | 68791.0 | 68.9 | 0.72 |
| MoE-131072-h | 39.8 | 29.2 | 9.7 | 137577.6 | 137.7 | 0.30 |

Results: We evaluate our model using perplexity on a holdout dataset. Results are reported in Table 8. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model.
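A minimal sketch of the factored second-moment trick described above is shown below, assuming a single weight matrix; the decay constant, update details, and class name are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

class FactoredSecondMoment:
    """Factored approximation of Adam's per-parameter second moment (with beta1 = 0).

    For a weight matrix, keep only row-wise and column-wise running averages of the
    squared gradients; their outer product, divided by the mean of one of the two
    vectors, stands in for the full second-moment matrix.
    """
    def __init__(self, shape, beta2=0.999, eps=1e-30):
        self.row = np.zeros(shape[0])   # row-wise average of squared gradients
        self.col = np.zeros(shape[1])   # column-wise average of squared gradients
        self.beta2, self.eps = beta2, eps

    def update(self, grad):
        sq = grad ** 2
        self.row = self.beta2 * self.row + (1 - self.beta2) * sq.mean(axis=1)
        self.col = self.beta2 * self.col + (1 - self.beta2) * sq.mean(axis=0)
        # reconstructed second-moment estimate: outer product / mean of one factor
        return np.outer(self.row, self.col) / (self.row.mean() + self.eps)

    def step(self, param, grad, lr=0.01, eps=1e-8):
        v_hat = self.update(grad)
        # beta1 = 0, so there is no first-moment accumulator to store
        return param - lr * grad / (np.sqrt(v_hat) + eps)
```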
1701.06538#56
1701.06538#58
1701.06538
[ "1502.03167" ]
1701.06538#58
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing (Kneser & Ney, 1995).4

E MACHINE TRANSLATION - EXPERIMENTAL DETAILS

Model Architecture for Single Language Pair MoE Models: Our model is a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention.5 All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow
1701.06538#57
1701.06538#59
1701.06538
[ "1502.03167" ]
1701.06538#59
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
(He et al., 2015). Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as "wordpieces") (Schuster & Nakajima, 2012) for inputs and outputs in our system. We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in (Wu et al., 2016). We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts.
1701.06538#58
1701.06538#60
1701.06538
[ "1502.03167" ]
1701.06538#60
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
The flat MoE layers use k = 4 and the hierarchical MoE models use k = 2 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed-forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains [512 × 2048] + [2048 × 512] = 2M parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix F.

Model Architecture for Multilingual MoE Model: We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section 2.1, not the scheme from Appendix F. The MoE layers in the encoder and decoder are non-hierarchical MoEs with n = 512 experts, and k = 2. Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.
1701.06538#59
1701.06538#61
1701.06538
[ "1502.03167" ]
1701.06538#61
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Training: We trained our networks using the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to (Wu et al., 2016), we applied dropout (Zaremba et al., 2014) to the output of all embedding, LSTM and MoE layers, using DropProb = 0.4. Training was done synchronously on a cluster of up to 64 GPUs as described in Section 3. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU. To ensure balanced expert utilization we set w_importance = 0.01 and w_load = 0.01, as described in Section 4 and Appendix A.

Metrics: We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in (Luong et al., 2015a).

4 While the original size of the corpus was 130 billion words, the neural models were trained for a maximum of 100 billion words. The reported Kneser-Ney 5-gram models were trained over 13 billion and 130 billion words respectively, giving them a slight advantage over the other reported results.

5 For performance reasons, we use a slightly different attention function from the one described in (Wu et al., 2016) - see Appendix G.
1701.06538#60
1701.06538#62
1701.06538
[ "1502.03167" ]
1701.06538#62
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Results: Tables 2, 3 and 4 in Section 5.3 show comparisons of our results to other published methods. Figure 4 shows test perplexity as a function of the number of words in the (training data's) source sentences processed, for models with different numbers of experts. As can be seen from the figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.

[Figure 4 plots; legend: #Experts = 0, 32, 512, 2048; x-axis: number of source words processed; y-axis: test perplexity.]

Figure 4: Perplexity on WMT'14 En→Fr (left) and Google Production En→Fr (right) datasets as a function of number of words processed. The large differences between models at the beginning of training are due to different batch sizes. All models incur the same computational budget (85M ops/timestep) except the one with no experts.

We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table 9. For example, one expert is used when the indefinite article "a" introduces the direct object in a verb phrase indicating importance or leadership.
1701.06538#61
1701.06538#63
1701.06538
[ "1502.03167" ]
1701.06538#63
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Table 9: Contexts corresponding to a few of the 2048 experts in the MoE layer in the encoder portion of the WMT'14 En→Fr translation model. For each expert i, we sort the inputs in a training batch in decreasing order of G(x)_i, and show the words surrounding the corresponding positions in the input sentences.

Expert 381: ... with researchers , ... / ... to innovation . ... / ... tics researchers . ... / ... the generation of ... / ... technology innovations is ... / ... technological innovations , ... / ... support innovation throughout ... / ... role innovation will ... / ... research scienti st ... / ... promoting innovation where ...

Expert 752: ... plays a core ... / ... plays a critical ... / ... provides a legislative ... / ... play a leading ... / ... assume a leadership ... / ... plays a central ... / ... taken a leading ... / ... established a reconciliation ... / ... played a vital ... / ... have a central ...

Expert 2004: ... with rapidly growing ... / ... under static conditions ... / ... to swift ly ... / ... to dras tically ... / ... the rapid and ... / ... the fast est ... / ... the Quick Method ... / ... rec urrent ) ... / ... provides quick access ... / ... of volatile organic ...

F STRICTLY BALANCED GATING

Due to some peculiarities in our infrastructure which have since been
1701.06538#62
1701.06538#64
1701.06538
[ "1502.03167" ]
1701.06538#64
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function, which we describe below. Recall that we define the softmax gating function to be:

G_\sigma(x) = Softmax(x \cdot W_g)   (15)

Sparse Gating (alternate formulation): To obtain a sparse gating vector, we multiply G_\sigma(x) component-wise with a sparse mask M(G_\sigma(x)) and normalize the output. The mask itself is a function of G_\sigma(x) and specifies
1701.06538#63
1701.06538#65
1701.06538
[ "1502.03167" ]
1701.06538#65
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
which experts are assigned to each input example:

G(x)_i = \frac{G_\sigma(x)_i \, M(G_\sigma(x))_i}{\sum_{j=1}^{n} G_\sigma(x)_j \, M(G_\sigma(x))_j}   (16)

Top-K Mask: To implement top-k gating in this formulation, we would let M(v) = TopK(v, k), where:

TopK(v, k)_i = 1 if v_i is in the top k elements of v, and 0 otherwise.   (17)

Batchwise Mask: To force each expert to receive the exact same number of examples, we introduce an alternative mask function, M_{batchwise}(X, m), which operates over batches of input vectors. Instead of keeping the top k values per example, we keep the top m values per expert across the training batch, where m = k|X|/n:

M_{batchwise}(X, m)_{j,i} = 1 if X_{j,i} is in the top m values for expert i, and 0 otherwise.   (18)

As our experiments suggest, and as also observed in (Ioffe & Szegedy, 2015), using a batchwise function during training (such as M_{batchwise}) requires modifications to inference when we may not have a large batch of examples. Our solution to this is to train a vector T of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time:

M_{threshold}(x, T)_i = 1 if x_i > T_i, and 0 otherwise.   (19)

To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical:

L_{batchwise}(X, T, m) = \sum_{j=1}^{|X|} \sum_{i=1}^{n} \big( M_{threshold}(X_j, T)_i - M_{batchwise}(X, m)_{j,i} \big) \big( X_{j,i} - T_i \big)   (20)
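A compact NumPy sketch of these masks and the threshold loss is given below; shapes and function names are illustrative assumptions, and the matrix X is assumed to hold the softmax gate values for a batch (rows = examples, columns = experts).

```python
import numpy as np

def topk_mask(v, k):
    """Eq. 17: 1 for the k largest entries of the vector v, 0 elsewhere."""
    mask = np.zeros_like(v)
    mask[np.argsort(-v)[:k]] = 1.0
    return mask

def batchwise_mask(X, m):
    """Eq. 18: for each expert (column), keep the top m gate values in the batch."""
    mask = np.zeros_like(X)
    for i in range(X.shape[1]):
        mask[np.argsort(-X[:, i])[:m], i] = 1.0
    return mask

def threshold_mask(X, T):
    """Eq. 19: per-expert thresholds used at inference time."""
    return (X > T).astype(X.dtype)

def threshold_loss(X, T, m):
    """Eq. 20: pushes the threshold mask toward the batchwise mask."""
    diff = threshold_mask(X, T) - batchwise_mask(X, m)
    return np.sum(diff * (X - T))

# Sparse, renormalized gates (Eq. 16) for one example x with softmax gates g:
#   G = g * mask / np.sum(g * mask)
```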
1701.06538#64
1701.06538#66
1701.06538
[ "1502.03167" ]
1701.06538#66
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
G ATTENTION FUNCTION

The attention mechanism described in GNMT (Wu et al., 2016) involves a learned "Attention Function" A(x_i, y_j) which takes a "source vector" x_i and a "target vector" y_j, and must be computed for every source time step i and target time step j. In GNMT, the attention function is implemented as a feed-forward neural network with a hidden layer of size n. It can be expressed as:

A_{GNMT}(x_i, y_j) = \sum_{d=1}^{n} V_d \tanh\big( (x_i U)_d + (y_j W)_d \big)   (21)

where U and W are trainable weight matrices and V is a trainable weight vector. For performance reasons, in our models, we used a slightly different attention function:

A(x_i, y_j) = \sum_{d=1}^{n} V_d \tanh\big( (x_i U)_d \big) \tanh\big( (y_j W)_d \big)   (22)
1701.06538#65
1701.06538#67
1701.06538
[ "1502.03167" ]
1701.06538#67
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions.
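The batched form of Eq. 22 amounts to two matrix products; a minimal NumPy sketch follows, with shapes and variable names chosen for illustration.

```python
import numpy as np

def attention_scores(Xs, Yt, U, W, V):
    """Multiplicative attention variant (Eq. 22) for all source/target step pairs.

    Xs: [S, d_src] source vectors; Yt: [T, d_tgt] target vectors;
    U: [d_src, n], W: [d_tgt, n], V: [n] trainable parameters.
    """
    src = np.tanh(Xs @ U)          # [S, n]
    tgt = np.tanh(Yt @ W)          # [T, n]
    # A(x_i, y_j) = sum_d V_d * tanh((x_i U)_d) * tanh((y_j W)_d)
    return (src * V) @ tgt.T       # [S, T] matrix of attention scores
```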
1701.06538#66
1701.06538
[ "1502.03167" ]
1701.01036#0
Demystifying Neural Style Transfer
arXiv:1701.01036v2 [cs.CV] 1 Jul 2017

# Demystifying Neural Style Transfer

Yanghao Li† Xiaodi Hou‡
† Institute of Computer Science and Technology, Peking University  ‡ TuSimple
[email protected] [email protected] [email protected] [email protected]
1701.01036#1
1701.01036
[ "1603.01768" ]
1701.01036#1
Demystifying Neural Style Transfer
# Abstract

Neural Style Transfer [Gatys et al., 2016] has recently demonstrated very exciting results which catch eyes in both academia and industry. Despite the amazing results, the principle of neural style transfer, especially why the Gram matrices could represent style, remains unclear. In this paper, we propose a novel interpretation of neural style transfer by treating it as a domain adaptation problem. Specifically, we theoretically show that matching the Gram matrices of feature maps is equivalent to minimizing the Maximum Mean Discrepancy (MMD) with the second order polynomial kernel. Thus, we argue that the essence of neural style transfer is to match the feature distributions between the style images and the generated images. To further support our standpoint, we experiment with several other distribution alignment methods, and achieve appealing results. We believe this novel interpretation connects these two important research fields, and could enlighten future research.
1701.01036#0
1701.01036#2
1701.01036
[ "1603.01768" ]
1701.01036#2
Demystifying Neural Style Transfer
why the Gram matrix can represent artistic style still remains a mystery. In this paper, we propose a novel interpretation of neural style transfer by casting it as a special domain adaptation [Beijbom, 2012; Patel et al., 2015] problem. We theoretically prove that matching the Gram matrices of the neural activations can be seen as minimizing a specific Maximum Mean Discrepancy (MMD) [Gretton et al., 2012a]. This reveals that neural style transfer is intrinsically a process of distribution alignment of the neural activations between images. Based on this illuminating analysis, we also experiment with other distribution alignment methods, including MMD with different kernels and a simplified moment matching method. These methods achieve diverse but all reasonable style transfer results.
1701.01036#1
1701.01036#3
1701.01036
[ "1603.01768" ]
1701.01036#3
Demystifying Neural Style Transfer
Specifically, a transfer method by MMD with a linear kernel achieves comparable visual results yet with a lower complexity. Thus, the second order interaction in the Gram matrix is not a must for style transfer. Our interpretation provides a promising direction to design style transfer methods with different visual results. To summarize, our contributions are as follows:

1 Introduction

Transferring the style from one image to another image is an interesting yet difficult problem. There have been many efforts to develop efficient methods for automatic style transfer [Hertzmann et al., 2001; Efros and Freeman, 2001; Efros and Leung, 1999; Shih et al., 2014; Kwatra et al., 2005]. Recently, Gatys et al. proposed a seminal work [Gatys et al., 2016]: it captures the style of artistic images and transfers it to other images using Convolutional Neural Networks (CNN). This work formulated the problem as finding an image that matches both the content and style statistics based on the neural activations of each layer in a CNN. It achieved impressive results and several follow-up works improved upon this innovative approach [Johnson et al., 2016; Ulyanov et al., 2016; Ruder et al., 2016; Ledig et al., 2016]. Despite the fact that this work has drawn lots of attention, the fundamental element of style representation, the Gram matrix in [Gatys et al., 2016], is not fully explained. The reason
1701.01036#2
1701.01036#4
1701.01036
[ "1603.01768" ]
1701.01036#4
Demystifying Neural Style Transfer
1. First, we demonstrate that matching Gram matrices in neural style transfer [Gatys et al., 2016] can be reformulated as minimizing MMD with the second order polynomial kernel.

2. Second, we extend the original neural style transfer with different distribution alignment methods based on our novel interpretation.

2 Related Work

In this section, we briefly review some closely related works and the key concept MMD in our interpretation.

Style Transfer Style transfer is an active topic in both academia and industry. Traditional methods mainly focus on non-parametric patch-based texture synthesis and transfer, which resamples pixels or patches from the original source texture images [Hertzmann et al., 2001; Efros and Freeman, 2001; Efros and Leung, 1999; Liang et al., 2001]. Different methods were proposed to improve the quality of the patch-based synthesis and constrain the structure of the target image. For example, the image quilting algorithm based on dynamic programming was proposed to find optimal texture
1701.01036#3
1701.01036#5
1701.01036
[ "1603.01768" ]
1701.01036#5
Demystifying Neural Style Transfer
* Corresponding author.

boundaries in [Efros and Freeman, 2001]. A Markov Random Field (MRF) was exploited to preserve global texture structures in [Frigo et al., 2016]. However, these non-parametric methods suffer from a fundamental limitation that they only use the low-level features of the images for transfer. Recently, neural style transfer [Gatys et al., 2016] has demonstrated remarkable results for image stylization. It fully takes advantage of the powerful representation of Deep Convolutional Neural Networks (CNN). This method used Gram matrices of the neural activations from different layers of a CNN to represent the artistic style of an image. Then it used an iterative optimization method to generate a new image from white noise by matching the neural activations with the content image and the Gram matrices with the style image. This novel technique attracted many follow-up works for different aspects of improvements and applications. To speed up the iterative optimization process in [Gatys et al., 2016], Johnson et al. [Johnson et al., 2016] and Ulyanov et al. [Ulyanov et al., 2016] trained a feed-forward generative network for fast neural style transfer. To improve the transfer results of [Gatys et al., 2016], different complementary schemes have been proposed, including spatial constraints [Selim et al., 2016], semantic guidance [Champandard, 2016] and a Markov Random Field (MRF) prior [Li and Wand, 2016]. There are also some extension works applying neural style transfer to other applications. Ruder et al. [Ruder et al., 2016] incorporated temporal consistency terms by penalizing deviations between frames for video style transfer. Selim et al. [Selim et al., 2016] proposed novel spatial constraints through a gain map for portrait painting transfer. Although these methods further improve over the original neural style transfer, they all ignore the fundamental question in neural style transfer: Why could the Gram matrices represent the artistic style? This vagueness of the understanding limits further research on neural style transfer.

Domain Adaptation Domain adaptation belongs to the area of transfer learning [Pan and Yang, 2010]. It aims to transfer the model that is learned on the source domain to the unlabeled target domain.
1701.01036#4
1701.01036#6
1701.01036
[ "1603.01768" ]
1701.01036#6
Demystifying Neural Style Transfer
The key component of domain adaptation is to measure and minimize the difference between the source and target distributions. The most common discrepancy metric is Maximum Mean Discrepancy (MMD) [Gretton et al., 2012a], which measures the difference of sample means in a Reproducing Kernel Hilbert Space. It is a popular choice in domain adaptation works [Tzeng et al., 2014; Long et al., 2015; Long et al., 2016]. Besides MMD, Sun et al. [Sun et al., 2016] aligned the second order statistics by whitening the data in the source domain and then re-correlating to the target domain. In [Li et al., 2017], Li et al. proposed a parameter-free deep adaptation method by simply modulating the statistics in all Batch Normalization (BN) layers.

Maximum Mean Discrepancy Suppose there are two sets of samples X = {x_i}_{i=1}^{n} and Y = {y_j}_{j=1}^{m}, where x_i and y_j are generated from distributions p and q, respectively. Maximum Mean Discrepancy (MMD) is a popular test statistic for the two-sample testing problem, where acceptance or rejection decisions are made for a null hypothesis p = q [Gretton
1701.01036#5
1701.01036#7
1701.01036
[ "1603.01768" ]
1701.01036#7
Demystifying Neural Style Transfer
et al., 2012a]. Since the population MMD vanishes if and only if p = q, the MMD statistic can be used to measure the difference between two distributions. Specifically, we calculate MMD as the difference between the mean embeddings of the two sets of samples. Formally, the squared MMD is defined as:

MMD^2[X, Y] = \left\| E_x[\phi(x)] - E_y[\phi(y)] \right\|^2 = \left\| \frac{1}{n} \sum_{i=1}^{n} \phi(x_i) - \frac{1}{m} \sum_{j=1}^{m} \phi(y_j) \right\|^2
= \frac{1}{n^2} \sum_{i=1}^{n} \sum_{i'=1}^{n} \phi(x_i)^T \phi(x_{i'}) + \frac{1}{m^2} \sum_{j=1}^{m} \sum_{j'=1}^{m} \phi(y_j)^T \phi(y_{j'}) - \frac{2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \phi(x_i)^T \phi(y_j)   (1)

where \phi(\cdot) is the explicit feature mapping function of MMD. Applying the associated kernel function k(x, y) = \langle \phi(x), \phi(y) \rangle, Eq. 1 can be expressed in kernel form:

MMD^2[X, Y] = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{i'=1}^{n} k(x_i, x_{i'}) + \frac{1}{m^2} \sum_{j=1}^{m} \sum_{j'=1}^{m} k(y_j, y_{j'}) - \frac{2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} k(x_i, y_j)   (2)
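A small NumPy sketch of the (biased) estimator in Eq. 2 is given below; the function names and toy data are illustrative, not from the paper.

```python
import numpy as np

def mmd2(X, Y, kernel):
    """Biased estimator of squared MMD (Eq. 2) between samples X [n, d] and Y [m, d]."""
    Kxx = kernel(X, X)
    Kyy = kernel(Y, Y)
    Kxy = kernel(X, Y)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def poly2_kernel(A, B, c=0.0):
    """Second-order polynomial kernel k(x, y) = (x^T y + c)^2."""
    return (A @ B.T + c) ** 2

# example: two small sample sets drawn from slightly different Gaussians
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 8))
Y = rng.normal(0.5, 1.0, size=(120, 8))
print(mmd2(X, Y, poly2_kernel))
```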
1701.01036#6
1701.01036#8
1701.01036
[ "1603.01768" ]
1701.01036#8
Demystifying Neural Style Transfer
The kernel function k(·, ·) implicitly defines a mapping to a higher dimensional feature space.

3 Understanding Neural Style Transfer

In this section, we first theoretically demonstrate that matching Gram matrices is equivalent to minimizing a specific form of MMD. Then, based on this interpretation, we extend the original neural style transfer with different distribution alignment methods.

Before explaining our observation, we first briefly review the original neural style transfer approach [Gatys et al., 2016]. The goal of style transfer is to generate a stylized image x* given a content image x_c and a reference style image x_s. The feature maps of x*, x_c and x_s in the layer l of a CNN are denoted by F^l \in R^{N_l \times M_l}, P^l \in R^{N_l \times M_l} and S^l \in R^{N_l \times M_l} respectively, where N_l is the number of the feature maps in the layer l and M_l is the height times the width of the feature map. In [Gatys et al., 2016], neural style transfer iteratively generates x* by optimizing a content loss and a style loss:

L = \alpha L_{content} + \beta L_{style}   (3)

where \alpha and \beta are the weights for the content and style losses. L_{content} is defined by the squared error between the feature maps of a specific layer l for x* and x_c:

L_{content} = \frac{1}{2} \sum_{i=1}^{N_l} \sum_{j=1}^{M_l} \left( F^l_{ij} - P^l_{ij} \right)^2   (4)

and L_{style} is the sum of several style losses L^l_{style} in different layers:

L_{style} = \sum_{l} w_l L^l_{style}   (5)

where w_l is the weight of the loss in the layer l and L^l_{style} is defined
1701.01036#7
1701.01036#9
1701.01036
[ "1603.01768" ]
1701.01036#9
Demystifying Neural Style Transfer
by the squared error between the feature correlations expressed by the Gram matrices of x* and x_s:

L^l_{style} = \frac{1}{4 N_l^2 M_l^2} \sum_{i=1}^{N_l} \sum_{j=1}^{N_l} \left( G^l_{ij} - A^l_{ij} \right)^2   (6)

where the Gram matrix G^l \in R^{N_l \times N_l} is the inner product between the vectorized feature maps of x* in layer l:

G^l_{ij} = \sum_{k=1}^{M_l} F^l_{ik} F^l_{jk}   (7)

and similarly A^l is the Gram matrix corresponding to S^l.

3.1 Reformulation of the Style Loss

In this section, we reformulate the style loss L_{style} in Eq. 6. By expanding the Gram matrix in Eq. 6, we get the formulation of Eq. 8, where f^l_{\cdot k} and s^l_{\cdot k} are the k-th columns of F^l and S^l, respectively:

L^l_{style} = \frac{1}{4 N_l^2 M_l^2} \sum_{i=1}^{N_l} \sum_{j=1}^{N_l} \left( \sum_{k=1}^{M_l} F^l_{ik} F^l_{jk} - \sum_{k=1}^{M_l} S^l_{ik} S^l_{jk} \right)^2 = \frac{1}{4 N_l^2 M_l^2} \sum_{k_1=1}^{M_l} \sum_{k_2=1}^{M_l} \left( \left( {f^l_{\cdot k_1}}^T f^l_{\cdot k_2} \right)^2 + \left( {s^l_{\cdot k_1}}^T s^l_{\cdot k_2} \right)^2 - 2 \left( {f^l_{\cdot k_1}}^T s^l_{\cdot k_2} \right)^2 \right)   (8)

By using the second order degree polynomial kernel k(x, y) = (x^T y)^2, Eq. 8 can be represented as:

L^l_{style} = \frac{1}{4 N_l^2 M_l^2} \sum_{k_1=1}^{M_l} \sum_{k_2=1}^{M_l} \left( k(f^l_{\cdot k_1}, f^l_{\cdot k_2}) + k(s^l_{\cdot k_1}, s^l_{\cdot k_2}) - 2 k(f^l_{\cdot k_1}, s^l_{\cdot k_2}) \right) = \frac{1}{4 N_l^2} MMD^2[\mathcal{F}^l, \mathcal{S}^l]   (9)

where \mathcal{F}^l is the feature set of x*, in which each sample is a column of F^l, and \mathcal{S}^l corresponds to the style image x_s. In this way, the activations at each position of the feature map are considered as individual samples. Consequently, the style loss ignores the positions of the features, which is desired for style transfer.
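The equivalence in Eqs. 8-9 is easy to verify numerically. Below is a toy NumPy sketch (shapes and names are illustrative assumptions):

```python
import numpy as np

def gram_style_loss(F, S):
    """Eq. 6: squared Gram-matrix difference for feature maps F, S of shape [N, M]."""
    N, M = F.shape
    G, A = F @ F.T, S @ S.T
    return np.sum((G - A) ** 2) / (4.0 * N**2 * M**2)

def mmd2_poly(F, S):
    """Squared MMD between the columns of F and S with kernel k(x, y) = (x^T y)^2."""
    X, Y = F.T, S.T                      # each column (spatial position) is a sample
    Kxx, Kyy, Kxy = (X @ X.T)**2, (Y @ Y.T)**2, (X @ Y.T)**2
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
F = rng.normal(size=(16, 49))            # toy "feature maps": 16 channels, 7x7 positions
S = rng.normal(size=(16, 49))
lhs = gram_style_loss(F, S)
rhs = mmd2_poly(F, S) / (4.0 * 16**2)    # Eq. 9: L_style = MMD^2 / (4 N^2)
print(np.allclose(lhs, rhs))             # True
```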
1701.01036#8
1701.01036#10
1701.01036
[ "1603.01768" ]
1701.01036#10
Demystifying Neural Style Transfer
In conclusion, the above reformulations suggest two important findings:

1. The style of an image can be intrinsically represented by feature distributions in different layers of a CNN.

2. Style transfer can be seen as a distribution alignment process from the content image to the style image.

# 3.2 Different Adaptation Methods for Neural Style Transfer

Our interpretation reveals that neural style transfer can be seen as a problem of distribution alignment, which is also at the core of domain adaptation. If we consider the style of one image in a certain layer of a CNN as a "
1701.01036#9
1701.01036#11
1701.01036
[ "1603.01768" ]
1701.01036#11
Demystifying Neural Style Transfer
domain", style transfer can also be seen as a special domain adaptation problem. The specialty of this problem lies in that we treat the feature at each position of the feature map as one individual data sample, instead of treating each image as one data sample as in the traditional domain adaptation problem. (E.g., the feature map of the last convolutional layer in the VGG-19 model is of size 14 × 14, so we have 196 samples in total in this "
1701.01036#10
1701.01036#12
1701.01036
[ "1603.01768" ]
1701.01036#12
Demystifying Neural Style Transfer
domain".) Inspired by the studies of domain adaptation, we extend neural style transfer with different adaptation methods in this subsection.

MMD with Different Kernel Functions As shown in Eq. 9, matching Gram matrices in neural style transfer can be seen as an MMD process with a second order polynomial kernel. It is very natural to apply other kernel functions for MMD in style transfer. First, if using MMD statistics to measure the style discrepancy, the style loss can be defined
1701.01036#11
1701.01036#13
1701.01036
[ "1603.01768" ]
1701.01036#13
Demystifying Neural Style Transfer
as:

L^l_{style} = \frac{1}{Z_k^l} MMD^2[\mathcal{F}^l, \mathcal{S}^l] = \frac{1}{Z_k^l M_l^2} \sum_{k_1=1}^{M_l} \sum_{k_2=1}^{M_l} \left( k(f^l_{\cdot k_1}, f^l_{\cdot k_2}) + k(s^l_{\cdot k_1}, s^l_{\cdot k_2}) - 2 k(f^l_{\cdot k_1}, s^l_{\cdot k_2}) \right)   (10)

where Z_k^l is the normalization term corresponding to the different scales of the feature map in the layer l and the choice of kernel function. Theoretically, different kernel functions implicitly map features to different higher dimensional spaces. Thus, we believe that different kernel functions should capture different aspects of a style. We adopt the following three popular kernel functions in our experiments: (1) Linear kernel: k(x, y) = x^T y; (2) Polynomial kernel: k(x, y) = (x^T y + c)^d; (3) Gaussian kernel: k(x, y) = \exp\left( -\frac{\|x - y\|_2^2}{2 \sigma^2} \right). For the polynomial kernel, we only use the version with d = 2. Note that matching Gram matrices is equivalent to the polynomial kernel with c = 0 and d = 2. For the Gaussian kernel, we adopt the unbiased estimation of MMD [Gretton et al., 2012b], which samples M_l pairs in Eq. 10 and thus can be computed with linear complexity.
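A minimal NumPy sketch of these kernels and the MMD-based style loss of Eq. 10 follows; the normalization Z, default constants, and function names are illustrative assumptions.

```python
import numpy as np

def linear_kernel(A, B):
    return A @ B.T

def poly_kernel(A, B, c=0.0, d=2):
    return (A @ B.T + c) ** d

def gaussian_kernel(A, B, sigma2=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma2))

def mmd_style_loss(F, S, kernel, Z=1.0):
    """Eq. 10 with a pluggable kernel; the columns of F, S ([N, M]) are the samples."""
    X, Y = F.T, S.T
    mmd2 = kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()
    return mmd2 / Z
```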
1701.01036#12
1701.01036#14
1701.01036
[ "1603.01768" ]
1701.01036#14
Demystifying Neural Style Transfer
BN Statistics Matching In [Li et al., 2017], the authors found that the statistics (i.e., mean and variance) of Batch Normalization (BN) layers contain the traits of different domains. Inspired by this observation, they utilized separate BN statistics for different domains. This simple operation aligns the different domain distributions effectively. As a special domain adaptation problem, we believe that the BN statistics of a certain layer can also represent the style. Thus, we construct another style loss by aligning the BN statistics (mean and standard deviation) of two feature maps between two images:

L^l_{style} = \frac{1}{N_l} \sum_{i=1}^{N_l} \left( \left( \mu^i_{F^l} - \mu^i_{S^l} \right)^2 + \left( \sigma^i_{F^l} - \sigma^i_{S^l} \right)^2 \right)   (11)

where \mu^i_{F^l} and \sigma^i_{F^l} are the mean and standard deviation of the i-th feature channel among all the positions of the feature map in the layer l for image x*:

\mu^i_{F^l} = \frac{1}{M_l} \sum_{j=1}^{M_l} F^l_{ij}, \qquad \left( \sigma^i_{F^l} \right)^2 = \frac{1}{M_l} \sum_{j=1}^{M_l} \left( F^l_{ij} - \mu^i_{F^l} \right)^2   (12)

and \mu^i_{S^l} and \sigma^i_{S^l} correspond to the style image x_s.
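In code, the BN-statistics loss of Eqs. 11-12 is a one-liner per statistic; a short NumPy sketch (names and shapes are illustrative assumptions):

```python
import numpy as np

def bn_style_loss(F, S):
    """Eqs. 11-12: match per-channel mean and standard deviation of F, S ([N, M])."""
    mu_f, mu_s = F.mean(axis=1), S.mean(axis=1)
    sigma_f, sigma_s = F.std(axis=1), S.std(axis=1)
    return np.mean((mu_f - mu_s) ** 2 + (sigma_f - sigma_s) ** 2)
```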
1701.01036#13
1701.01036#15
1701.01036
[ "1603.01768" ]
1701.01036#15
Demystifying Neural Style Transfer
The aforementioned style loss functions are all differentiable and thus the style matching problem can be solved by back-propagation iteratively.

# 4 Results

In this section, we briefly introduce some implementation details and present results from our extended neural style transfer methods. Furthermore, we also show the results of fusing different neural style transfer methods, which combine different style losses. In the following, we refer to the four extended style transfer methods introduced in Sec. 3.2 as linear, poly, Gaussian and BN, respectively. The images in the experiments are collected from the public implementations of neural style transfer.1,2,3

Implementation Details In the implementation, we use the VGG-19 network [Simonyan and Zisserman, 2015], following the choice in [Gatys et al., 2016]. We also adopt the relu4_2 layer for the content loss, and relu1_1, relu2_1, relu3_1, relu4_1, relu5_1 for the style loss.
1701.01036#14
1701.01036#16
1701.01036
[ "1603.01768" ]
1701.01036#16
Demystifying Neural Style Transfer
The default weight factor w_l is set as 1.0 if it is not specified. The target image x* is initialized randomly and optimized iteratively until the relative change between successive iterations is under 0.5%. The maximum number of iterations is set as 1000. For the method with Gaussian kernel MMD, the kernel bandwidth \sigma^2 is fixed as the mean of the squared l2 distances of the sampled pairs, since

1 https://github.com/dmlc/mxnet/tree/master/example/neural-style
1701.01036#15
1701.01036#17
1701.01036
[ "1603.01768" ]
1701.01036#17
Demystifying Neural Style Transfer
it does not affect the visual results much. Our implementation is based on the MXNet [Chen et al., 2016] implementation,1 which reproduces the results of the original neural style transfer [Gatys et al., 2016]. Since the scales of the gradients of the style loss differ for different methods, and the weights \alpha and \beta in Eq. 3 affect the results of style transfer, we fix some factors to make a fair comparison. Specifically, we set \alpha = 1 because the content losses are the same among different methods. Then, for each method, we first manually select a proper \beta' such that the gradients on x* from the style loss are of the same order of magnitude as those from the content loss. Thus, we can manipulate a balance factor \gamma (\beta = \gamma \beta') to make a trade-off between the content and style matching.

# 4.1 Different Style Representations

[Figure 1 panels: Style Image, Layer 1, Layer 2, Layer 3, Layer 4, Layer 5.]

Figure 1: Style reconstructions of different methods in five layers, respectively. Each row corresponds to one method and the reconstruction results are obtained by only using the style loss L_{style} with \alpha = 0. We also reconstruct different style representations in different subsets of layers of the VGG network. For example, "layer 3" contains the style loss of the first 3 layers (w_1 = w_2 = w_3 = 1.0 and w_4 = w_5 = 0.0).

2 https://github.com/jcjohnson/neural-style
3 https://github.com/jcjohnson/fast-neural-style

To validate that the extended neural style transfer methods can capture the style representation of an artistic image,
1701.01036#16
1701.01036#18
1701.01036
[ "1603.01768" ]
1701.01036#18
Demystifying Neural Style Transfer
(a) Content / Style (b) γ = 0.1 (c) γ = 0.2 (d) γ = 1.0 (e) γ = 5.0 (f) γ = 10.0

Figure 2: Results of the four methods (linear, poly, Gaussian and BN) with different balance factors γ. Larger γ means more emphasis on the style loss.
1701.01036#17
1701.01036#19
1701.01036
[ "1603.01768" ]
1701.01036#19
Demystifying Neural Style Transfer
we first visualize the style reconstruction results of the different methods using only the style loss in Fig. 1. Moreover, Fig. 1 also compares the style representations of different layers. On one hand, for a specific method (one row), the results show that different layers capture different levels of style: the textures in the top layers usually have larger granularity than those in the bottom layers. This is reasonable because each neuron in the top layers has a larger receptive field and thus has the ability to capture more global textures. On the other hand, for a specific layer, Fig. 1 also demonstrates that the style captured by different methods differs. For example, in top layers, the textures captured by MMD with a linear kernel are composed of thick strokes. Contrarily, the textures captured by MMD with a polynomial kernel are more fine-grained.
1701.01036#18
1701.01036#20
1701.01036
[ "1603.01768" ]
1701.01036#20
Demystifying Neural Style Transfer
# 4.2 Result Comparisons

Effect of the Balance Factor We first explore the effect of the balance factor between the content loss and style loss by varying the weight γ. Fig. 2 shows the results of the four transfer methods with various γ from 0.1 to 10.0. As intended, the global color information in the style image is successfully transferred to the content image, and the results with smaller γ preserve more content details, as shown in Fig. 2(b) and Fig. 2(c). When γ becomes larger, more stylized textures are incorporated into the results. For example, Fig. 2(e) and Fig. 2(f) have illumination and textures much more similar to the style image, while Fig. 2(d) shows a balanced result between the content and style. Thus, users can make a trade-off between the content and the style by varying γ.

(a) Content / Style (b) linear (c) poly (d) Gaussian (e) BN

Figure 3: Visual results of several style transfer methods, including linear, poly, Gaussian and BN. The balance factors γ in the six examples are 2.0, 2.0, 2.0, 5.0, 5.0 and 5.0, respectively.

(a) Content / Style (b) (0.9, 0.1) (c) (0.7, 0.3) (d) (0.5, 0.5) (e) (0.3, 0.7) (f) (0.1, 0.9)

Figure 4: Results of two fusion methods: BN + poly and linear + Gaussian. The top two rows are the results of the first fusion method and the bottom two rows correspond to the second one. Each column shows the results of a balance weight between the two methods. γ is set as 5.0.

Comparisons of Different Transfer Methods Fig. 3 presents the results of various pairs of content and style images with different transfer methods.4 Similar to matching Gram matrices, which is equivalent to the poly method, the other three methods can also transfer satisfying styles from the specified style images.
1701.01036#19
1701.01036#21
1701.01036
[ "1603.01768" ]
1701.01036#21
Demystifying Neural Style Transfer
This empirically demonstrates the correctness of our interpretation of neural style transfer: style transfer is essentially a domain adaptation problem, which aligns the feature distributions. In particular, when the weight on the style loss becomes higher (namely, larger γ), the differences among the four methods become larger. This indicates that these methods implicitly capture different aspects of style, which has also been shown in Fig. 1. Since these methods have their own unique properties, they provide more choices for users to stylize the content image. For example, linear achieves results comparable to the other methods, yet requires lower computational complexity.

Fusion of Different Neural Style Transfer Methods Since we have several different neural style transfer methods, we propose to combine them to produce new transfer results. Fig. 4 demonstrates the fusion results of two combinations (linear + Gaussian and poly + BN). Each row presents the results with a different balance between the two methods. For example, Fig. 4(b) in the first two rows emphasizes BN more, and Fig. 4(f) emphasizes poly more. The results in the middle columns show the interpolation between these two methods. We can see that the styles of different methods are blended well using our approach.

5 Conclusion

Despite the great success of neural style transfer, the rationale behind it was far from clear.
1701.01036#20
1701.01036#22
1701.01036
[ "1603.01768" ]
1701.01036#22
Demystifying Neural Style Transfer
The vital "trick" for style transfer is to match the Gram matrices of the features in a layer of a CNN. Nevertheless, subsequent work on neural style transfer directly improves upon it without investigating it in depth. In this paper, we present a timely explanation and interpretation of it. First, we theoretically prove that matching the Gram matrices is equivalent to a specific Maximum Mean Discrepancy (MMD) process. Thus, the style information in neural style transfer is intrinsically represented by the distributions of activations in a CNN, and style transfer can be achieved by distribution alignment. Moreover, we exploit several other distribution alignment methods, and find that these methods all yield promising transfer results. Thus, we justify the claim that neural style transfer is essentially a special domain adaptation problem both theoretically and empirically. We believe this interpretation provides a new lens through which to re-examine the style transfer problem, and will inspire more exciting work in this research area.
1701.01036#21
1701.01036#23
1701.01036
[ "1603.01768" ]
1701.01036#23
Demystifying Neural Style Transfer
4More results can be found at http://www.icst.pku.edu.cn/struct/Projects/mmdstyle/result-1000/show-full.html

Acknowledgement

This work was supported by the National Natural Science Foundation of China under Contract 61472011.

# References

[Beijbom, 2012] Oscar Beijbom. Domain adaptations for computer vision applications. arXiv preprint arXiv:1211.4860, 2012.

[Champandard, 2016] Alex J Champandard.
1701.01036#22
1701.01036#24
1701.01036
[ "1603.01768" ]
1701.01036#24
Demystifying Neural Style Transfer
Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016.

[Chen et al., 2016] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. NIPS Workshop on Machine Learning Systems, 2016.

[Efros and Freeman, 2001] Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In SIGGRAPH, 2001.

[Efros and Leung, 1999] Alexei A Efros and Thomas K Leung. Texture synthesis by non-parametric sampling. In ICCV, 1999.

[Frigo et al., 2016] Oriel Frigo, Neus Sabater, Julie Delon, and Pierre Hellier. Split and match: Example-based adaptive patch sampling for unsupervised style transfer. In CVPR, 2016.

[Gatys et al., 2016] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.

[Gretton et al., 2012a] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012.

[Gretton et al., 2012b] Arthur Gretton, Dino Sejdinovic, Heiko Strathmann, Sivaraman Balakrishnan, Massimiliano Pontil, Kenji Fukumizu, and Bharath K Sriperumbudur.
1701.01036#23
1701.01036#25
1701.01036
[ "1603.01768" ]
1701.01036#25
Demystifying Neural Style Transfer
Optimal kernel choice for large-scale two-sample tests. In NIPS, 2012.

[Hertzmann et al., 2001] Aaron Hertzmann, Charles E Jacobs, Nuria Oliver, Brian Curless, and David H Salesin. Image analogies. In SIGGRAPH, 2001.

[Johnson et al., 2016] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.

[Kwatra et al., 2005] Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra.
1701.01036#24
1701.01036#26
1701.01036
[ "1603.01768" ]
1701.01036#26
Demystifying Neural Style Transfer
Texture optimization for example-based synthesis. ACM Transactions on Graphics, 24(3):795–802, 2005.

[Ledig et al., 2016] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.

[Li and Wand, 2016] Chuan Li and Michael Wand. Combining Markov random fields and convolutional neural networks for image synthesis. In CVPR, 2016.

[Li et al., 2017] Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Revisiting batch normalization for practical domain adaptation. ICLRW, 2017.

[Liang et al., 2001] Lin Liang, Ce Liu, Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum.
1701.01036#25
1701.01036#27
1701.01036
[ "1603.01768" ]
1701.01036#27
Demystifying Neural Style Transfer
Real-time texture synthesis by patch-based sampling. ACM Transactions on Graphics, 20(3):127–150, 2001.

[Long et al., 2015] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015.

[Long et al., 2016] Mingsheng Long, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. In NIPS, 2016.

[Pan and Yang, 2010] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.

[Patel et al., 2015] Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53–69, 2015.

[Ruder et al., 2016] Manuel Ruder, Alexey Dosovitskiy, and Thomas Brox.
1701.01036#26
1701.01036#28
1701.01036
[ "1603.01768" ]
1701.01036#28
Demystifying Neural Style Transfer
Artistic style transfer for videos. In GCPR, 2016.

[Selim et al., 2016] Ahmed Selim, Mohamed Elgharib, and Linda Doyle. Painting style transfer for head portraits using convolutional neural networks. ACM Transactions on Graphics, 35(4):129, 2016.

[Shih et al., 2014] YiChang Shih, Sylvain Paris, Connelly Barnes, William T Freeman, and Frédo Durand.
1701.01036#27
1701.01036#29
1701.01036
[ "1603.01768" ]
1701.01036#29
Demystifying Neural Style Transfer
Style transfer for headshot portraits. ACM Transactions on Graphics, 33(4):148, 2014.

[Simonyan and Zisserman, 2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

[Sun et al., 2016] Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. AAAI, 2016.

[Tzeng et al., 2014] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.

[Ulyanov et al., 2016] Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky.
1701.01036#28
1701.01036#30
1701.01036
[ "1603.01768" ]
1701.01036#30
Demystifying Neural Style Transfer
Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016.
1701.01036#29
1701.01036
[ "1603.01768" ]
1701.00299#0
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
# Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution

# Lanlan Liu [email protected] Jia Deng [email protected]

# University of Michigan 2260 Hayward St, Ann Arbor, MI, 48105, USA
1701.00299#1
1701.00299
[ "1511.06297" ]
1701.00299#1
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
# Abstract

We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward deep neural network that allows selective execution. Given an input, only a subset of D2NN neurons are executed, and the particular subset is determined by the D2NN itself. By pruning unnecessary computation depending on the input, D2NNs provide a way to improve computational efficiency. To achieve dynamic selective execution, a D2NN augments a feed-forward deep neural network (directed acyclic graph of differentiable modules) with controller modules. Each controller module is a sub-network whose output is a decision that controls whether other modules can execute. A D2NN is trained end to end. Both regular and controller modules in a D2NN are learnable and are jointly trained to optimize both accuracy and
1701.00299#0
1701.00299#2
1701.00299
[ "1511.06297" ]
1701.00299#2
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
efficiency. Such training is achieved by integrating backpropagation with reinforcement learning. With extensive experiments of various D2NN architectures on image classification tasks, we demonstrate that D2NNs are general and flexible, and can effectively optimize accuracy-efficiency trade-offs.

network whose output is a decision that controls whether other modules can execute. Fig. 1 (left) illustrates a simple D2NN with one control module (Q) and two regular modules (N1, N2), where the controller Q outputs a binary decision on whether module N2 executes. For certain inputs, the controller may decide that N2 is unnecessary and instead execute a dummy node D to save on computation. As an example application, this D2NN can be used for binary classification of images, where some images can be rapidly classified as negative after only a small amount of computation.

D2NNs are motivated by the need for computational efficiency, in particular, by the need to deploy deep networks on mobile devices and in data centers. Mobile devices are constrained by energy and power, limiting the amount of computation that can be executed. Data centers need energy efficiency to scale to higher throughput and to save operating cost. D2NNs provide a way to improve computational efficiency by selective execution, pruning unnecessary computation depending on the input. D2NNs also make it possible to use a bigger network under a computation budget by executing only a subset of the neurons each time.
1701.00299#1
1701.00299#3
1701.00299
[ "1511.06297" ]
1701.00299#3
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
# 1. Introduction

This paper introduces Dynamic Deep Neural Networks (D2NN), a new type of feed-forward deep neural network (DNN) that allows selective execution. That is, given an input, only a subset of neurons are executed, and the particular subset is determined by the network itself based on the particular input. In other words, the amount of computation and the computation sequence are dynamic based on the input. This is different from standard feed-forward networks that always execute the same computation sequence regardless of input.

A D2NN is a feed-forward deep neural network (directed acyclic graph of differentiable modules) augmented with one or more control modules. A control module is a sub-

A D2NN is trained end to end. That is, regular modules and control modules are jointly trained to optimize both accuracy and
1701.00299#2
1701.00299#4
1701.00299
[ "1511.06297" ]
1701.00299#4
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
efficiency. We achieve such training by integrating backpropagation with reinforcement learning, necessitated by the non-differentiability of control modules.

Compared to prior work that optimizes computational efficiency in computer vision and machine learning, our work is distinctive in four aspects: (1) the decisions on selective execution are part of the network inference and are learned end to end together with the rest of the network, as opposed to hand-designed or separately learned [23, 29, 2]; (2) D2NNs allow more flexible network architectures and execution sequences including parallel paths, as opposed to architectures with less variance [12, 27]; (3) our D2NNs directly optimize an arbitrary user-defined efficiency metric, while previous work has no such flexibility because it improves efficiency indirectly through sparsity
1701.00299#3
1701.00299#5
1701.00299
[ "1511.06297" ]
1701.00299#5
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Figure 1. Two D2NN examples. Input and output nodes are drawn as circles with the output nodes shaded. Function nodes are drawn as rectangles (regular nodes) or diamonds (control nodes). Dummy nodes are shaded. Data edges are drawn as solid arrows and control edges as dashed arrows. A data edge with a user-defined default value is decorated with a circle.

constraints [5, 7, 27].
1701.00299#4
1701.00299#6
1701.00299
[ "1511.06297" ]
1701.00299#6
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
(4) our method optimizes metrics such as the F-score that do not decompose over individual examples. This is an issue not addressed in prior work. We will elaborate on these differences in the Related Work section of this paper.

We perform extensive experiments to validate our D2NN algorithms. We evaluate various D2NN architectures on several tasks. They demonstrate that D2NNs are general, flexible, and can effectively improve computational efficiency.

Our main contribution is the D2NN framework that allows a user to augment a static feed-forward network with control modules to achieve dynamic selective execution. We show that D2NNs allow a wide variety of topologies while sharing a unified training algorithm. To our knowledge, D2NN is the first single framework that can support various qualitatively different efficient network designs, including cascade designs and coarse-to-fine designs. Our D2NN framework thus provides a new tool for designing and training computationally efficient neural network models.

# 2. Related work

Input-dependent execution has been widely used in computer vision, from cascaded detectors [31, 15] to hierarchical
1701.00299#5
1701.00299#7
1701.00299
[ "1511.06297" ]
1701.00299#7
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
classification [10, 6]. The key difference of our work from prior work is that we jointly learn both visual features and control decisions end to end, whereas prior work either hand-designs features and control decisions (e.g. thresholding), or learns them separately.

In the context of deep networks, two lines of prior work have attempted to improve computational efficiency. One line of work tries to eliminate redundancy in data or computation in a way that is input-independent. The methods include pruning networks [18, 32, 3], approximating layers with simpler functions [13, 33], and using number representations of limited precision [8, 17]. The other line of work exploits the fact that not all inputs require the same amount of computation, and explores input-dependent execution of DNNs. Our work belongs to the second line, and we will contrast our work mainly with that line. In fact, our input-dependent D2NN can be combined with input-independent methods to achieve even better
1701.00299#6
1701.00299#8
1701.00299
[ "1511.06297" ]
1701.00299#8
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
efficiency. Among methods leveraging input-dependent execution, some use pre-defined execution-control policies. For example, cascade methods [23, 29] rely on manually-selected thresholds to control execution; Dynamic Capacity Network [2] designs a way to directly calculate a saliency map for execution control. Our D2NNs, instead, are fully learnable; the execution-control policies of D2NNs do not require manual design and are learned together with the rest of the network.

Our work is closely related to conditional computation methods [5, 7, 27], which activate part of a network depending on input. They learn policies to encourage sparse neural activations [5] or sparse expert networks [27]. Our work differs from these methods in several ways. First, our control policies are learned to directly optimize arbitrary user-defined global performance metrics, whereas conditional computation methods have only learned policies that encourage sparsity. In addition, D2NNs allow more flexible control topologies. For example, in [5], a neuron (or block of neurons) is the unit controllee of their control policies; in [27], an expert is the unit controllee.
1701.00299#7
1701.00299#9
1701.00299
[ "1511.06297" ]
1701.00299#9
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Compared to their fixed types of controllees, our control modules can be added at any point of the network and can control arbitrary sub-networks. Also, various policy parametrizations can be used in the same D2NN framework. We show a variety of parameterizations (as different controller networks) in our D2NN examples, whereas previous conditional computation works have used fixed formats: for example, control policies are parametrized as the sigmoid or softmax of an affine transformation of neurons or inputs [5, 27].

Our work is also related to attention models [11, 25, 16]. Note that attention models can be categorized as hard attention [25, 4, 2] versus soft [16, 28]. Hard attention models only process the salient parts and discard others (e.g. processing only a subset of image subwindows); in contrast, soft attention models process all parts but up-weight the salient parts. Thus only hard attention models perform input-dependent execution as D2NNs do. However, hard attention models differ from D2NNs because hard attention models have typically involved only one attention module whereas D2NNs can have multiple attention (controller) modules: conventional hard attention models are "
1701.00299#8
1701.00299#10
1701.00299
[ "1511.06297" ]
1701.00299#10
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
single-threaded" whereas a D2NN can be "multi-threaded". In addition, prior work on hard attention models has not directly optimized for accuracy-efficiency trade-offs. It is also worth noting that many mixture-of-experts methods [20, 21, 14] also involve soft attention by softly gating experts: they process all experts but only up-weight useful experts, thus saving no computation.

D2NNs also bear some similarity to Deep Sequential Neural Networks (DSNN) [12] in terms of input-dependent execution. However, it is important to note that although DSNNs'
1701.00299#9
1701.00299#11
1701.00299
[ "1511.06297" ]
1701.00299#11
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
structures can in principle be used to optimize accuracy-efficiency trade-offs, DSNNs do not target the task of improving efficiency and propose no learning method to optimize efficiency. And a method that effectively optimizes the efficiency-accuracy trade-off is non-trivial, as shown in the following sections. Also, DSNNs are single-threaded: a DSNN always activates exactly one path in the computation graph, whereas for D2NNs it is possible to have multiple paths or even the entire graph activated.
1701.00299#10
1701.00299#12
1701.00299
[ "1511.06297" ]
1701.00299#12
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
# 3. Definition and Semantics of D2NNs

Here we precisely define a D2NN and describe its semantics, i.e. how a D2NN performs inference.

D2NN definition A D2NN is defined as a directed acyclic graph (DAG) without duplicated edges. Each node can be one of three types: input nodes, output nodes, and function nodes. An input or output node represents an input or output of the network (e.g. a vector). A function node represents a (differentiable) function that maps a vector to another vector. Each edge can be one of two types: data edges and control edges. A data edge represents a vector sent from one node to another, the same as in a conventional DNN. A control edge represents a control signal, a scalar, sent from one node to another. A data edge can optionally have a user-
1701.00299#11
1701.00299#13
1701.00299
[ "1511.06297" ]
1701.00299#13
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
defined "default value", representing the output that will still be sent even if the function node does not execute. For simplicity, we have a few restrictions on valid D2NNs: (1) the outgoing edges from a node are either all data edges or all control edges (i.e. they cannot be a mix of data edges and control edges); (2) if a node has an incoming control edge, it cannot have an outgoing control edge. Note that these two simplicity constraints do not in any way restrict the expressiveness of a D2NN. For example, to achieve the effect of a node with a mix of outgoing data edges and control edges, we can just feed its data output to a new node with outgoing control edges and let the new node be an identity function.

We call a function node a control node if its outgoing edges are control edges. We call a function node a regular node if its outgoing edges are data edges. Note that it is possible for a function node to take no data input and output a constant value. We call such nodes "dummy" nodes. We will see that the "default values" and "dummy" nodes can significantly extend the flexibility of D2NNs.
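The following sketch (our own illustration in Python, not the authors' Torch implementation; every class and field name here is hypothetical) shows one way the node and edge types just described could be represented:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Node:
    name: str
    kind: str                         # "input", "output", "regular", "control", or "dummy"
    fn: Optional[Callable] = None     # differentiable sub-network for function nodes

@dataclass
class Edge:
    src: str
    dst: str
    kind: str                         # "data" (carries a vector) or "control" (carries a scalar score)
    default: Optional[object] = None  # optional user-defined default value; data edges only

@dataclass
class D2NNGraph:
    nodes: List[Node] = field(default_factory=list)
    edges: List[Edge] = field(default_factory=list)

    def controllees(self, control_node: str) -> List[str]:
        # Nodes gated by the given control node's outgoing control edges.
        return [e.dst for e in self.edges if e.src == control_node and e.kind == "control"]
```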
1701.00299#12
1701.00299#14
1701.00299
[ "1511.06297" ]
1701.00299#14
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Hereafter we may also call function nodes "subnetworks" or "modules" and will use these terms interchangeably. Fig. 1 illustrates simple D2NNs with all kinds of nodes and edges.

D2NN Semantics Given a D2NN, we perform inference by traversing the graph starting from the input nodes. Because a D2NN is a DAG, we can execute each node in a topological order (the parents of a node are ordered before it; we take both data edges and control edges into consideration), the same as conventional DNNs except that the control nodes can cause the computation of some nodes to be skipped. After we execute a control node, it outputs a set of control scores, one for each of its outgoing control edges. The control edge with the highest score is
1701.00299#13
1701.00299#15
1701.00299
[ "1511.06297" ]
1701.00299#15
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
"activated", meaning that the node being controlled is allowed to execute. The rest of the control edges are not activated, and their controllees are not allowed to execute. For example, in Fig. 1 (right), the node Q controls N2 and N3. Either N2 or N3 will execute depending on which has the higher control score.

Although the main idea of the inference (skipping nodes) seems simple, due to D2NNs' flexibility the inference topology can be far more complicated. For example, in the case of a node with multiple incoming control edges (i.e. controlled by multiple controllers), it should execute if any of the control edges are activated. Also, when the execution of a node is skipped, its output will be either the default value or null. If the output is the default value, subsequent execution will continue as usual. If the output is null, any downstream nodes that depend on this output will in turn skip execution and have a null output unless a default value has been set. This "null" effect will propagate to the rest of the graph. Fig. 1 (right) shows a slightly more complicated example with default values: if N2 skips execution and outputs null, so will N4 and N6. But N8 will execute regardless because its input data edge has a default value. In our Experiments Section, we will demonstrate more sophisticated D2NNs.

We can summarize the semantics of D2NNs as follows: a D2NN executes the same way as a conventional DNN except that there are control edges that can cause some nodes to be skipped. A control edge is active if and only if it has the highest score among all outgoing control edges from a node. A node is skipped if it has incoming control edges and none of them is active, or if one of its inputs is null. If a node is skipped, its output will be either null or a user-defined default value. A null will cause downstream nodes to be skipped whereas a default value will not.

A D2NN can also be thought of as a program with conditional statements. Each data edge is equivalent to a variable that is initialized to either a default value or null. Executing a function node is equivalent to executing a command assigning the output of the function to the variable. A control edge is equivalent to a boolean variable initialized to
1701.00299#14
1701.00299#16
1701.00299
[ "1511.06297" ]
1701.00299#16
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
This â nullâ effect will propagate to the rest of the graph. Fig. 1 (right) shows a slightly more complicated example with default values: if N2 skips execution and out- puts null, so will N4 and N6. But N8 will execute regardless because its input data edge has a default value. In our Ex- periments Section, we will demonstrate more sophisticated D2NNs. We can summarize the semantics of D2NNs as follows: a D2NN executes the same way as a conventional DNN ex- cept that there are control edges that can cause some nodes to be skipped. A control edge is active if and only if it has the highest score among all outgoing control edges from a node. A node is skipped if it has incoming control edges and none of them is active, or if one of its inputs is null. If a node is skipped, its output will be either null or a user- deï¬ ned default value. A null will cause downstream nodes to be skipped whereas a default value will not. A D2NN can also be thought of as a program with condi- tional statements. Each data edge is equivalent to a variable that is initialized to either a default value or null. Execut- ing a function node is equivalent to executing a command assigning the output of the function to the variable. A con- trol edge is equivalent to a boolean variable initialized to
1701.00299#15
1701.00299#17
1701.00299
[ "1511.06297" ]
1701.00299#17
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
False. A control node is equivalent to a â switch-caseâ state- ment that computes a score for each of the boolean variables and sets the one with the largest score to True. Checking the conditions to determine whether to execute a function is equivalent to enclosing the function with an â if-thenâ state- ment. A conventional DNN is a program with only func- tion calls and variable assignments without any conditional statements, whereas a D2NN introduces conditional state- ments with the conditions themselves generated by learn- able functions. # 4. D2NN Learning Due to the control nodes, a D2NN cannot be trained the same way as a conventional DNN. The output of the net- work cannot be expressed as a differentiable function of all trainable parameters, especially those in the control nodes. As a result, backpropagation cannot be directly applied.
1701.00299#16
1701.00299#18
1701.00299
[ "1511.06297" ]
1701.00299#18
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
The main difï¬ culty lies in the control nodes, whose out- puts are discretized into control decisions. This is similar to the situation with hard attention models [25, 4], which use reinforcement learning. Here we adopt the same general strategy. Learning a Single Control Node For simplicity of expo- sition we start with a special case where there is only one control node. We further assume that all parameters except those of this control node have been learned and ï¬ xed. That is, the goal is to learn the parameters of the control node to maximize a user-deï¬ ned reward, which in our case is a combination of accuracy and efï¬ ciency. This results in a classical reinforcement learning setting: learning a control policy to take actions so as to maximize reward. We base our learning method on Q-learning [26, 30]. We let each outgoing control edge represent an action, and let the con- trol node approximate the action-value (Q) function, which is the expected return of an action given the current state (the input to the control node). It is worth noting that unlike many prior works that use deep reinforcement learning, a D2NN is not recurrent. For each input to the network (e.g. an image), each control node only executes once. And the decisions of a control node completely depend on the current input. As a result, an ac- tion taken on one input has no effect on another input. That is, our reinforcement learning task consists of only one time step. Our one time-step reinforcement learning task can also be seen as a contextual bandit problem, where the context vector is the input to the control module, and the arms are the possible action outputs of the module. The one time- step setting simpliï¬ es our Q-learning objective to that of the following regression task: L = (Q(s, a) â r)2, (1) where r is a user-deï¬ ned reward, a is an action, s is the in- put to control node, and Q is computed by the control node. As we can see, training a control node here is the same as training a network to predict the reward for each action un- der an L2 loss.
1701.00299#17
1701.00299#19
1701.00299
[ "1511.06297" ]
1701.00299#19
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
We use mini-batch gradient descent; for each training example in a mini-batch, we pick the action with the largest Q, execute the rest of the network, observe a reward, and perform backpropagation using the L2 loss in Eqn. 1. During training we also perform e-greedy exploration â instead of always choosing the action with the best Q value, we choose a random action with probability «. The hyper- parameter ¢ is initialized to 1 and decreases over time. The reward r is user defined. Since our goal is to optimize the trade-off between accuracy and efficiency, in our experi- ments we define the reward as a combination of an accuracy metric A (for example, F-score) and an efficiency metric (for example, the inverse of the number of multiplications), that is, 1A + (1 â \)E where X balances the trade-off. Mini-Bags for Set-Based Metrics Our training algorithm so far has deï¬ ned the state as a single training example, i.e., the control node takes actions and observes rewards on each training example independent of others. This setup, however, introduces a difï¬ culty for optimizing for accuracy metrics that cannot be decomposed over individual exam- ples. Consider precision in the context of binary classiï¬ ca- tion. Given predictions on a set of examples and the ground truth, precision is deï¬ ned as the proportion of true positives among the predicted positives. Although precision can be deï¬ ned on a single example, precision on a set of examples does not generally equal the average of the precisions of individual examples. In other words, precision as a metric does not decompose over individual examples and can only be computed using a set of examples jointly. This is differ- ent from decomposable metrics such as error rate, which can be computed as the average of the error rates of individ- ual examples. If we use precision as our accuracy metric, it is not clear how to deï¬ ne a reward independently for each example such that maximizing this reward independently for each example would optimize the overall precision. In general, for many metrics, including precision and F-score, we cannot compute them on individual examples and aver- age the results. Instead, we must compute them using a set of examples as a whole. We call such metrics â set-based metricsâ
1701.00299#18
1701.00299#20
1701.00299
[ "1511.06297" ]
1701.00299#20
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
. Our learning setup so far is ill-equipped for such metrics because a reward is deï¬ ned on each example inde- pendently. To address this issue we generalize the deï¬ nition of a state from a single input to a set of inputs. We deï¬ ne such a set of inputs as a mini-bag. With a mini-bag of images, any set-based metric can be computed and can be used to di- rectly deï¬ ne a reward. Note that a mini-bag is different from a mini-batch which is commonly used for batch updates in gradient decent methods.
1701.00299#19
1701.00299#21
1701.00299
[ "1511.06297" ]
1701.00299#21
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Actually in our training, we cal- culate gradients using a mini-batch of mini-bags. Now, an action on a mini-bag s = (s1, . . . , sm) is now a joint action a = (a1, . . . , am) consisting of individual actions ai on ex- ample si. Let Q(s, a) be the joint action-value function on the mini-bag s and the joint action a. We constrain the para- metric form of Q to decompose over individual examples: Q= 35 Wi,4:), (2) where Q(si, ai) is a score given by the control node when choosing the action ai for example si.
1701.00299#20
1701.00299#22
1701.00299
[ "1511.06297" ]
1701.00299#22
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
We then deï¬ ne our new learning objective on a mini-bag of size m as m HY Qs, ai))â , () =(r-â Q (s,a))? where r is the reward observed by choosing the joint action a on mini-bag s. That is, the control node predicts an action- value for each example such that their sum approximates the reward deï¬ ned on the whole mini-bag. It is worth noting that the decomposition of Q into sums the best joint action aâ (Eqn. 2) enjoys a nice property: under the joint action-value Q(s, a) is simply the concate- nation of the best actions for individual examples because maximizing at = arg max(Q(s, a)) = argmax() | Q(si,a:)) (4) i=1 is equivalent to maximizing the individual summands: aâ i = arg max ai Q(si, ai), i = 1, 2...m. (5)
1701.00299#21
1701.00299#23
1701.00299
[ "1511.06297" ]
1701.00299#23
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
That is, during test time we still perform inference on each example independently. Another implication of the mini-bag formulation is: râ De Asy aj)) y) 2A(si a) â , (6) Ox; a= where xi is the output of any internal neuron for example i in the mini-bag. This shows that there is no change to the implementation of backpropagation except that we scale the gradient using the difference between the mini-bag Q-value Q and reward r. Joint Training of All Nodes We have described how to train a single control node. We now describe how to extend this strategy to all nodes including additional control nodes as well as regular nodes. If a D2NN has multiple control nodes, we simply train them together. For each mini-bag, we perform backpropagation for multiple losses together.
1701.00299#22
1701.00299#24
1701.00299
[ "1511.06297" ]
1701.00299#24
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Speciï¬ cally, we perform inference using the current param- eters, observe a reward for the whole network, and then use the same reward (which is a result of the actions of all con- trol nodes) to backpropagate for each control node. For regular nodes, we can place losses on them the same as on conventional DNNs. And we perform backpropaga- tion on these losses together with the control nodes. The implementation of backpropagation is the same as conven- tional DNNs except that each training example have a dif- ferent network topology (execution sequence). And if a node is skipped for a particular training example, then the node does not have a gradient from the example. It is worth noting that our D2NN framework allows arbi- trary losses to be used for regular nodes.
1701.00299#23
1701.00299#25
1701.00299
[ "1511.06297" ]
1701.00299#25
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
For example, for classiï¬ cation we can use the cross-entropy loss on a regu- lar node. One important detail is that the losses on regular nodes need to be properly weighted against the losses on the control nodes; otherwise the regular losses may dominate, rendering the control nodes ineffective. One way to elimi- nate this issue is to use Q-learning losses on regular nodes as well, i.e. treating the outputs of a regular node as action- values. For example, instead of using the cross-entropy loss on the classiï¬ cation scores, we treat the classiï¬ cation scores as action-valuesâ an estimated reward of each classiï¬ cation decision. This way Q-learning is applied to all nodes in a uniï¬ ed way and no additional hyperparameters are needed to balance different kinds of losses. In our experiments un- less otherwise noted we adopt this uniï¬ ed approach. # 5. Experiments
1701.00299#24
1701.00299#26
1701.00299
[ "1511.06297" ]
1701.00299#26
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
We here demonstrate four D2NN structures motivated by different demands of efï¬ cient network design to show its ï¬ exibility and effectiveness, and compare D2NNsâ ability to optimize efï¬ ciency-accuracy trade-offs with prior work. We implement the D2NN framework in Torch. Torch provides functions to specify the subnetwork architecture inside a function node. Our framework handles the high- level communication and loss propagation. High-Low Capacity D2NN Our ï¬ rst experiment is with a simple D2NN architecture that we call â high-low capacity D2NNâ
1701.00299#25
1701.00299#27
1701.00299
[ "1511.06297" ]
1701.00299#27
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
. It is motivated by that we can save computation by choosing a low-capacity subnetwork for easy examples. It consists of a single control nodes (Q) and three regular nodes (N1-N3) as in Fig. 3a). The control node Q chooses between a high-capacity N2 and a low-capacity N3; the N3 has fewer neurons and uses less computation. The control node itself has orders of magnitude fewer computation than regular nodes (this is true for all D2NNs demonstrated). We test this hypothesis using a binary classiï¬ cation task in which the network classiï¬
1701.00299#26
1701.00299#28
1701.00299
[ "1511.06297" ]
1701.00299#28
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
es an input image as face or non-face. We use the Labeled Faces in the Wild [19, 22] dataset. Speciï¬ cally, we use the 13k ground truth face crops (112à 112 pixels) as positive examples and randomly sampled 130k background crops (with an intersection over union less than 0.3) as negative examples. We hold out 11k 8 @ a) High-Low (LFW-B) b) Cascade (LFW-B) c) Chain (LFW-B) d) Hierarchy (ILSVRC-10) 0.8 1 d 1 a a 08 2 8 2 G06 0.6 807 S09 â â D2NN 3 0.4 Y- 206 â â D2NN L 8 Q2 â â D2NN 0.2 0.5 â *= static NNs LJ o* 4_NN 0 04 0.8 0 0 0.20.40.60.8 1 0 0.20.40.60.8 1 0 0.20.40.60.8 1 0.2040.60.8 1 cost cost cost cost Figure 2. The accuracy-cost or fscore-cost curves of various D2NN architectures, as well as conventional DNN baselines consisting of only regular nodes. a) High-Low b) Cascade N1 N2 A -d c) Chain d) Hierarchy Figure 3. Four different D2NN architectures. images for validation and 22k for testing. We refer to this dataset as LFW-B and use it as a testbed to validate the ef- fectiveness of our new D2NN framework. To evaluate performace we measure accuracy using the F1 score, a better metric than percentage of correct pre- dictions for an unbalanced dataset. We measure computa- tional cost using the number of multiplications following prior work [2, 27] and for reproductivity.
1701.00299#27
1701.00299#29
1701.00299
[ "1511.06297" ]
1701.00299#29
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Speciï¬ cally, we use the number of multiplications (control nodes included), normalized by a conventional DNN consisting of N1 and N2, that is, the high-capacity execution path. Note that our D2NNs also allow to use other efï¬ ciency measurement such as run-time, latency. During training we deï¬ ne the Q-learning reward as a lin- ear combination of accuracy A and efï¬ ciency E (negative cost): r = λA + (1 â λ)E where λ â [0, 1]. We train instances of high-low capacity D2NNs using different λâ
1701.00299#28
1701.00299#30
1701.00299
[ "1511.06297" ]
1701.00299#30
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
s. As λ increases, the learned D2NN trades off efï¬ ciency for accuracy. Fig. 2a) plots the accuracy-cost curve on the test set; it also plots the accuracy and efï¬ ciency achieved by a conventional DNN with only the high capacity path N1+N2 (High NN) and a conventional DNN with only the low ca- pacity path N1+N3 (Low NN). As we can see, the D2NN achieves a trade-off curve close to the upperbound: there are points on the curve that are as fast as the low-capacity node and as accurate as the high-capacity node. Fig. 4(left) plots the distribution of ex- amples going through different execution paths. It shows that as λ increases, accuracy becomes more important and more examples go through the high-capacity node.
1701.00299#29
1701.00299#31
1701.00299
[ "1511.06297" ]
1701.00299#31
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
These results suggest that our learning algorithm is effective for networks with a single control node. With inference efï¬ ciency improved, we also observe that for training, a D2NN typically takes 2-4 times more iter- ations to converge than a DNN, depending on particular model capacities, conï¬ gurations and trade-offs. Cascade D2NN We next experiment with a more sophisti- cated design that we call a â cascade D2NNâ (Fig. 3b). It is inspired by the standard cascade design commonly used in computer vision. The intuition is that many negative ex- amples may be rejected early using simple features. The cascade D2NN consists of seven regular nodes (N1-N7) and three control nodes (Q1-Q3). N1-N7 form 4 cascade stages (i.e. 4 conventional DNNs, from small to large) of the cas- cade: N1+N2, N3+N4, N5+N6, N7. Each control node de- cides whether to execute the next cascade stage or not. We evaluate the network on the same LFW-B face clas- siï¬ cation task using the same evaluation protocol as in the high-low capacity D2NN. Fig. 2b) plots the accuracy- cost tradeoff curve for the D2NN. Also included are the accuracy-cost curve (â static NNsâ ) achieved by the four conventional DNNs as baselines, each trained with a cross- entropy loss. We can see that the cascade D2NN can achieve a close to optimal trade-off, reducing computation signiï¬ - cantly with negligible loss of accuracy. In addition, we can see that our D2NN curve outperforms the trade-off curve achieved by varying the design and capacity of static con- ventional networks. This result demonstrates that our al- gorithm is successful for jointly training multiple control nodes. For a cascade, wall time of inference is often an impor- tant consideration. Thus we also measure the inference wall time (excluding data loading with 5 runs) in this Cascade D2NN.
1701.00299#30
1701.00299#32
1701.00299
[ "1511.06297" ]
1701.00299#32
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
We ï¬ nd that a 82% wall-time cost corresponds to a 53% number-of-multiplication cost; and a 95% corresponds to a 70%. Deï¬ ning reward directly using wall time can fur- ther reduce the gap. Chain D2NN Our third design is a â Chain D2NNâ (Fig. 3c). The network is shaped as a chain, where each link consists of a control node selecting between two (or more) regular nodes. In other words, we perform a sequence of vector-to- vector transforms; for each transform we choose between several subnetworks. One scenario that we can use this D2NN is that the conï¬ guration of a conventional DNN (e.g. number of layers, ï¬ lter sizes) cannot be fully decided. Also, it can simulate shortcuts between any two layers by using an identity function as one of the transforms. This chain D2NN is qualitatively different from other D2NNs with a tree-shaped data graph because it allows two divergent data paths to merge again. That is, the number of possible exe- cution paths can be exponential to the number of nodes. In Fig. 3c), the ï¬ rst link is that Q1 chooses between a low-capacity N2 and a high-capacity N3. If one of them is chosen, the other will output a default value zero. The node N4 adds the outputs of N2 and N3 together.
1701.00299#31
1701.00299#33
1701.00299
[ "1511.06297" ]
1701.00299#33
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Fig. 2c) plots the accuracy-cost curve on the LFW-B task. The two baselines are: a conventional DNN with the lowest-capacity path (N1-N2-N5-N8-N10), and a conventional DNN with the highest-capacity path (N1-N3-N6-N9-N10). The cost is measured as the number of multiplications, normalized by the cost of the high-capacity baseline. Fig. 2c) shows that the chain D2NN achieves a trade-off curve close to optimal and can speed up computation significantly with little accuracy loss. This shows that our learning algorithm is effective for a D2NN whose data graph is a general DAG instead of a tree.

Hierarchical D2NN In this experiment we design a D2NN for hierarchical multiclass classifi
1701.00299#32
1701.00299#34
1701.00299
[ "1511.06297" ]