Dataset schema (per-row fields and their length statistics):

| field | type | length (min–max) |
|---|---|---|
| id | string | 12–15 |
| title | string | 8–162 |
| content | string | 1–17.6k |
| prechunk_id | string | 0–15 |
| postchunk_id | string | 0–15 |
| arxiv_id | string | 10–10 |
| references | list | 1–1 |
1611.01576#24
Quasi-Recurrent Neural Networks
# 5 CONCLUSION

Intuitively, many aspects of the semantics of long sequences are context-invariant and can be computed in parallel (e.g., convolutionally), but some aspects require long-distance context and must be computed recurrently. Many existing neural network architectures either fail to take advantage of the contextual information or fail to take advantage of the parallelism. QRNNs exploit both parallelism and context, exhibiting advantages from both convolutional and recurrent neural networks. QRNNs have better predictive accuracy than LSTM-based models of equal hidden size, even though they use fewer parameters and run substantially faster. Our experiments show that the speed and accuracy advantages remain consistent across tasks and at both word and character levels. Extensions to both CNNs and RNNs are often directly applicable to the QRNN, while the model's hidden states are more interpretable than those of other recurrent architectures as its channels maintain their independence across timesteps. We believe that QRNNs can serve as a building block for long-sequence tasks that were previously impractical with traditional RNNs.
1611.01576#23
1611.01576#25
1611.01576
[ "1605.07725" ]
1611.01576#25
Quasi-Recurrent Neural Networks
# REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015. David Balduzzi and Muhammad Ghifary. Strongly-typed recurrent neural networks. In ICML, 2016. James Bradbury and Richard Socher. MetaMind neural machine translation system for WMT 2016. In Proceedings of the First Conference on Machine Translation, Berlin, Germany. Association for Computational Linguistics, 2016.
1611.01576#24
1611.01576#26
1611.01576
[ "1605.07725" ]
1611.01576#26
Quasi-Recurrent Neural Networks
Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, 2016. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, Nov 1997. ISSN 0899-7667. Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016. Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058, 2014. Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M.
1611.01576#25
1611.01576#27
1611.01576
[ "1605.07725" ]
1611.01576#27
Quasi-Recurrent Neural Networks
Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2016. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al.
1611.01576#26
1611.01576#28
1611.01576
[ "1605.07725" ]
1611.01576#28
Quasi-Recurrent Neural Networks
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations. arXiv preprint arXiv:1606.01305, 2016. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In ICML, 2016. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016. Shayne Longpre, Sabeek Pradhan, Caiming Xiong, and Richard Socher.
1611.01576#27
1611.01576#29
1611.01576
[ "1605.07725" ]
1611.01576#29
Quasi-Recurrent Neural Networks
A way out of the odyssey: Analyzing and combining recent insights for LSTMs. Submitted to ICLR, 2016. M. T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015. Andrew L Maas, Andrew Y Ng, and Christopher Potts. Multi-dimensional sentiment analysis with learned representations. Technical report, 2011. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016. Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335, 2014. Tomas Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur.
1611.01576#28
1611.01576#30
1611.01576
[ "1605.07725" ]
1611.01576#30
Quasi-Recurrent Neural Networks
Recurrent neural network based language model. In INTERSPEECH, 2010. Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Virtual adversarial training for semi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016. Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
1611.01576#29
1611.01576#31
1611.01576
[ "1605.07725" ]
1611.01576#31
Quasi-Recurrent Neural Networks
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In ICLR, 2016. Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012. Seiya Tokui, Kenta Oono, and Shohei Hido. Chainer: A next-generation open source framework for deep learning. Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016. Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, 2012. Xin Wang, Yuanchao Liu, Chengjie Sun, Baoxun Wang, and Xiaolong Wang. Predicting polarities of tweets by composing word embeddings with long short-term memory. In ACL, 2015.
1611.01576#30
1611.01576#32
1611.01576
[ "1605.07725" ]
1611.01576#32
Quasi-Recurrent Neural Networks
Sam Wiseman and Alexander M Rush. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960, 2016. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. Yijun Xiao and Kyunghyun Cho.
1611.01576#31
1611.01576#33
1611.01576
[ "1605.07725" ]
1611.01576#33
Quasi-Recurrent Neural Networks
Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367, 2016. Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014. Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015. Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis Lau. A C-LSTM neural network for text classification. arXiv preprint arXiv:1511.08630, 2015.
1611.01576#32
1611.01576#34
1611.01576
[ "1605.07725" ]
1611.01576#34
Quasi-Recurrent Neural Networks
# APPENDIX

# BEAM SEARCH RANKING CRITERION

The modified log-probability ranking criterion we used in beam search for translation experiments is:

\log(\tilde{P}_{\mathrm{cand}}) = \frac{T_{\mathrm{trg}} + \alpha}{T_{\mathrm{trg}}\,(1 + \alpha T)} \sum_{i=1}^{T} \log\big(p(w_i \mid w_1 \ldots w_{i-1})\big), \qquad (9)

where α is a length normalization parameter (Wu et al., 2016), w_i is the ith output character, and T_trg is a "target length" equal to the source sentence length plus five characters. This reduces at α = 0 to ordinary beam search with probabilities:

\log(P_{\mathrm{cand}}) = \sum_{i=1}^{T} \log\big(p(w_i \mid w_1 \ldots w_{i-1})\big), \qquad (10)

and at α = 1 to beam search with probabilities normalized by length (up to the target length):

\log(P_{\mathrm{cand}}) \approx \frac{1}{T} \sum_{i=1}^{T} \log\big(p(w_i \mid w_1 \ldots w_{i-1})\big). \qquad (11)

Conveniently, this ranking criterion can be computed at intermediate beam-search timesteps, obviating the need to apply a separate reranking on complete hypotheses.
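For concreteness, the sketch below scores beam hypotheses with this criterion as reconstructed above; the exact form of the length factor and the helper name `ranking_score` are assumptions, not part of the original paper.

```python
from typing import Sequence

def ranking_score(log_probs: Sequence[float], src_len: int, alpha: float = 1.0) -> float:
    """Score one beam hypothesis from its per-character log-probabilities.

    T_trg is the "target length" (source length plus five characters).
    alpha = 0 recovers the plain sum of log-probabilities (Eq. 10);
    alpha = 1 gives an approximately length-normalised score (Eq. 11).
    """
    T = len(log_probs)
    T_trg = src_len + 5
    total = sum(log_probs)                                   # sum_i log p(w_i | w_<i)
    factor = (T_trg + alpha) / (T_trg * (1.0 + alpha * T))   # assumed length factor from Eq. (9)
    return factor * total

# Because the score depends only on T and the running sum of log-probabilities,
# it can be evaluated at intermediate beam-search timesteps, without reranking
# complete hypotheses.
candidates = [[-0.1, -0.2, -0.3], [-0.05] * 10]
best = max(candidates, key=lambda lp: ranking_score(lp, src_len=8))
```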
1611.01576#33
1611.01576
[ "1605.07725" ]
1611.01600#0
Loss-aware Binarization of Deep Networks
Published as a conference paper at ICLR 2017

# LOSS-AWARE BINARIZATION OF DEEP NETWORKS

Lu Hou, Quanming Yao, James T. Kwok
Department of Computer Science and Engineering
Hong Kong University of Science and Technology
Clear Water Bay, Hong Kong
{lhouab,qyaoaa,jamesk}@cse.ust.hk

# ABSTRACT
1611.01600#1
1611.01600
[ "1605.04711" ]
1611.01600#1
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications with additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximations and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer.
1611.01600#0
1611.01600#2
1611.01600
[ "1605.04711" ]
1611.01600#2
Loss-aware Binarization of Deep Networks
Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.

# INTRODUCTION

Recently, deep neural networks have achieved state-of-the-art performance in various tasks such as speech recognition, visual object recognition, and image classification (LeCun et al., 2015). Though powerful, the large number of network weights leads to space and time inefficiencies in both training and storage. For instance, the popular AlexNet, VGG-16 and ResNet-18 all require hundreds of megabytes to store, and billions of high-precision operations for classification. This limits their use in embedded systems, smart phones and other portable devices that are now everywhere. To alleviate this problem, a number of approaches have been recently proposed.
1611.01600#1
1611.01600#3
1611.01600
[ "1605.04711" ]
1611.01600#3
Loss-aware Binarization of Deep Networks
One attempt first trains a neural network and then compresses it (Han et al., 2016; Kim et al., 2016). Instead of this two-step approach, it is more desirable to train and compress the network simultaneously. Example approaches include tensorizing (Novikov et al., 2015), parameter quantization (Gong et al., 2014), and binarization (Courbariaux et al., 2015; Hubara et al., 2016; Rastegari et al., 2016). In particular, binarization only requires one bit for each weight value.
1611.01600#2
1611.01600#4
1611.01600
[ "1605.04711" ]
1611.01600#4
Loss-aware Binarization of Deep Networks
This can significantly reduce storage, and also eliminates most multiplications during the forward pass. Courbariaux et al. (2015) pioneered neural network binarization with the BinaryConnect algorithm, which achieves state-of-the-art results on many classification tasks. Besides binarizing the weights, Hubara et al. (2016) further binarized the activations. Rastegari et al. (2016) also learned to scale the binarized weights, and obtained better results. Besides, they proposed the XNOR-network with both weights and activations binarized as in (Hubara et al., 2016). Instead of binarization, ternary-connect quantizes each weight to {−1, 0, 1} (Lin et al., 2016). Similarly, the ternary weight network (Li & Liu, 2016) and DoReFa-net (Zhou et al., 2016) quantize weights to three levels or more. However, though using more bits allows more accurate weight approximations, specialized hardware is needed for the underlying non-binary operations. Besides the huge amount of computation and storage involved, deep networks are difficult to train because of the highly nonconvex objective and inhomogeneous curvature. To alleviate this problem, Hessian-free methods (Martens & Sutskever, 2012) use the second-order information by conjugate gradient. A related method is natural gradient descent (Pascanu & Bengio, 2014), which utilizes ge-
1611.01600#3
1611.01600#5
1611.01600
[ "1605.04711" ]
1611.01600#5
Loss-aware Binarization of Deep Networks
ometry of the underlying parameter manifold. Another approach uses an element-wise adaptive learning rate, as in Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2015). This can also be considered as preconditioning that rescales the gradient so that all dimensions have similar curvatures. In this paper, instead of directly approximating the weights, we propose to consider the effect of binarization on the loss during binarization. We formulate this as an optimization problem using the proximal Newton algorithm (Lee et al., 2014) with a diagonal Hessian. The crux of proximal algorithms is the proximal step. We show that this step has a closed-form solution, whose form is similar to the use of an element-wise adaptive learning rate. The proposed method also reduces to BinaryConnect (Courbariaux et al., 2015) and the Binary-Weight-Network (Hubara et al., 2016) when curvature information is dropped. Experiments on both feedforward and recurrent neural network models show that it outperforms existing binarization algorithms. In particular, BinaryConnect fails on deep recurrent networks because of the exploding gradient problem, while the proposed method still demonstrates robust performance.
1611.01600#4
1611.01600#6
1611.01600
[ "1605.04711" ]
1611.01600#6
Loss-aware Binarization of Deep Networks
Notations: For a vector x, √x denotes the element-wise square root, |x| denotes the element-wise absolute value, ‖x‖_p = (Σ_i |x_i|^p)^{1/p} is the p-norm of x, x ≻ 0 denotes that all entries of x are positive, sign(x) is the vector with [sign(x)]_i = 1 if x_i > 0 and −1 otherwise, and Diag(x) returns a diagonal matrix with x on the diagonal. For two vectors x and y, x ⊙ y denotes the element-wise multiplication and x ⊘ y denotes the element-wise division. For a matrix X, vec(X) returns the vector obtained by stacking the columns of X, and diag(X) returns a diagonal matrix whose diagonal elements are extracted from the diagonal of X.
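As a quick illustration of this notation (not from the paper; the variable names are ours), the element-wise operations map directly onto NumPy:

```python
import numpy as np

x = np.array([0.5, -2.0, 0.0, 3.0])
y = np.array([2.0, 4.0, 1.0, -1.0])

hadamard = x * y                          # x ⊙ y : element-wise multiplication
quotient = x / y                          # x ⊘ y : element-wise division
p_norm = (np.abs(x) ** 2).sum() ** 0.5    # ||x||_2

# The paper's sign(.) never outputs 0, unlike np.sign: entries that are not
# strictly positive are mapped to -1.
sign_x = np.where(x > 0, 1.0, -1.0)

X = np.arange(6.0).reshape(2, 3)
vec_X = X.reshape(-1, order="F")          # vec(X): stack the columns of X
D = np.diag(x)                            # Diag(x): diagonal matrix with x on the diagonal
```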
1611.01600#5
1611.01600#7
1611.01600
[ "1605.04711" ]
1611.01600#7
Loss-aware Binarization of Deep Networks
# 2 RELATED WORK

2.1 WEIGHT BINARIZATION IN DEEP NETWORKS

In a feedforward neural network with L layers, let the weight matrix (or tensor in the case of a convolutional layer) at layer l be W_l. We combine the (full-precision) weights from all layers as w = [w_1ᵀ, w_2ᵀ, ..., w_Lᵀ]ᵀ, where w_l = vec(W_l). Analogously, the binarized weights are denoted as ŵ = [ŵ_1ᵀ, ŵ_2ᵀ, ..., ŵ_Lᵀ]ᵀ. As it is essential to use full-precision weights during updates (Courbariaux et al., 2015), binarized weights are typically only used during the forward and backward propagations, but not in the parameter update. At the tth iteration, the (full-precision) weight w_l^t is updated by using the backpropagated gradient ∇_lℓ(ŵ^{t−1})
1611.01600#6
1611.01600#8
1611.01600
[ "1605.04711" ]
1611.01600#8
Loss-aware Binarization of Deep Networks
(where ℓ is the loss and ∇_lℓ(ŵ^{t−1}) is the partial derivative of ℓ w.r.t. the weights of the lth layer). In the next forward propagation, it is then binarized as ŵ_l^t = Binarize(w_l^t), where Binarize(·) is some binarization scheme. The two most popular binarization schemes are BinaryConnect (Courbariaux et al., 2015) and Binary-Weight-Network (BWN) (Rastegari et al., 2016). In BinaryConnect, binarization is performed by transforming each element of w_l^t
1611.01600#7
1611.01600#9
1611.01600
[ "1605.04711" ]
1611.01600#9
Loss-aware Binarization of Deep Networks
to −1 or +1 using the sign function:¹

Binarize(w_l^t) = sign(w_l^t). \qquad (1)

Besides the binarized weight matrix, a scaling parameter is also learned in BWN. In other words, Binarize(w_l^t) = α_l^t b_l^t, where α_l^t > 0 and b_l^t is binary. They are obtained by minimizing the difference between w_l^t and α_l^t b_l^t:

\alpha_l^t = \frac{\|w_l^t\|_1}{n_l}, \qquad b_l^t = \mathrm{sign}(w_l^t), \qquad (2)

where n_l is the number of weights in layer l. Hubara et al. (2016) further binarized the activations as x̂_l^t = sign(x_l^t), where x_l^t is the activation of the lth layer at iteration t.

2.2 PROXIMAL NEWTON ALGORITHM

The proximal Newton algorithm (Lee et al., 2014) has been popularly used for solving composite optimization problems of the form

min_x f(x) + g(x),

¹A stochastic binarization scheme is also proposed in (Courbariaux et al., 2015). However, it is much more computationally expensive than (1) and so will not be considered here.
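A minimal NumPy sketch of the two binarization schemes just described (function and variable names are ours, not the paper's):

```python
import numpy as np

def binary_connect(w: np.ndarray) -> np.ndarray:
    """BinaryConnect, Eq. (1): each weight is mapped to -1 or +1."""
    return np.where(w > 0, 1.0, -1.0)

def bwn(w: np.ndarray):
    """Binary-Weight-Network, Eq. (2): scaled sign, alpha = ||w||_1 / n_l."""
    n = w.size
    alpha = np.abs(w).sum() / n
    b = np.where(w > 0, 1.0, -1.0)
    return alpha, alpha * b            # alpha * b is the rescaled binary weight

w = np.random.randn(4, 3).ravel()      # one layer's (flattened) full-precision weights
alpha, w_hat = bwn(w)
```

BinaryConnect keeps only the sign, while BWN additionally rescales by the mean absolute weight of the layer.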
1611.01600#8
1611.01600#10
1611.01600
[ "1605.04711" ]
1611.01600#10
Loss-aware Binarization of Deep Networks
where f is convex and smooth, and g is convex but possibly nonsmooth. At iteration t, it generates the next iterate as

x_{t+1} = \arg\min_x \; \nabla f(x_t)^\top (x - x_t) + \tfrac{1}{2}(x - x_t)^\top H (x - x_t) + g(x),

where H is an approximate Hessian matrix of f at x_t. With the use of second-order information, the proximal Newton algorithm converges faster than the proximal gradient algorithm (Lee et al., 2014). Recently, by assuming that f and g have difference-of-convex decompositions (Yuille & Rangarajan, 2002), the proximal Newton algorithm is also extended to the case where g is nonconvex (Rakotomamonjy et al., 2016).
1611.01600#9
1611.01600#11
1611.01600
[ "1605.04711" ]
1611.01600#11
Loss-aware Binarization of Deep Networks
# 3 LOSS-AWARE BINARIZATION

As can be seen, existing weight binarization methods (Courbariaux et al., 2015; Rastegari et al., 2016) simply find the closest binary approximation of w, and ignore its effect on the loss. In this paper, we consider the loss directly during binarization. As in (Rastegari et al., 2016), we also binarize the weight w_l in each layer as ŵ_l = α_l b_l, where α_l > 0 and b_l is binary.

In the following, we make the following assumptions on ℓ: (A1) ℓ is continuously differentiable with Lipschitz-continuous gradient, i.e., there exists β > 0 such that ‖∇ℓ(u) − ∇ℓ(v)‖₂ ≤ β ‖u − v‖₂ for any u, v; (A2) ℓ is bounded from below.
1611.01600#10
1611.01600#12
1611.01600
[ "1605.04711" ]
1611.01600#12
Loss-aware Binarization of Deep Networks
3.1 BINARIZATION USING PROXIMAL NEWTON ALGORITHM

We formulate weight binarization as the following optimization problem:

\min_{\hat{w}} \; \ell(\hat{w}) \qquad (3)
\text{s.t.} \; \hat{w}_l = \alpha_l b_l, \; \alpha_l > 0, \; b_l \in \{\pm 1\}^{n_l}, \; l = 1, \ldots, L, \qquad (4)

where ℓ is the loss. Let C be the feasible region in (4), and define its indicator function: I_C(ŵ) = 0 if ŵ ∈ C, and ∞ otherwise. The problem can then be rewritten as

\min_{\hat{w}} \; \ell(\hat{w}) + I_C(\hat{w}). \qquad (5)

We solve (5) using the proximal Newton method (Section 2.2). At iteration t, the smooth term ℓ(ŵ) is replaced by the second-order expansion

\ell(\hat{w}^{t-1}) + \nabla\ell(\hat{w}^{t-1})^\top (\hat{w}^t - \hat{w}^{t-1}) + \tfrac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top H^{t-1} (\hat{w}^t - \hat{w}^{t-1}),

where H^{t−1} is an estimate of the Hessian of ℓ at ŵ^{t−1}. Note that using the Hessian to capture second-order information is essential for efficient neural network training, as ℓ is often flat in some directions but highly curved in others. By rescaling the gradient, the loss has similar curvatures along all directions. This is also called preconditioning in the literature (Dauphin et al., 2015a).

For neural networks, the exact Hessian is rarely positive semi-definite. This can be problematic as the nonconvex objective leads to indefinite quadratic optimization. Moreover, computing the exact Hessian is both time- and space-inefficient on large networks. To alleviate these problems, a popular approach is to approximate the Hessian by a diagonal positive definite matrix D. One popular choice is the efficient Jacobi preconditioner. Though an efficient approximation of the Hessian under certain conditions, it is not competitive for indefinite matrices (Dauphin et al., 2015a). More recently, it has been shown that equilibration provides a more robust preconditioner in the presence of saddle points (Dauphin et al., 2015a). This is also adopted by popular stochastic optimization algorithms such as RMSprop (Tieleman & Hinton, 2012) and Adam (Kingma & Ba, 2015).
1611.01600#11
1611.01600#13
1611.01600
[ "1605.04711" ]
1611.01600#13
Loss-aware Binarization of Deep Networks
Specifically, the second moment v in these algorithms is an estimator of diag(H²) (Dauphin et al., 2015b). Here, we use the square root of this v, which is readily available in Adam, to construct D = Diag([diag(D_1)ᵀ, ..., diag(D_L)ᵀ]ᵀ), where D_l is the approximate diagonal Hessian at layer l. In general, other estimators of diag(H) can also be used. At the tth iteration of the proximal Newton algorithm, the following subproblem is solved:
1611.01600#12
1611.01600#14
1611.01600
[ "1605.04711" ]
1611.01600#14
Loss-aware Binarization of Deep Networks
\min_{\hat{w}^t} \; \nabla\ell(\hat{w}^{t-1})^\top(\hat{w}^t - \hat{w}^{t-1}) + \tfrac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top D^{t-1} (\hat{w}^t - \hat{w}^{t-1}) \qquad (6)
\text{s.t.} \; \hat{w}_l^t = \alpha_l^t b_l^t, \; \alpha_l^t > 0, \; b_l^t \in \{\pm 1\}^{n_l}, \; l = 1, \ldots, L.

Proposition 3.1 Let d_l^{t−1} ≡ diag(D_l^{t−1}), and

w_l^t \equiv \hat{w}_l^{t-1} - \nabla_l\ell(\hat{w}^{t-1}) \oslash d_l^{t-1}. \qquad (7)
1611.01600#13
1611.01600#15
1611.01600
[ "1605.04711" ]
1611.01600#15
Loss-aware Binarization of Deep Networks
The optimal solution of (6) can be obtained in closed form as

\alpha_l^t = \frac{\|d_l^{t-1} \odot w_l^t\|_1}{\|d_l^{t-1}\|_1}, \qquad b_l^t = \mathrm{sign}(w_l^t). \qquad (8)

Theorem 3.1 Assume that [d_l^t]_k > β for all l, k, t. The objective of (5) produced by the proximal Newton algorithm (with the closed-form update of ŵ^t in Proposition 3.1) converges.

Note that both the loss ℓ and the indicator function I_C(·) in (5) are not convex. Hence, the convergence analysis of the proximal Newton algorithm in (Lee et al., 2014), which is only for convex problems, cannot be applied. Recently, Rakotomamonjy et al. (2016) proposed a nonconvex proximal Newton extension. However, it assumes a difference-of-convex decomposition which does not hold here.

Remark 3.1 When D_l^{t−1} = λI, i.e., the curvature is the same for all dimensions in the lth layer, (8) reduces to the BWN solution in (2). In other words, BWN corresponds to using the proximal gradient algorithm, while the proposed method corresponds to the proximal Newton algorithm with a diagonal Hessian. In composite optimization, it is known that the proximal Newton method is more efficient than the proximal gradient algorithm (Lee et al., 2014; Rakotomamonjy et al., 2016).

Remark 3.2 When α_l^t = 1, (8) reduces to sign(w_l^t), which is the BinaryConnect solution in (1).

From (7) and (8), each iteration first performs gradient descent along ∇_lℓ(ŵ^{t−1}) with an adaptive learning rate 1 ⊘ d_l^{t−1}, and then projects it to a binary solution. As discussed in (Courbariaux et al., 2015), it is important to keep a full-precision weight during training. Hence, we replace (7) by w_l^t ← w_l^{t−1} −
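The closed-form step of Proposition 3.1 is cheap to implement; the sketch below applies Eqs. (7)–(8) to a single layer (NumPy; names are ours). In practice the full-precision weights are carried between iterations, as discussed right after Eq. (8).

```python
import numpy as np

def lab_step(w_hat_prev, grad, d):
    """One loss-aware binarization step for a layer.

    w_hat_prev : previous binarized weights (flattened)
    grad       : gradient of the loss w.r.t. this layer's weights
    d          : diagonal Hessian estimate (all entries positive)
    """
    w_tilde = w_hat_prev - grad / d                       # Eq. (7): preconditioned gradient step
    alpha = np.abs(d * w_tilde).sum() / np.abs(d).sum()   # Eq. (8): optimal scale
    b = np.where(w_tilde > 0, 1.0, -1.0)                  # Eq. (8): optimal binary direction
    return alpha * b

w_hat = lab_step(np.random.randn(6), np.random.randn(6), np.full(6, 0.1))
```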
1611.01600#14
1611.01600#16
1611.01600
[ "1605.04711" ]
1611.01600#16
Loss-aware Binarization of Deep Networks
∇_lℓ(ŵ^{t−1}) ⊘ d_l^{t−1}.

The whole procedure, which will be called Loss-Aware Binarization (LAB), is shown in Algorithm 1. In steps 5 and 6, following (Li & Liu, 2016), we first rescale the layer-l input x_{l−1} with α_l, so that multiplications in dot products and convolutions become additions. While binarizing weights changes most multiplications to additions, binarizing both weights and activations saves even more computations as additions are further changed to XNOR bit operations (Hubara et al., 2016). Our Algorithm 1 can also be easily extended by binarizing the activations with the simple sign function.

3.2 EXTENSION TO RECURRENT NEURAL NETWORKS

The proposed method can be easily extended to recurrent neural networks. Let x_l and h_l be the input and hidden states, respectively, at time step (or depth) l. A typical recurrent neural network has a recurrence of the form h_l = W_x x_l + W_h σ(h_{l−1}) + b (equivalent to the more widely known h_l = σ(W_x x_l + W_h h_{l−1} + b) (Pascanu et al., 2013)). We binarize both the input-to-hidden weight W_x and the hidden-to-hidden weight W_h. Since weights are shared across time in a recurrent network, we only need to binarize W_x and W_h once in each forward propagation. Besides weights, one can also binarize the activations (of the inputs and hidden states) as in the previous section.

In deep networks, the backpropagated gradient takes the form of a product of Jacobian matrices (Pascanu et al., 2013). In a vanilla recurrent neural network,² for activations h_p and h_q at depths p and q, respectively (where p > q),

\frac{\partial h_p}{\partial h_q} = \prod_{q < l \le p} \frac{\partial h_l}{\partial h_{l-1}} = \prod_{q < l \le p} W_h^\top \mathrm{diag}(\sigma'(h_{l-1})).

The necessary condition for exploding gradients is that the largest singular value λ₁(W_h) of W_h is larger than some given constant (Pascanu et al., 2013). The following Proposition shows that for any binary W_h, its largest singular value is lower-bounded by the square root of its dimension.

Proposition 3.2 For any W ∈ {−1, +1}^{m×n} (m ≤
1611.01600#15
1611.01600#17
1611.01600
[ "1605.04711" ]
1611.01600#17
Loss-aware Binarization of Deep Networks
n), λ₁(W) ≥ √n.

²Here, we consider the vanilla recurrent neural network for simplicity. It can be shown that a similar behavior holds for the more commonly used LSTM.

Algorithm 1 Loss-Aware Binarization (LAB) for training a feedforward neural network.
Input: Minibatch {(x_0^t, y^t)}, current full-precision weights {w_l^t}, first moment {m_l^{t−1}}, second moment {v_l^{t−1}}, and learning rate η^t.
1: Forward Propagation
2: for l = 1 to L do
3:   α_l^t = ‖d_l^{t−1} ⊙ w_l^t‖₁ / ‖d_l^{t−1}‖₁;
4:   b_l^t = sign(w_l^t);
5:   rescale the layer-l input: x̃_{l−1}^t = α_l^t x_{l−1}^t;
6:   compute z_l^t with input x̃_{l−1}^t and binary weight b_l^t;
7:   apply batch normalization and a nonlinear activation to z_l^t to obtain x_l^t;
8: end for
9: compute the loss ℓ using x_L^t and y^t;
10: Backward Propagation
11: initialize the output layer's activation gradient ∂ℓ/∂x_L^t;
12: for l = L to 2 do
13:   compute ∂ℓ/∂x_{l−1}^t using ∂ℓ/∂x_l^t, α_l^t and b_l^t;
14: end for
15: Update parameters using Adam
16: for l = 1 to L do
17:   compute the gradient ∇_lℓ(ŵ^t) using ∂ℓ/∂x_l^t and x̃_{l−1}^t;
18:   update the first moment m_l^t = β₁ m_l^{t−1} + (1 − β₁) ∇_lℓ(ŵ^t);
19:   update the second moment v_l^t = β₂ v_l^{t−1} + (1 − β₂) (∇_lℓ(ŵ^t) ⊙ ∇_lℓ(ŵ^t));
20:   compute the unbiased first moment m̂_l^t = m_l^t / (1 − β₁^t);
21:   compute the unbiased second moment v̂_l^t = v_l^t / (1 − β₂^t);
22:   compute the current curvature matrix d_l^t = (1/η^t)(ε + √v̂_l^t);
23:   update the full-precision weights w_l^{t+1} = w_l^t − m̂_l^t ⊘ d_l^t;
24:   update the learning rate η^{t+1} = UpdateRule(η^t, t + 1);
25: end for
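For illustration, the following sketch mirrors steps 18–23 of Algorithm 1 for a single layer, with Adam-style moments providing the diagonal curvature estimate d (NumPy; the hyperparameter defaults are assumed, not taken from the paper):

```python
import numpy as np

def lab_adam_update(w, grad, m, v, t, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """Update full-precision weights w of one layer and return the curvature d."""
    m = beta1 * m + (1 - beta1) * grad             # first moment (step 18)
    v = beta2 * v + (1 - beta2) * grad * grad      # second moment (step 19)
    m_hat = m / (1 - beta1 ** t)                   # bias-corrected moments (steps 20-21)
    v_hat = v / (1 - beta2 ** t)
    d = (np.sqrt(v_hat) + eps) / lr                # diagonal curvature estimate (step 22)
    w = w - m_hat / d                              # full-precision update (step 23)
    return w, m, v, d                              # d is reused to binarize w via Eq. (8)

w = np.random.randn(10)
m = np.zeros_like(w)
v = np.zeros_like(w)
w, m, v, d = lab_adam_update(w, np.random.randn(10), m, v, t=1, lr=1e-3)
```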
1611.01600#16
1611.01600#18
1611.01600
[ "1605.04711" ]
1611.01600#18
Loss-aware Binarization of Deep Networks
Thus, with weight binarization as in BinaryConnect, the exploding gradient problem becomes more severe as the weight matrices are often large. On the other hand, recall that λ₁(cŴ_h) = c λ₁(Ŵ_h) for any non-negative c. The proposed method alleviates this exploding gradient problem by adaptively learning the scaling parameter α_h.

# 4 EXPERIMENTS

In this section, we perform experiments on the proposed binarization scheme with both feedforward networks (Sections 4.1 and 4.2) and recurrent neural networks (Sections 4.3 and 4.4).

4.1 FEEDFORWARD NEURAL NETWORKS

We compare the original full-precision network (without binarization) with the following weight-binarized networks: (i) BinaryConnect; (ii) Binary-Weight-Network (BWN); and (iii) the proposed Loss-Aware Binarized network (LAB). We also compare with networks having both weights and activations binarized:³ (i) BinaryNeuralNetwork (BNN) (Hubara et al., 2016), the weight-and-activation binarized counterpart of BinaryConnect; (ii) XNOR-Network (XNOR) (Rastegari et al., 2016), the counterpart of BWN; (iii) LAB2, the counterpart of the proposed method, which binarizes weights using the proximal Newton method and binarizes activations using a simple sign function.

The setup is similar to that in Courbariaux et al. (2015). We do not perform data augmentation or unsupervised pretraining. Experiments are performed on three commonly used data sets:

³We use the straight-through estimator (Hubara et al., 2016) to compute the gradient involving the sign function.
1611.01600#17
1611.01600#19
1611.01600
[ "1605.04711" ]
1611.01600#19
Loss-aware Binarization of Deep Networks
1. MNIST: This contains 28 × 28 gray images from ten digit classes. We use 50000 images for training, another 10000 for validation, and the remaining 10000 for testing. We use the 4-layer model: 784FC - 2048FC - 2048FC - 2048FC - 10SVM, where FC is a fully-connected layer, and SVM is an L2-SVM output layer using the square hinge loss.
1611.01600#18
1611.01600#20
1611.01600
[ "1605.04711" ]
1611.01600#20
Loss-aware Binarization of Deep Networks
Batch normalization, with a minibatch size of 100, is used to accelerate learning. The maximum number of epochs is 50. The learning rate for the weight-binarized (resp. weight-and-activation-binarized) network starts at 0.01 (resp. 0.005), and decays by a factor of 0.1 at epochs 15 and 25.

2. CIFAR-10: This contains 32 × 32 color images from ten object classes. We use 45000 images for training, another 5000 for validation, and the remaining 10000 for testing. The images are preprocessed with global contrast normalization and ZCA whitening. We use the VGG-like architecture:
1611.01600#19
1611.01600#21
1611.01600
[ "1605.04711" ]
1611.01600#21
Loss-aware Binarization of Deep Networks
(2 × 128C3) - MP2 - (2 × 256C3) - MP2 - (2 × 512C3) - MP2 - (2 × 1024FC) - 10SVM, where C3 is a 3 × 3 ReLU convolution layer, and MP2 is a 2 × 2 max-pooling layer. Batch normalization, with a minibatch size of 50, is used. The maximum number of epochs is 200. The learning rate for the weight-binarized (resp. weight-and-activation-binarized) network starts at 0.03 (resp. 0.02), and decays by a factor of 0.5 after every 15 epochs.

3.
1611.01600#20
1611.01600#22
1611.01600
[ "1605.04711" ]
1611.01600#22
Loss-aware Binarization of Deep Networks
SVHN: This contains 32 × 32 color images from ten digit classes. We use 598388 images for training, another 6000 for validation, and the remaining 26032 for testing. The images are preprocessed with global and local contrast normalization. The model used is: (2 × 64C3) - MP2 - (2 × 128C3) - MP2 - (2 × 256C3) - MP2 - (2 × 1024FC) - 10SVM.
1611.01600#21
1611.01600#23
1611.01600
[ "1605.04711" ]
1611.01600#23
Loss-aware Binarization of Deep Networks
Batch normalization, with a minibatch size of 50, is used. The maximum number of epochs is 50. The learning rate for the weight-binarized (resp. weight-and-activation-binarized) network starts at 0.001 (resp. 0.0005), and decays by a factor of 0.1 at epochs 15 and 25. Since binarization is a form of regularization (Courbariaux et al., 2015), we do not use other regularization methods (like Dropout). All the weights are initialized as in (Glorot & Bengio, 2010). Adam (Kingma & Ba, 2015) is used as the optimization solver.
1611.01600#22
1611.01600#24
1611.01600
[ "1605.04711" ]
1611.01600#24
Loss-aware Binarization of Deep Networks
Table 1 shows the test classification error rates, and Figure 1 shows the convergence of LAB. As can be seen, the proposed LAB achieves the lowest error on MNIST and SVHN. It even outperforms the full-precision network on MNIST, as weight binarization serves as a regularizer. With the use of curvature information, LAB outperforms BinaryConnect and BWN. On CIFAR-10, LAB is slightly outperformed by BinaryConnect, but is still better than the full-precision network. Among the schemes that binarize both weights and activations, LAB2 also outperforms BNN and the XNOR-Network.
1611.01600#23
1611.01600#25
1611.01600
[ "1605.04711" ]
1611.01600#25
Loss-aware Binarization of Deep Networks
Table 1: Test error rates (%) for feedforward neural network models.

| | Method | MNIST | CIFAR-10 | SVHN |
|---|---|---|---|---|
| (no binarization) | full-precision | 1.190 | 11.900 | 2.277 |
| (binarize weights) | BinaryConnect | 1.280 | 9.860 | 2.450 |
| | BWN | 1.310 | 10.510 | 2.535 |
| | LAB | 1.180 | 10.500 | 2.354 |
| (binarize weights and activations) | BNN | 1.470 | 12.870 | 3.500 |
| | XNOR | 1.530 | 12.620 | 3.435 |
| | LAB2 | 1.380 | 12.280 | 3.362 |

4.2 VARYING THE NUMBER OF FILTERS IN CNN

As in Zhou et al. (2016), we study sensitivity to network width by varying the number of filters K on the SVHN data set. As in Section 4.1, we use the model (2 × KC3) - MP2 - (2 × 2KC3) - MP2 - (2 × 4KC3) - MP2 - (2 × 1024FC) - 10SVM.

Results are shown in Table 2. Again, the proposed LAB has the best performance. Moreover, as the number of filters increases, degradation due to binarization becomes less severe. This suggests
1611.01600#24
1611.01600#26
1611.01600
[ "1605.04711" ]
1611.01600#26
Loss-aware Binarization of Deep Networks
(a) MNIST. (b) CIFAR-10. (c) SVHN.
[Figure 1: three convergence plots of LAB, one per data set, plotting training curves against the number of epochs; the caption follows below.]
1611.01600#25
1611.01600#27
1611.01600
[ "1605.04711" ]
1611.01600#27
Loss-aware Binarization of Deep Networks
Figure 1: Convergence of LAB with feedforward neural networks.

that more powerful models (e.g., CNNs with more filters, standard feedforward networks with more hidden units) are less susceptible to performance degradation due to binarization. We speculate that this is because large networks often have larger-than-needed capacities, and so are less affected by the limited expressiveness of binary weights. Another related reason is that binarization acts as regularization, and so contributes positively to the performance.
1611.01600#26
1611.01600#28
1611.01600
[ "1605.04711" ]
1611.01600#28
Loss-aware Binarization of Deep Networks
Table 2: Test error rates (%) on SVHN, for CNNs with different numbers of filters. The number in brackets is the difference between the errors of the binarized scheme and the full-precision network.

| Method | K = 16 | K = 32 | K = 64 | K = 128 |
|---|---|---|---|---|
| full-precision | 2.738 | 2.585 | 2.277 | 2.146 |
| BinaryConnect | 3.200 (0.462) | 2.777 (0.192) | 2.450 (0.173) | 2.315 (0.169) |
| BWN | 3.119 (0.461) | 2.743 (0.158) | 2.535 (0.258) | 2.319 (0.173) |
| LAB | 3.050 (0.312) | 2.742 (0.157) | 2.354 (0.077) | 2.200 (0.054) |
1611.01600#27
1611.01600#29
1611.01600
[ "1605.04711" ]
1611.01600#29
Loss-aware Binarization of Deep Networks
4.3 RECURRENT NEURAL NETWORKS

In this section, we perform experiments on the popular long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997). Performance is evaluated in the context of character-level language modeling. The LSTM takes as input a sequence of characters, and predicts the next character at each time step. The training objective is the cross-entropy loss over all target sequences. Following Karpathy et al. (2016), we use two data sets (with the same training/validation/test set splitting): (i) Leo Tolstoy's War and Peace, which consists of 3258246 characters of almost entirely English text with minimal markup and has a vocabulary size of 87; and (ii) the source code of the Linux Kernel, which consists of 6206996 characters and has a vocabulary size of 101.

We use a one-layer LSTM with 512 cells. The maximum number of epochs is 200, and the number of time steps is 100. The initial learning rate is 0.002. After 10 epochs, it is decayed by a factor of 0.98 after each epoch. The weights are initialized uniformly in [−0.08, 0.08]. After each iteration, the gradients are clipped to the range [−5, 5], and all the updated weights are clipped to [−1, 1]. For the weight-and-activation-binarized networks, we do not binarize the inputs, as they are one-hot vectors in this language modeling task.

Table 3 shows the testing cross-entropy values. As in Section 4.1, the proposed LAB outperforms other weight binarization schemes, and is even better than the full-precision network on the Linux Kernel data set. BinaryConnect does not work well here because of the problem of exploding gradients (see Section 3.2 and more results in Section 4.4). On the other hand, BWN and the proposed LAB scale the binary weight matrix and perform better. LAB also performs better than BWN as curvature information is considered. Similarly, among schemes that binarize both weights and activations, the proposed LAB2 also outperforms BNN and the XNOR-Network.
1611.01600#28
1611.01600#30
1611.01600
[ "1605.04711" ]
1611.01600#30
Loss-aware Binarization of Deep Networks
4.4 VARYING THE NUMBER OF TIME STEPS IN LSTM

In this experiment, we study the sensitivity of the binarization schemes with varying numbers of unrolled time steps (TS) in the LSTM. Results are shown in Table 4. Again, the proposed LAB has the best performance. When TS = 10, the LSTM is relatively shallow, and all binarization schemes have performance similar to the full-precision network. When TS ≥ 50, BinaryConnect fails,
1611.01600#29
1611.01600#31
1611.01600
[ "1605.04711" ]
1611.01600#31
Loss-aware Binarization of Deep Networks
while BWN and the proposed LAB perform better (as discussed in Section 3.2). Figure 2 shows the distributions of the hidden-to-hidden weight gradients for TS = 10 and 100. As can be seen, while all models have similar gradient distributions at TS = 10, the gradient values in BinaryConnect are much higher than those of the other algorithms for the deeper network (TS = 100).

Table 3: Testing cross-entropy values of LSTM.

| | Method | War and Peace | Linux Kernel |
|---|---|---|---|
| (no binarization) | full-precision | 1.268 | 1.329 |
| (binarize weights) | BinaryConnect | 2.942 | 3.532 |
| | BWN | 1.313 | 1.307 |
| | LAB | 1.291 | 1.305 |
| (binarize weights and activations) | BNN | 3.050 | 3.624 |
| | XNOR | 1.424 | 1.426 |
| | LAB2 | 1.376 | 1.409 |

Table 4: Testing cross-entropy on War and Peace, for LSTMs with different time steps (TS). The difference between the cross-entropies of the binarized scheme and the full-precision network is shown in brackets.

| Method | TS = 10 | TS = 50 | TS = 100 | TS = 150 |
|---|---|---|---|---|
| full-precision | 1.527 | 1.310 | 1.268 | 1.249 |
| BinaryConnect | 1.528 (0.001) | 2.980 (1.670) | 2.942 (1.674) | 2.872 (1.623) |
| BWN | 1.532 (0.005) | 1.325 (0.015) | 1.313 (0.045) | 1.311 (0.062) |
| LAB | 1.527 (0.000) | 1.324 (0.014) | 1.291 (0.023) | 1.285 (0.036) |

(a) TS = 10. (b) TS = 100.
[Figure 2(a): histogram of weight-gradient magnitudes (percentage of elements vs. gradient magnitude) for the full-precision, BinaryConnect, BWN and LAB models at TS = 10.]
1611.01600#30
1611.01600#32
1611.01600
[ "1605.04711" ]
1611.01600#32
Loss-aware Binarization of Deep Networks
[Figure 2(b): histogram of weight-gradient magnitudes (percentage of elements vs. gradient magnitude) for the same four models at TS = 100.]

Figure 2: Distribution of weight gradients on War and Peace, for LSTMs with different time steps.

Note from Table 4 that as the time step increases, all except BinaryConnect show better performance. However, degradation due to binarization also becomes more severe. This is because the weights are shared across time steps. Hence, error due to binarization also propagates across time.
1611.01600#31
1611.01600#33
1611.01600
[ "1605.04711" ]
1611.01600#33
Loss-aware Binarization of Deep Networks
# 5 CONCLUSION

In this paper, we propose a binarization algorithm that directly considers its effect on the loss during binarization. The binarized weights are obtained using a proximal Newton algorithm with a diagonal Hessian approximation. The proximal step has an efficient closed-form solution, and the second-order information in the Hessian can be readily obtained from the Adam optimizer. Experiments show that the proposed algorithm outperforms existing binarization schemes, has performance comparable to that of the original full-precision network, and is also robust for wide and deep networks.
1611.01600#32
1611.01600#34
1611.01600
[ "1605.04711" ]
1611.01600#34
Loss-aware Binarization of Deep Networks
ACKNOWLEDGMENTS

This research was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (Grant 614513). We thank Yongqi Zhang for helping with the experiments, and the developers of Theano (Theano Development Team, 2016), Pylearn2 (Goodfellow et al., 2013) and Lasagne. We also thank NVIDIA for the support of a Titan X GPU.

# REFERENCES

M. Courbariaux, Y. Bengio, and J.P.
1611.01600#33
1611.01600#35
1611.01600
[ "1605.04711" ]
1611.01600#35
Loss-aware Binarization of Deep Networks
David. BinaryConnect: Training deep neural networks with binary weights during propagations. In NIPS, pp. 3105–3113, 2015. Y. Dauphin, H. de Vries, and Y. Bengio. Equilibrated adaptive learning rates for non-convex optimization. In NIPS, pp. 1504–1512, 2015a. Y. Dauphin, H. de Vries, J. Chung, and Y. Bengio.
1611.01600#34
1611.01600#36
1611.01600
[ "1605.04711" ]
1611.01600#36
Loss-aware Binarization of Deep Networks
RMSprop and equilibrated adaptive learning rates for non-convex optimization. Technical Report arXiv:1502.04390, 2015b. J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011. X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTAT, pp. 249–256, 2010. Y. Gong, L. Liu, M. Yang, and L.
1611.01600#35
1611.01600#37
1611.01600
[ "1605.04711" ]
1611.01600#37
Loss-aware Binarization of Deep Networks
Bourdev. Compressing deep convolutional networks using vector quantization. Technical Report arXiv:1412.6115, 2014. I.J. Goodfellow, D. Warde-Farley, P. Lamblin, V. Dumoulin, M. Mirza, R. Pascanu, J. Bergstra, F. Bastien, and Y. Bengio. Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214, 2013.
1611.01600#36
1611.01600#38
1611.01600
[ "1605.04711" ]
1611.01600#38
Loss-aware Binarization of Deep Networks
S. Han, H. Mao, and W.J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. In ICLR, 2016. S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, pp. 1735–1780, 1997. I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks. In NIPS, pp. 4107–4115, 2016. A. Karpathy, J. Johnson, and F.-F. Li.
1611.01600#37
1611.01600#39
1611.01600
[ "1605.04711" ]
1611.01600#39
Loss-aware Binarization of Deep Networks
Visualizing and understanding recurrent networks. In ICLR, 2016. Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In ICLR, 2016. D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015. Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
1611.01600#38
1611.01600#40
1611.01600
[ "1605.04711" ]
1611.01600#40
Loss-aware Binarization of Deep Networks
J.D. Lee, Y. Sun, and M.A. Saunders. Proximal Newton-type methods for minimizing composite functions. SIAM Journal on Optimization, 24(3):1420–1443, 2014. F. Li and B. Liu. Ternary weight networks. Technical Report arXiv:1605.04711, 2016. Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio. Neural networks with few multiplications. In ICLR, 2016. J. Martens and I. Sutskever.
1611.01600#39
1611.01600#41
1611.01600
[ "1605.04711" ]
1611.01600#41
Loss-aware Binarization of Deep Networks
Training deep and recurrent networks with Hessian-free optimization. In Neural Networks: Tricks of the Trade, pp. 479–535. Springer, 2012. A. Novikov, D. Podoprikhin, A. Osokin, and D.P. Vetrov. Tensorizing neural networks. In NIPS, pp. 442–450, 2015. R. Pascanu and Y. Bengio. Revisiting natural gradient for deep networks. In ICLR, 2014. R. Pascanu, T. Mikolov, and Y. Bengio.
1611.01600#40
1611.01600#42
1611.01600
[ "1605.04711" ]
1611.01600#42
Loss-aware Binarization of Deep Networks
On the difficulty of training recurrent neural networks. In ICLR, pp. 1310–1318, 2013. A. Rakotomamonjy, R. Flamary, and G. Gasso. DC proximal Newton for nonconvex optimization problems. IEEE Transactions on Neural Networks and Learning Systems, 27(3):636–647, 2016. M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, 2016.
1611.01600#41
1611.01600#43
1611.01600
[ "1605.04711" ]
1611.01600#43
Loss-aware Binarization of Deep Networks
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688. T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude, 2012. A.L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). NIPS, 2:1033–
1611.01600#42
1611.01600#44
1611.01600
[ "1605.04711" ]
1611.01600#44
Loss-aware Binarization of Deep Networks
1040, 2002. M.D. Zeiler. ADADELTA: An adaptive learning rate method. Technical Report arXiv:1212.5701, 2012. S. Zhou, Z. Ni, X. Zhou, H. Wen, Y. Wu, and Y. Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. Technical Report arXiv:1606.06160, 2016.
1611.01600#43
1611.01600#45
1611.01600
[ "1605.04711" ]
1611.01600#45
Loss-aware Binarization of Deep Networks
# A PROOF OF PROPOSITION 3.1

Denote ‖x‖²_Q = xᵀQx. The objective of (6) can be rewritten as

\nabla\ell(\hat{w}^{t-1})^\top(\hat{w}^t - \hat{w}^{t-1}) + \tfrac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top D^{t-1} (\hat{w}^t - \hat{w}^{t-1})
= \tfrac{1}{2} \sum_{l=1}^{L} \| \hat{w}_l^t - \hat{w}_l^{t-1} + \nabla_l\ell(\hat{w}^{t-1}) \oslash d_l^{t-1} \|^2_{D_l^{t-1}} + c_1
= \tfrac{1}{2} \sum_{l=1}^{L} \| \hat{w}_l^t - w_l^t \|^2_{D_l^{t-1}} + c_1
= \tfrac{1}{2} \sum_{l=1}^{L} \sum_{i=1}^{n_l} [d_l^{t-1}]_i \big( \alpha_l^t [b_l^t]_i - [w_l^t]_i \big)^2 + c_1,

where c_1 = -\tfrac{1}{2} \sum_{l=1}^{L} \| \nabla_l\ell(\hat{w}^{t-1}) \oslash d_l^{t-1} \|^2_{D_l^{t-1}} is independent of α_l^t and b_l^t. Since α_l^t > 0 and d_l^{t-1} ≻ 0 for l = 1, 2, ..., L, we have b_l^t = sign(w_l^t). Moreover,

\tfrac{1}{2} \sum_{l=1}^{L} \sum_{i=1}^{n_l} [d_l^{t-1}]_i \big( \alpha_l^t [b_l^t]_i - [w_l^t]_i \big)^2 + c_1
= \tfrac{1}{2} \sum_{l=1}^{L} \sum_{i=1}^{n_l} [d_l^{t-1}]_i \big( \alpha_l^t - |[w_l^t]_i| \big)^2 + c_1
= \tfrac{1}{2} \sum_{l=1}^{L} \big( \|d_l^{t-1}\|_1 (\alpha_l^t)^2 - 2 \|d_l^{t-1} \odot w_l^t\|_1 \alpha_l^t \big) + c_2,

where c_2 = c_1 + \tfrac{1}{2} \sum_{l=1}^{L} \|d_l^{t-1} \odot w_l^t \odot w_l^t\|_1. Thus, the optimal α_l^t is

\alpha_l^t = \frac{\|d_l^{t-1} \odot w_l^t\|_1}{\|d_l^{t-1}\|_1}.
1611.01600#44
1611.01600#46
1611.01600
[ "1605.04711" ]
1611.01600#46
Loss-aware Binarization of Deep Networks
# B PROOF OF THEOREM 3.1

Let α = [α_1^t, ..., α_L^t]ᵀ, and denote the objective in (3) by F(ŵ, α). As ŵ^t is the minimizer in (6), we have

\ell(\hat{w}^{t-1}) + \nabla\ell(\hat{w}^{t-1})^\top(\hat{w}^t - \hat{w}^{t-1}) + \tfrac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top D^{t-1} (\hat{w}^t - \hat{w}^{t-1}) \le \ell(\hat{w}^{t-1}). \qquad (9)

From Assumption A1, we have

\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) + \nabla\ell(\hat{w}^{t-1})^\top(\hat{w}^t - \hat{w}^{t-1}) + \tfrac{\beta}{2} \| \hat{w}^t - \hat{w}^{t-1} \|_2^2. \qquad (10)

Using (9) and (10), we obtain

\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) - \tfrac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top (D^{t-1} - \beta I)(\hat{w}^t - \hat{w}^{t-1})
\le \ell(\hat{w}^{t-1}) - \tfrac{\min_{k,l}([d_l^{t-1}]_k) - \beta}{2} \| \hat{w}^t - \hat{w}^{t-1} \|_2^2.

Let c_3 = \min_{k,l,t}([d_l^{t-1}]_k - \beta) > 0. Then

\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) - \tfrac{c_3}{2} \| \hat{w}^t - \hat{w}^{t-1} \|_2^2. \qquad (11)

From Assumption A2, ℓ is bounded from below. Together with the fact that {ℓ(ŵ^t)} is monotonically decreasing from (11), the sequence {ℓ(ŵ^t)} converges, and thus the sequence {F(ŵ^t, α^t)} also converges.

# C PROOF OF PROPOSITION 3.2

Let the singular values of W be λ₁(W) ≥ λ₂(W) ≥ ... ≥ λ_m(W). Since Σ_{i=1}^{m} λ_i²(W) = ‖W‖²_F = mn, we have λ₁²(W) ≥ (1/m) Σ_{i=1}^{m} λ_i²(W) = n. Thus, λ₁(W) ≥ √n.
1611.01600#45
1611.01600
[ "1605.04711" ]
1611.01578#0
Neural Architecture Search with Reinforcement Learning
Under review as a conference paper at ICLR 2017

# NEURAL ARCHITECTURE SEARCH WITH REINFORCEMENT LEARNING

# Barret Zoph∗, Quoc V. Le
Google Brain
{barretzoph,qvl}@google.com

# ABSTRACT

Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
1611.01578#1
1611.01578
[ "1611.01462" ]
1611.01578#1
Neural Architecture Search with Reinforcement Learning
# INTRODUCTION

The last few years have seen much success of deep neural networks in many challenging applications, such as speech recognition (Hinton et al., 2012), image recognition (LeCun et al., 1998; Krizhevsky et al., 2012) and machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016). Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a). Although it has become easier, designing architectures still requires a lot of expert knowledge and takes ample time.

[Figure 1: An overview of Neural Architecture Search. The controller (RNN) samples an architecture A with probability p; a child network with architecture A is trained to obtain accuracy R; the gradient of p is then scaled by R to update the controller.]

∗Work done as a member of the Google Brain Residency program (g.co/brainresidency).

This paper presents Neural Architecture Search, a gradient-based method for finding good architectures (see Figure 1). Our work is based on the observation that the structure and connectivity of a neural network can be typically specified by a variable-length string. It is therefore possible to use a recurrent network, the controller, to generate such a string. Training the network specified by the string, the "child network",
1611.01578#0
1611.01578#2
1611.01578
[ "1611.01462" ]
1611.01578#2
Neural Architecture Search with Reinforcement Learning
on the real data will result in an accuracy on a validation set. Using this accuracy as the reward signal, we can compute the policy gradient to update the controller. As a result, in the next iteration, the controller will give higher probabilities to architectures that receive high accuracies. In other words, the controller will learn to improve its search over time.

Our experiments show that Neural Architecture Search can design good models from scratch, an achievement considered not possible with other methods. On image recognition with CIFAR-10, Neural Architecture Search can find a novel ConvNet model that is better than most human-invented architectures. Our CIFAR-10 model achieves a 3.65 test set error, while being 1.05x faster than the current best model. On language modeling with Penn Treebank, Neural Architecture Search can design a novel recurrent cell that is also better than previous RNN and LSTM architectures. The cell that our model found achieves a test set perplexity of 62.4 on the Penn Treebank dataset, which is 3.6 perplexity better than the previous state-of-the-art.

# 2 RELATED WORK

Hyperparameter optimization is an important research topic in machine learning, and is widely used in practice (Bergstra et al., 2011; Bergstra & Bengio, 2012; Snoek et al., 2012; 2015; Saxena & Verbeek, 2016). Despite their success, these methods are still limited in that they only search models from a fixed-
1611.01578#1
1611.01578#3
1611.01578
[ "1611.01462" ]
1611.01578#3
Neural Architecture Search with Reinforcement Learning
length space. In other words, it is difficult to ask them to generate a variable-length configuration that specifies the structure and connectivity of a network. In practice, these methods often work better if they are supplied with a good initial model (Bergstra & Bengio, 2012; Snoek et al., 2012; 2015). There are Bayesian optimization methods that allow searching non-fixed-length architectures (Bergstra et al., 2013; Mendoza et al., 2016), but they are less general and less flexible than the method proposed in this paper.

Modern neuro-evolution algorithms, e.g., Wierstra et al. (2005); Floreano et al. (2008); Stanley et al. (2009), on the other hand, are much more flexible for composing novel models, yet they are usually less practical at a large scale. Their limitations lie in the fact that they are search-based methods, thus they are slow or require many heuristics to work well.

Neural Architecture Search has some parallels to program synthesis and inductive programming, the idea of searching for a program from examples (Summers, 1977; Biermann, 1978). In machine learning, probabilistic program induction has been used successfully in many settings, such as learning to solve simple Q&A (Liang et al., 2010; Neelakantan et al., 2015; Andreas et al., 2016), sort a list of numbers (Reed & de Freitas, 2015), and learning with very few examples (Lake et al., 2015).

The controller in Neural Architecture Search is auto-regressive, which means it predicts hyperparameters one at a time, conditioned on previous predictions. This idea is borrowed from the decoder in end-to-end sequence to sequence learning (Sutskever et al., 2014). Unlike sequence to sequence learning, our method optimizes a non-differentiable metric, which is the accuracy of the child network. It is therefore similar to the work on BLEU optimization in Neural Machine Translation (Ranzato et al., 2015; Shen et al., 2016). Unlike these approaches, our method learns directly from the reward signal without any supervised bootstrapping.

Also related to our work is the idea of learning to learn or meta-learning (Thrun & Pratt, 2012), a general framework of using information learned in one task to improve a future task. More closely related is the idea of using a neural network to learn the gradient descent updates for another network (Andrychowicz et al., 2016) and the idea of using reinforcement learning to find update policies for another network (Li & Malik, 2016).

# 3 METHODS

In the following section, we will first
1611.01578#2
1611.01578#4
1611.01578
[ "1611.01462" ]
1611.01578#4
Neural Architecture Search with Reinforcement Learning
Also related to our work is the idea of learning to learn or meta-learning (Thrun & Pratt, 2012), a general framework of using information learned in one task to improve a future task. More closely related is the idea of using a neural network to learn the gradient descent updates for another net- work (Andrychowicz et al., 2016) and the idea of using reinforcement learning to ï¬ nd update policies for another network (Li & Malik, 2016). # 3 METHODS In the following section, we will ï¬
1611.01578#3
1611.01578#5
1611.01578
[ "1611.01462" ]
1611.01578#5
Neural Architecture Search with Reinforcement Learning
rst describe a simple method of using a recurrent network to generate convolutional architectures. We will show how the recurrent network can be trained with a policy gradient method to maximize the expected accuracy of the sampled architectures. We will present several improvements of our core approach such as forming skip connections to increase model complexity and using a parameter server approach to speed up training. In the last part of 2 # Under review as a conference paper at ICLR 2017 the section, we will focus on generating recurrent architectures, which is another key contribution of our paper. 3.1 GENERATE MODEL DESCRIPTIONS WITH A CONTROLLER RECURRENT NEURAL NETWORK In Neural Architecture Search, we use a controller to generate architectural hyperparameters of neural networks.
1611.01578#4
1611.01578#6
1611.01578
[ "1611.01462" ]
1611.01578#6
Neural Architecture Search with Reinforcement Learning
To be ï¬ exible, the controller is implemented as a recurrent neural network. Letâ s suppose we would like to predict feedforward neural networks with only convolutional layers, we can use the controller to generate their hyperparameters as a sequence of tokens: Number| | Filter *, lof Filtersf, | Height |, tf f Stride Number Filter Width J, Jof Filters), | Height |, x H A >< Layer N-1 Layer N Layer . . N+1 Figure 2:
1611.01578#5
1611.01578#7
1611.01578
[ "1611.01462" ]
1611.01578#7
Neural Architecture Search with Reinforcement Learning
How our controller recurrent neural network samples a simple convolutional network. It predicts ï¬ lter height, ï¬ lter width, stride height, stride width, and number of ï¬ lters for one layer and repeats. Every prediction is carried out by a softmax classiï¬ er and then fed into the next time step as input. In our experiments, the process of generating an architecture stops if the number of layers exceeds a certain value. This value follows a schedule where we increase it as training progresses. Once the controller RNN ï¬ nishes generating an architecture, a neural network with this architecture is built and trained. At convergence, the accuracy of the network on a held-out validation set is recorded. The parameters of the controller RNN, θc, are then optimized in order to maximize the expected validation accuracy of the proposed architectures. In the next section, we will describe a policy gradient method which we use to update parameters θc so that the controller RNN generates better architectures over time. # 3.2 TRAINING WITH REINFORCE The list of tokens that the controller predicts can be viewed as a list of actions a1:T to design an architecture for a child network. At convergence, this child network will achieve an accuracy R on a held-out dataset. We can use this accuracy R as the reward signal and use reinforcement learning to train the controller.
1611.01578#6
1611.01578#8
1611.01578
[ "1611.01462" ]
1611.01578#8
Neural Architecture Search with Reinforcement Learning
More concretely, to find the optimal architecture, we ask our controller to maximize its expected reward, represented by $J(\theta_c)$: $J(\theta_c) = E_{P(a_{1:T};\theta_c)}[R]$. Since the reward signal R is non-differentiable, we need to use a policy gradient method to iteratively update $\theta_c$. In this work, we use the REINFORCE rule from Williams (1992): $\nabla_{\theta_c} J(\theta_c) = \sum_{t=1}^{T} E_{P(a_{1:T};\theta_c)}\left[\nabla_{\theta_c} \log P(a_t \mid a_{(t-1):1}; \theta_c)\, R\right]$. An empirical approximation of the above quantity is: $\frac{1}{m} \sum_{k=1}^{m} \sum_{t=1}^{T} \nabla_{\theta_c} \log P(a_t \mid a_{(t-1):1}; \theta_c)\, R_k$
1611.01578#7
1611.01578#9
1611.01578
[ "1611.01462" ]
1611.01578#9
Neural Architecture Search with Reinforcement Learning
where m is the number of different architectures that the controller samples in one batch and T is the number of hyperparameters our controller has to predict to design a neural network architecture. The validation accuracy that the k-th neural network architecture achieves after being trained on a training dataset is $R_k$. The above update is an unbiased estimate of our gradient, but has a very high variance. In order to reduce the variance of this estimate we employ a baseline function: $\frac{1}{m} \sum_{k=1}^{m} \sum_{t=1}^{T} \nabla_{\theta_c} \log P(a_t \mid a_{(t-1):1}; \theta_c)\,(R_k - b)$. As long as the baseline function b does not depend on the current action, this is still an unbiased gradient estimate. In this work, our baseline b is an exponential moving average of the previous architecture accuracies. Accelerate Training with Parallelism and Asynchronous Updates: In Neural Architecture Search, each gradient update to the controller parameters $\theta_c$ corresponds to training one child network to convergence. As training a child network can take hours, we use distributed training and asynchronous parameter updates in order to speed up the learning process of the controller (Dean et al., 2012). We use a parameter-server scheme where we have a parameter server of S shards that store the shared parameters for K controller replicas. Each controller replica samples m different child architectures that are trained in parallel. The controller then collects gradients according to the results of that minibatch of m architectures at convergence and sends them to the parameter server in order to update the weights across all controller replicas. In our implementation, convergence of each child network is reached when its training exceeds a certain number of epochs. This scheme of parallelism is summarized in Figure 3. Figure 3: Distributed training for Neural Architecture Search. We use a set of S parameter servers to store and send parameters to K controller replicas. Each controller replica then samples m architectures and runs the multiple child models in parallel.
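A minimal sketch of how this baselined update could be assembled from one batch of sampled architectures, assuming the per-architecture score-function gradients have already been computed; the function name and the EMA decay value are our assumptions, not the paper's.

```python
import numpy as np

def reinforce_gradient(grad_log_probs, rewards, baseline, ema_decay=0.95):
    """One REINFORCE update from a batch of m sampled architectures.

    grad_log_probs: list of m arrays, each sum_t d(log P(a_t))/d(theta_c) for one child.
    rewards: list of m child validation accuracies R_k.
    baseline: running exponential moving average of past rewards (the b in the text).
    Returns the gradient estimate and the updated baseline.
    """
    m = len(rewards)
    grad = np.zeros_like(grad_log_probs[0])
    for g_k, r_k in zip(grad_log_probs, rewards):
        grad += g_k * (r_k - baseline)      # (R_k - b) scales each score-function term
    grad /= m
    # b must not depend on the current actions, so it is updated after forming the estimate.
    for r_k in rewards:
        baseline = ema_decay * baseline + (1.0 - ema_decay) * r_k
    return grad, baseline
```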
1611.01578#8
1611.01578#10
1611.01578
[ "1611.01462" ]
1611.01578#10
Neural Architecture Search with Reinforcement Learning
The accuracy of each child model is recorded to compute the gradients with respect to $\theta_c$, which are then sent back to the parameter servers. 3.3 INCREASE ARCHITECTURE COMPLEXITY WITH SKIP CONNECTIONS AND OTHER LAYER TYPES In Section 3.1, the search space does not have skip connections, or branching layers used in modern architectures such as GoogleNet (Szegedy et al., 2015) and Residual Net (He et al., 2016a). In this section we introduce a method that allows our controller to propose skip connections or branching layers, thereby widening the search space. To enable the controller to predict such connections, we use a set-selection type attention (Neelakantan et al., 2015) which was built upon the attention mechanism (Bahdanau et al., 2015; Vinyals et al., 2015). At layer N, we add an anchor point which has N-1 content-based sigmoids to indicate the previous layers that need to be connected. Each sigmoid is a function of the current hidden state of the controller and the previous hidden states of the previous N-1 anchor points: $P(\text{Layer } j \text{ is an input to layer } i) = \mathrm{sigmoid}(v^{T}\tanh(W_{prev} \cdot h_j + W_{curr} \cdot h_i))$, where $h_j$ represents the hidden state of the controller at the anchor point for the j-th layer, and j ranges from 0 to N-1. We then sample from these sigmoids to decide which previous layers are used as inputs to the current layer. The matrices $W_{prev}$, $W_{curr}$ and v are trainable parameters. As
1611.01578#9
1611.01578#11
1611.01578
[ "1611.01462" ]
1611.01578#11
Neural Architecture Search with Reinforcement Learning
these connections are also defined by probability distributions, the REINFORCE method still applies without any significant modifications. Figure 4 shows how the controller uses skip connections to decide what layers it wants as inputs to the current layer. Figure 4: The controller uses anchor points and set-selection attention to form skip connections. In our framework, if one layer has many input layers then all input layers are concatenated in the depth dimension. Skip connections can cause "compilation failures" where one layer is not compatible with another layer, or one layer may not have any input or output. To circumvent these issues, we employ three simple techniques. First, if a layer is not connected to any input layer then the image is used as the input layer. Second, at the final layer we take all layer outputs that have not been connected and concatenate them before sending this final hidden state to the classifier.
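A small sketch of this set-selection attention step, under assumed dimensions and initialization; it samples each candidate skip connection independently from its sigmoid probability.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 32                                   # controller hidden size (assumed)
W_prev = rng.normal(0, 0.1, (H, H))
W_curr = rng.normal(0, 0.1, (H, H))
v = rng.normal(0, 0.1, H)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_skip_connections(anchor_states, h_current):
    """anchor_states: hidden states h_j at the previous N-1 anchor points.
    Returns the indices j sampled as inputs to the current layer."""
    inputs = []
    for j, h_j in enumerate(anchor_states):
        p = sigmoid(v @ np.tanh(W_prev @ h_j + W_curr @ h_current))
        if rng.random() < p:             # each connection is sampled independently
            inputs.append(j)
    return inputs
```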
1611.01578#10
1611.01578#12
1611.01578
[ "1611.01462" ]
1611.01578#12
Neural Architecture Search with Reinforcement Learning
Lastly, if input layers to be concatenated have different sizes, we pad the small layers with zeros so that the concatenated layers have the same sizes. Finally, in Section 3.1, we do not predict the learning rate and we also assume that the architectures consist of only convolutional layers, which is also quite restrictive. It is possible to add the learning rate as one of the predictions. Additionally, it is also possible to predict pooling, local contrast normalization (Jarrett et al., 2009; Krizhevsky et al., 2012), and batchnorm (Ioffe & Szegedy, 2015) in the architectures. To be able to add more types of layers, we need to add an additional step in the controller RNN to predict the layer type, then other hyperparameters associated with it. 3.4 GENERATE RECURRENT CELL ARCHITECTURES In this section, we will modify the above method to generate recurrent cells. At every time step t, the controller needs to find a functional form for $h_t$ that takes $x_t$ and $h_{t-1}$ as inputs. The simplest way is to have $h_t = \tanh(W_1 \cdot x_t + W_2 \cdot h_{t-1})$,
1611.01578#11
1611.01578#13
1611.01578
[ "1611.01462" ]
1611.01578#13
Neural Architecture Search with Reinforcement Learning
which is the formulation of a basic recurrent cell. A more complicated formulation is the widely-used LSTM recurrent cell (Hochreiter & Schmidhuber, 1997). The computations for basic RNN and LSTM cells can be generalized as a tree of steps that take $x_t$ and $h_{t-1}$ as inputs and produce $h_t$ as the final output. The controller RNN needs to label each node in the tree with a combination method (addition, elementwise multiplication, etc.) and an activation function (tanh, sigmoid, etc.) to merge two inputs and produce one output. Two outputs are then fed as inputs to the next node in the tree. To allow the controller RNN to select these methods and functions, we index the nodes in the tree in an order so that the controller RNN can visit each node one by one and label the needed hyperparameters. Inspired by the construction of the LSTM cell (Hochreiter & Schmidhuber, 1997), we also need cell variables $c_{t-1}$ and $c_t$ to represent the memory states. To incorporate these variables, we need the controller RNN to predict what nodes in the tree to connect these two variables to. These predictions can be done in the last two blocks of the controller RNN. To make this process more clear, we show an example in Figure 5, for a tree structure that has two leaf nodes and one internal node. The leaf nodes are indexed by 0 and 1, and the internal node is indexed by 2. The controller RNN needs to first predict 3 blocks, each block specifying a combination method and an activation function for each tree index. After that it needs to predict the last 2 blocks that specify how to connect $c_t$ and $c_{t-1}$ to temporary variables inside the tree. Specifically,
1611.01578#12
1611.01578#14
1611.01578
[ "1611.01462" ]
1611.01578#14
Neural Architecture Search with Reinforcement Learning
The controller RNN needs to ï¬ rst predict 3 blocks, each block specifying a combina- tion method and an activation function for each tree index. After that it needs to predict the last 2 blocks that specify how to connect ct and ctâ 1 to temporary variables inside the tree. Speciï¬ cally, 5 # Under review as a conference paper at ICLR 2017 he he & £5 BRBSRRRRG oe tit it i Index 2 we relu rN < â > < â + < â » < â » < - Mer Xt Nea Xt â Teeindexoâ â Treeindexaâ â Treetndex2â ~Cell inject Cell indices Tree Tree Index 0 Index 1 elem_mult, Figure 5: An example of a recurrent cell constructed from a tree that has two leaf nodes (base 2) and one internal node. Left: the tree that deï¬
1611.01578#13
1611.01578#15
1611.01578
[ "1611.01462" ]
1611.01578#15
Neural Architecture Search with Reinforcement Learning
defines the computation steps to be predicted by the controller. Center: an example set of predictions made by the controller for each computation step in the tree. Right: the computation graph of the recurrent cell constructed from the example predictions of the controller. Specifically, according to the predictions of the controller RNN in this example, the following computation steps will occur: • The controller predicts Add and Tanh for tree index 0; this means we need to compute $a_0 = \tanh(W_1 \cdot x_t + W_2 \cdot h_{t-1})$. • The controller predicts ElemMult and ReLU for tree index 1; this means we need to compute $a_1 = \mathrm{ReLU}((W_3 \cdot x_t) \odot (W_4 \cdot h_{t-1}))$. • The controller predicts 0 for the second element of the "Cell Index", and Add and ReLU for the elements in "Cell Inject", which means we need to compute $a_0^{new} = \mathrm{ReLU}(a_0 + c_{t-1})$. Notice that we do
1611.01578#14
1611.01578#16
1611.01578
[ "1611.01462" ]
1611.01578#16
Neural Architecture Search with Reinforcement Learning
not have any learnable parameters for the internal nodes of the tree. • The controller predicts ElemMult and Sigmoid for tree index 2; this means we need to compute $a_2 = \mathrm{sigmoid}(a_0^{new} \odot a_1)$. Since the maximum index in the tree is 2, $h_t$ is set to $a_2$. • The controller RNN predicts 1 for the first element of the "Cell Index"; this means that we should set $c_t$ to the output of the tree at index 1 before the activation, i.e., $c_t = (W_3 \cdot x_t) \odot (W_4 \cdot h_{t-1})$. In the above example, the tree has two leaf nodes, so it is called a "base 2" architecture. In our experiments, we use a base number of 8 to make sure that the cell is expressive. # 4 EXPERIMENTS AND RESULTS We apply our method to an image classification task with CIFAR-10 and a language modeling task with Penn Treebank, two of the most benchmarked datasets in deep learning. On CIFAR-10, our goal is to find a good convolutional architecture, whereas on Penn Treebank our goal is to find a good recurrent cell. On each dataset, we have a separate held-out validation dataset to compute the reward signal. The reported performance on the test set is computed only once, for the network that achieves the best result on the held-out validation dataset. More details about our experimental procedures and results are as follows. 4.1 LEARNING CONVOLUTIONAL ARCHITECTURES FOR CIFAR-10 Dataset: In these experiments we use the CIFAR-10 dataset with data preprocessing and augmentation procedures that are in line with other previous results.
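To make the worked "base 2" example above concrete, here is a minimal sketch of the resulting cell, with assumed weight shapes; it follows the computation steps listed in the example and is not a cell actually found by the search.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16                                     # hidden size (assumed)
W1, W2, W3, W4 = (rng.normal(0, 0.1, (D, D)) for _ in range(4))

def relu(z):
    return np.maximum(z, 0.0)

def example_base2_cell(x_t, h_prev, c_prev):
    """The base-2 cell described in the worked example above."""
    a0 = np.tanh(W1 @ x_t + W2 @ h_prev)           # tree index 0: Add, Tanh
    a1_pre = (W3 @ x_t) * (W4 @ h_prev)            # node 1 output before its activation
    a1 = relu(a1_pre)                              # tree index 1: ElemMult, ReLU
    a0_new = relu(a0 + c_prev)                     # cell inject: Add, ReLU into node 0
    a2 = 1.0 / (1.0 + np.exp(-(a0_new * a1)))      # tree index 2: ElemMult, Sigmoid
    h_t = a2                                       # output of the highest tree index becomes h_t
    c_t = a1_pre                                   # cell index 1: node 1 output before activation
    return h_t, c_t

h, c = example_base2_cell(rng.normal(size=D), np.zeros(D), np.zeros(D))
```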
1611.01578#15
1611.01578#17
1611.01578
[ "1611.01462" ]
1611.01578#17
Neural Architecture Search with Reinforcement Learning
We first preprocess the data by whitening all the images. Additionally, we upsample each image then choose a random 32x32 crop of this upsampled image. Finally, we use random horizontal flips on this 32x32 cropped image. Search space: Our search space consists of convolutional architectures, with rectified linear units as non-linearities (Nair & Hinton, 2010), batch normalization (Ioffe & Szegedy, 2015) and skip connections between layers (Section 3.3). For every convolutional layer, the controller RNN has to select a filter height in [1, 3, 5, 7], a filter width in [1, 3, 5, 7], and a number of filters in [24, 36, 48,
1611.01578#16
1611.01578#18
1611.01578
[ "1611.01462" ]
1611.01578#18
Neural Architecture Search with Reinforcement Learning
64]. For strides, we perform two sets of experiments, one where we fix the strides to be 1, and one where we allow the controller to predict the strides in [1, 2, 3]. Training details: The controller RNN is a two-layer LSTM with 35 hidden units on each layer. It is trained with the ADAM optimizer (Kingma & Ba, 2015) with a learning rate of 0.0006. The weights of the controller are initialized uniformly between -0.08 and 0.08. For the distributed training, we set the number of parameter server shards S to 20, the number of controller replicas K to 100 and the number of child replicas m to 8, which means there are 800 networks being trained on 800 GPUs concurrently at any time. Once the controller RNN samples an architecture, a child model is constructed and trained for 50 epochs. The reward used for updating the controller is the maximum validation accuracy of the last 5 epochs cubed. The validation set has 5,000 examples randomly sampled from the training set; the remaining 45,000 examples are used for training. The settings for training the CIFAR-10 child models are the same as those used in Huang et al. (2016a). We use the Momentum Optimizer with a learning rate of 0.1, weight decay of 1e-4, momentum of 0.9 and Nesterov Momentum (Sutskever et al., 2013). During the training of the controller, we use a schedule of increasing number of layers in the child networks as training progresses. On CIFAR-10, we ask the controller to increase the depth by 2 for the child models every 1,600 samples, starting at 6 layers.
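Two small helpers restating the reward and the depth schedule described above; the function names and interfaces are ours.

```python
def controller_reward(val_accuracies):
    """Reward for one child model: max validation accuracy of its last 5 epochs, cubed."""
    return max(val_accuracies[-5:]) ** 3

def child_depth(num_sampled_architectures, start_depth=6, step=2, every=1600):
    """Depth schedule for child models during controller training on CIFAR-10."""
    return start_depth + step * (num_sampled_architectures // every)

assert child_depth(0) == 6 and child_depth(1600) == 8 and child_depth(3200) == 10
```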
1611.01578#17
1611.01578#19
1611.01578
[ "1611.01462" ]
1611.01578#19
Neural Architecture Search with Reinforcement Learning
Results: After the controller trains 12,800 architectures, we find the architecture that achieves the best validation accuracy. We then run a small grid search over learning rate, weight decay, batchnorm epsilon and what epoch to decay the learning rate. The best model from this grid search is then run until convergence, and we then compute the test accuracy of that model and summarize the results in Table 1. As can be seen from the table, Neural Architecture Search can design several promising architectures that perform as well as some of the best models on this dataset.
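A generic sketch of such a grid search; the grid values below are placeholders, since the paper does not list the exact candidates, and train_and_eval stands in for training the selected architecture with a given configuration.

```python
from itertools import product

# Hypothetical grid; the paper does not specify the exact values searched.
grid = {
    "learning_rate": [0.05, 0.1, 0.2],
    "weight_decay": [1e-4, 5e-4],
    "batchnorm_epsilon": [1e-5, 1e-3],
    "lr_decay_epoch": [100, 150],
}

def grid_search(train_and_eval):
    """train_and_eval(config) -> validation accuracy; returns the best configuration."""
    best_cfg, best_acc = None, -1.0
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        acc = train_and_eval(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```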
1611.01578#18
1611.01578#20
1611.01578
[ "1611.01462" ]
1611.01578#20
Neural Architecture Search with Reinforcement Learning
Model | Depth | Parameters | Error rate (%)
Network in Network (Lin et al., 2013) | - | - | 8.81
All-CNN (Springenberg et al., 2014) | - | - | 7.25
Deeply Supervised Net (Lee et al., 2015) | - | - | 7.97
Highway Network (Srivastava et al., 2015) | - | - | 7.72
Scalable Bayesian Optimization (Snoek et al., 2015) | - | - | 6.37
FractalNet (Larsson et al., 2016) | 21 | 38.6M | 5.22
FractalNet with Dropout/Drop-path | 21 | 38.6M | 4.60
ResNet (He et al., 2016a) | 110 | 1.7M | 6.61
ResNet (reported by Huang et al. (2016c)) | 110 | 1.7M | 6.41
ResNet with Stochastic Depth (Huang et al., 2016c) | 110 | 1.7M | 5.23
ResNet with Stochastic Depth (Huang et al., 2016c) | 1202 | 10.2M | 4.91
Wide ResNet (Zagoruyko & Komodakis, 2016) | 16 | 11.0M | 4.81
Wide ResNet (Zagoruyko & Komodakis, 2016) | 28 | 36.5M | 4.17
ResNet (pre-activation) (He et al., 2016b) | 164 | 1.7M | 5.46
ResNet (pre-activation) (He et al., 2016b) | 1001 | 10.2M | 4.62
DenseNet (L = 40, k = 12) Huang et al. (2016a) | 40 | 1.0M | 5.24
DenseNet (L = 100, k = 12) Huang et al. (2016a) | 100 | 7.0M | 4.10
DenseNet (L = 100, k = 24) Huang et al. (2016a) | 100 | 27.2M | 3.74
DenseNet-BC (L = 100, k = 40) Huang et al. (2016b) | 190 | 25.6M | 3.46
Neural Architecture Search v1 no stride or pooling | 15 | 4.2M | 5.50
Neural Architecture Search v2 predicting strides | 20 | 2.5M | 6.01
Neural Architecture Search v3 max pooling | 39 | 7.1M | 4.47
Neural Architecture Search v3 max pooling + more filters | 39 | 37.4M | 3.65
1611.01578#19
1611.01578#21
1611.01578
[ "1611.01462" ]
1611.01578#21
Neural Architecture Search with Reinforcement Learning
Table 1: Performance of Neural Architecture Search and other state-of-the-art models on CIFAR-10. First, if we ask the controller not to predict stride or pooling, it can design a 15-layer architecture that achieves a 5.50% error rate on the test set. This architecture has a good balance between accuracy and depth. In fact, it is the shallowest and perhaps the most inexpensive architecture among the top performing networks in this table. This architecture is shown in Appendix A, Figure 7. A notable feature of this architecture is that it has many rectangular filters and it prefers larger filters at the top layers. Like residual networks (He et al., 2016a), the architecture also has many one-step skip connections. This architecture is a local optimum in the sense that if we perturb it, its performance becomes worse. For example, if we densely connect all layers with skip connections, its performance becomes slightly worse: 5.56%. If we remove all skip connections, its performance drops to 7.97%. In the second set of experiments, we ask the controller to predict strides in addition to the other hyperparameters. As stated earlier, this is more challenging because the search space is larger.
1611.01578#20
1611.01578#22
1611.01578
[ "1611.01462" ]
1611.01578#22
Neural Architecture Search with Reinforcement Learning
In this case, it finds a 20-layer architecture that achieves a 6.01% error rate on the test set, which is not much worse than the first set of experiments. Finally, if we allow the controller to include 2 pooling layers at layer 13 and layer 24 of the architectures, the controller can design a 39-layer network that achieves 4.47%, which is very close to the best human-invented architecture that achieves 3.74%. To limit the search space complexity we have our model predict 13 layers where each layer prediction is a fully connected block of 3 layers. Additionally, we change the number of filters our model can predict from [24, 36, 48, 64] to [6, 12, 24, 36]. Our result can be improved to 3.65% by adding 40 more filters to each layer of our architecture. Additionally, this model with 40 filters added is 1.05x as fast as the DenseNet model that achieves 3.74%, while having better performance. The DenseNet model that achieves a 3.46% error rate (Huang et al., 2016b) uses 1x1 convolutions to reduce its total number of parameters, which we did not do, so it is not an exact comparison. 4.2 LEARNING RECURRENT CELLS FOR PENN TREEBANK Dataset: We apply Neural Architecture Search to the Penn Treebank dataset, a well-known benchmark for language modeling. On this task, LSTM architectures tend to excel (Zaremba et al., 2014; Gal, 2015), and improving them is difficult (Jozefowicz et al., 2015). As PTB is a small dataset, regularization methods are needed to avoid overfitting. First, we make use of the embedding dropout and recurrent dropout techniques proposed in Zaremba et al. (2014) and Gal (2015). We also try to combine them with the method of sharing Input and Output embeddings, e.g., Bengio et al. (2003); Mnih & Hinton (2007), especially Inan et al. (2016) and Press & Wolf (2016).
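The "shared embeddings" technique referenced above ties the input embedding matrix to the output softmax weights; a minimal sketch of the idea, with assumed vocabulary and embedding sizes (this is our illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 10000, 650
embedding = rng.normal(0, 0.05, (vocab, dim))    # input embedding matrix

def output_logits(h_t, tied=True, W_out=None):
    """With shared (tied) embeddings, the output softmax reuses the input embedding,
    roughly halving the embedding-related parameter count."""
    W = embedding if tied else W_out
    return h_t @ W.T
```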
1611.01578#21
1611.01578#23
1611.01578
[ "1611.01462" ]
1611.01578#23
Neural Architecture Search with Reinforcement Learning
Results with this method are marked with "shared embeddings." Search space: Following Section 3.4, our controller sequentially predicts a combination method and then an activation function for each node in the tree. For each node in the tree, the controller RNN needs to select a combination method in [add, elem_mult] and an activation method in [identity, tanh, sigmoid, relu]. The number of input pairs to the RNN cell is called the "base number" and is set to 8 in our experiments. When the base number is 8, the search space has approximately 6 × 10^16 architectures, which is much larger than 15,000, the number of architectures that we allow our controller to evaluate.
1611.01578#22
1611.01578#24
1611.01578
[ "1611.01462" ]
1611.01578#24
Neural Architecture Search with Reinforcement Learning
Training details: The controller and its training are almost identical to the CIFAR-10 experiments except for a few modifications: 1) the learning rate for the controller RNN is 0.0005, slightly smaller than that of the controller RNN in CIFAR-10, 2) in the distributed training, we set S to 20, K to 400 and m to 1, which means there are 400 networks being trained on 400 CPUs concurrently at any time, 3) during asynchronous training we only do parameter updates to the parameter server once 10 gradients from replicas have been accumulated. In our experiments, every child model is constructed and trained for 35 epochs. Every child model has two layers, with the number of hidden units adjusted so that the total number of learnable parameters approximately matches the "medium" baselines (Zaremba et al., 2014; Gal, 2015). In these experiments we only have the controller predict the RNN cell structure and fix all other hyperparameters. The reward function is computed from the validation perplexity of the child model. After the controller RNN is done training, we take the best RNN cell according to the lowest validation perplexity and then run a grid search over learning rate, weight initialization, dropout rates
1611.01578#23
1611.01578#25
1611.01578
[ "1611.01462" ]
1611.01578#25
Neural Architecture Search with Reinforcement Learning
mediumâ baselines (Zaremba et al., 2014; Gal, 2015). In these experi- ments we only have the controller predict the RNN cell structure and ï¬ x all other hyperparameters. The reward function is After the controller RNN is done training, we take the best RNN cell according to the lowest val- idation perplexity and then run a grid search over learning rate, weight initialization, dropout rates 8 # Under review as a conference paper at ICLR 2017
1611.01578#24
1611.01578#26
1611.01578
[ "1611.01462" ]
1611.01578#26
Neural Architecture Search with Reinforcement Learning
and decay epoch. The best cell found was then run with three different conï¬ gurations and sizes to increase its capacity. Results: In Table 2, we provide a comprehensive list of architectures and their performance on the PTB dataset. As can be seen from the table, the models found by Neural Architecture Search outperform other state-of-the-art models on this dataset, and one of our best models achieves a gain of almost 3.6 perplexity. Not only is our cell is better, the model that achieves 64 perplexity is also more than two times faster because the previous best network requires running a cell 10 times per time step (Zilly et al., 2016). Model Parameters Test Perplexity Mikolov & Zweig (2012) - KN-5 Mikolov & Zweig (2012) - KN5 + cache Mikolov & Zweig (2012) - RNN Mikolov & Zweig (2012) - RNN-LDA Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache Pascanu et al. (2013) - Deep RNN Cheng et al. (2014) - Sum-Prod Net Zaremba et al. (2014) - LSTM (medium) Zaremba et al. (2014) - LSTM (large) Gal (2015) - Variational LSTM (medium, untied) Gal (2015) - Variational LSTM (medium, untied, MC) Gal (2015) - Variational LSTM (large, untied) Gal (2015) - Variational LSTM (large, untied, MC) Kim et al. (2015) - CharCNN Press & Wolf (2016) - Variational LSTM, shared embeddings Merity et al. (2016) - Zoneout + Variational LSTM (medium) Merity et al. (2016) - Pointer Sentinel-LSTM (medium) Inan et al. (2016) - VD-LSTM + REAL (large) Zilly et al. (2016) - Variational RHN, shared embeddings 2Mâ
1611.01578#25
1611.01578#27
1611.01578
[ "1611.01462" ]
1611.01578#27
Neural Architecture Search with Reinforcement Learning
¡ 2Mâ ¡ 6Mâ ¡ 7Mâ ¡ 9Mâ ¡ 6M 5Mâ ¡ 20M 66M 20M 20M 66M 66M 19M 51M 20M 21M 51M 24M 141.2 125.7 124.7 113.7 92.0 107.5 100.0 82.7 78.4 79.7 78.6 75.2 73.4 78.9 73.2 80.6 70.9 68.5 66.0 Neural Architecture Search with base 8 Neural Architecture Search with base 8 and shared embeddings Neural Architecture Search with base 8 and shared embeddings 32M 25M 54M 67.9 64.0 62.4 Table 2: Single model perplexity on the test set of the Penn Treebank language modeling task. Parameter numbers with â ¡ are estimates with reference to Merity et al. (2016).
1611.01578#26
1611.01578#28
1611.01578
[ "1611.01462" ]
1611.01578#28
Neural Architecture Search with Reinforcement Learning
The newly discovered cell is visualized in Figure 8 in Appendix A. The visualization reveals that the new cell has many similarities to the LSTM cell in the ï¬ rst few steps, such as it likes to compute W1 â htâ 1 + W2 â xt several times and send them to different components in the cell. Transfer Learning Results: To understand whether the cell can generalize to a different task, we apply it to the character language modeling task on the same dataset. We use an experimental setup that is similar to Ha et al. (2016), but use variational dropout by Gal (2015). We also train our own LSTM with our setup to get a fair LSTM baseline. Models are trained for 80K steps and the best test set perplexity is taken according to the step where validation set perplexity is the best. The results on the test set of our method and state-of-art methods are reported in Table 3. The results on small settings with 5-6M parameters conï¬ rm that the new cell does indeed generalize, and is better than the LSTM cell. Additionally, we carry out a larger experiment where the model has 16.28M parameters. This model has a weight decay rate of 1e â 4, was trained for 600K steps (longer than the above models) and the test perplexity is taken where the validation set perplexity is highest. We use dropout rates of 0.2 and 0.5 as described in Gal (2015), but do not use embedding dropout. We use the ADAM optimizer with a learning rate of 0.001 and an input embedding size of 128. Our model had two layers with 800 hidden units. We used a minibatch size of 32 and BPTT length of 100. With this setting, our model achieves 1.214 perplexity, which is the new state-of-the-art result on this task. Finally, we also drop our cell into the GNMT framework (Wu et al., 2016), which was previously tuned for LSTM cells, and train an WMT14 English â German translation model. The GNMT 9
1611.01578#27
1611.01578#29
1611.01578
[ "1611.01462" ]
1611.01578#29
Neural Architecture Search with Reinforcement Learning
# Under review as a conference paper at ICLR 2017 RNN Cell Type Ha et al. (2016) - Layer Norm HyperLSTM Ha et al. (2016) - Layer Norm HyperLSTM Large Embeddings Ha et al. (2016) - 2-Layer Norm HyperLSTM 4.92M 5.06M 14.41M 1.250 1.233 1.219 Two layer LSTM Two Layer with New Cell Two Layer with New Cell 6.57M 6.57M 16.28M 1.243 1.228 1.214 Table 3: Comparison between our cell and state-of-art methods on PTB character modeling. The new cell was found on word level language modeling. network has 8 layers in the encoder, 8 layers in the decoder.
1611.01578#28
1611.01578#30
1611.01578
[ "1611.01462" ]
1611.01578#30
Neural Architecture Search with Reinforcement Learning
The ï¬ rst layer of the encoder has bidirectional connections. The attention module is a neural network with 1 hidden layer. When a LSTM cell is used, the number of hidden units in each layer is 1024. The model is trained in a distributed setting with a parameter sever and 12 workers. Additionally, each worker uses 8 GPUs and a minibatch of 128. We use Adam with a learning rate of 0.0002 in the ï¬ rst 60K training steps, and SGD with a learning rate of 0.5 until 400K steps. After that the learning rate is annealed by dividing by 2 after every 100K steps until it reaches 0.1. Training is stopped at 800K steps.
1611.01578#29
1611.01578#31
1611.01578
[ "1611.01462" ]
1611.01578#31
Neural Architecture Search with Reinforcement Learning
More details can be found in Wu et al. (2016). In our experiment with the new cell, we make no change to the above settings except for dropping in the new cell and adjusting the hyperparameters so that the new model should have the same compu- tational complexity with the base model. The result shows that our cell, with the same computational complexity, achieves an improvement of 0.5 test set BLEU than the default LSTM cell. Though this improvement is not huge, the fact that the new cell can be used without any tuning on the existing GNMT framework is encouraging. We expect further tuning can help our cell perform better. Control Experiment 1 â Adding more functions in the search space: To test the robustness of Neural Architecture Search, we add max to the list of combination functions and sin to the list of activation functions and rerun our experiments. The results show that even with a bigger search space, the model can achieve somewhat comparable performance. The best architecture with max and sin is shown in Figure 8 in Appendix A. Control Experiment 2 â Comparison against Random Search: Instead of policy gradient, one can use random search to ï¬
1611.01578#30
1611.01578#32
1611.01578
[ "1611.01462" ]
1611.01578#32
Neural Architecture Search with Reinforcement Learning
nd the best network. Although this baseline seems simple, it is often very hard to surpass (Bergstra & Bengio, 2012). We report the perplexity improvements using policy gradient against random search as training progresses in Figure 6. The results show that not only the best model using policy gradient is better than the best model using random search, but also the average of top models is also much better. @â * Top_1_unique_models as||â * Top_5_unique_models e* Top_15_unique_models Perplexity Improvement 0 5000 70000 T5000 20000 725000 Iteration Figure 6: Improvement of Neural Architecture Search over random search over time. We plot the difference between the average of the top k models our controller ï¬ nds vs. random search every 400 models run. 10
1611.01578#31
1611.01578#33
1611.01578
[ "1611.01462" ]
1611.01578#33
Neural Architecture Search with Reinforcement Learning
# Under review as a conference paper at ICLR 2017 # 5 CONCLUSION In this paper we introduce Neural Architecture Search, an idea of using a recurrent neural network to compose neural network architectures. By using recurrent network as the controller, our method is ï¬ exible so that it can search variable-length architecture space. Our method has strong empirical per- formance on very challenging benchmarks and presents a new research direction for automatically ï¬ nding good neural network architectures. The code for running the models found by the controller on CIFAR-10 and PTB will be released at https://github.com/tensorï¬ ow/models . Additionally, we have added the RNN cell found using our method under the name NASCell into TensorFlow, so others can easily use it. ACKNOWLEDGMENTS We thank Greg Corrado, Jeff Dean, David Ha, Lukasz Kaiser and the Google Brain team for their help with the project. # REFERENCES Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein.
1611.01578#32
1611.01578#34
1611.01578
[ "1611.01462" ]
1611.01578#34
Neural Architecture Search with Reinforcement Learning
Learning to compose neural networks for question answering. In NAACL, 2016. Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. JMLR, 2003. James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. JMLR, 2012. James Bergstra, R´emi Bardenet, Yoshua Bengio, and Bal´azs K´egl. Algorithms for hyper-parameter optimization. In NIPS, 2011. James Bergstra, Daniel Yamins, and David D Cox. Making a science of model search: Hyperpa- rameter optimization in hundreds of dimensions for vision architectures. ICML, 2013.
1611.01578#33
1611.01578#35
1611.01578
[ "1611.01462" ]
1611.01578#35
Neural Architecture Search with Reinforcement Learning
Alan W. Biermann. The inference of regular LISP programs from examples. IEEE transactions on Systems, Man, and Cybernetics, 1978. Wei-Chen Cheng, Stanley Kok, Hoai Vu Pham, Hai Leong Chieu, and Kian Ming Adam Chai. Language modeling with sum-product networks. In INTERSPEECH, 2014. Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005. Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, et al. Large scale distributed deep networks. In NIPS, 2012.
1611.01578#34
1611.01578#36
1611.01578
[ "1611.01462" ]
1611.01578#36
Neural Architecture Search with Reinforcement Learning
Dario Floreano, Peter D¨urr, and Claudio Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 2008. Yarin Gal. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287, 2015. David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In CVPR, 2016a. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016b.
1611.01578#35
1611.01578#37
1611.01578
[ "1611.01462" ]
1611.01578#37
Neural Architecture Search with Reinforcement Learning
11 # Under review as a conference paper at ICLR 2017 Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012. Sepp Hochreiter and Juergen Schmidhuber. Long short-term memory. Neural Computation, 1997.
1611.01578#36
1611.01578#38
1611.01578
[ "1611.01462" ]
1611.01578#38
Neural Architecture Search with Reinforcement Learning
Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a. Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016b. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochas- tic depth. arXiv preprint arXiv:1603.09382, 2016c. Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classiï¬ ers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462, 2016. Sergey Ioffe and Christian Szegedy.
1611.01578#37
1611.01578#39
1611.01578
[ "1611.01462" ]
1611.01578#39
Neural Architecture Search with Reinforcement Learning
Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. Kevin Jarrett, Koray Kavukcuoglu, Yann Lecun, et al. What is the best multi-stage architecture for object recognition? In ICCV, 2009. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In ICML, 2015. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M.
1611.01578#38
1611.01578#40
1611.01578
[ "1611.01462" ]
1611.01578#40
Neural Architecture Search with Reinforcement Learning
Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classiï¬ cation with deep convo- lutional neural networks. In NIPS, 2012. Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum.
1611.01578#39
1611.01578#41
1611.01578
[ "1611.01462" ]
1611.01578#41
Neural Architecture Search with Reinforcement Learning
Human-level concept learning through probabilistic program induction. Science, 2015. Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural net- works without residuals. arXiv preprint arXiv:1605.07648, 2016. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998. Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply- supervised nets. In AISTATS, 2015. Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016. Percy Liang, Michael I. Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian ap- proach. In ICML, 2010. Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2013.
1611.01578#40
1611.01578#42
1611.01578
[ "1611.01462" ]