# Resiliency of Deep Neural Networks under Quantization

Specifically, by varying the size, we find the number of total parameters of the floating-point network that shows the same accuracy as the quantized one. After that, the effective uncompressed size can be computed by multiplying 32 bits by the effective number of parameters. Once we have the effective uncompressed size corresponding to a specific network size and number of quantization bits, the ECR can be computed by (6). The ECRs of the direct and retrain-based quantization for various network sizes and quantization bits are shown in Figure 9. For direct quantization, 5-bit quantization shows the best ECR except for the layer size of 1024. On the other hand, even 2-bit quantization performs better than the others after retraining. That is, after retraining, a bigger network with extreme ternary (2-bit) quantization is more efficient in terms of the memory usage for weights than any smaller network with higher quantization bits when they are compared at the same accuracy.
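Equation (6) itself is not reproduced in this excerpt; the minimal sketch below therefore assumes the natural reading of the text, namely that the effective compression ratio is the effective uncompressed size divided by the size of the quantized network. The function name and the example numbers are hypothetical.

```python
import numpy as np

def effective_compression_ratio(n_eff_params, n_params, n_bits):
    """Effective compression ratio (ECR), under the assumption that
    ECR = (effective uncompressed size) / (quantized size).
    The effective uncompressed size is 32 bits per *effective* parameter
    (the floating-point network matching the quantized network's accuracy);
    the quantized size is n_bits per actual parameter."""
    effective_uncompressed_bits = 32 * n_eff_params
    quantized_bits = n_bits * n_params
    return effective_uncompressed_bits / quantized_bits

# Hypothetical numbers: a 1M-parameter ternary (2-bit) network that matches
# the accuracy of a 400k-parameter floating-point network.
print(effective_compression_ratio(n_eff_params=4e5, n_params=1e6, n_bits=2))  # 6.4
```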
# 6 DISCUSSION

In this study, we control the network size by changing the number of units in the hidden layers, the number of feature maps, or the number of layers. In any case, reduced complexity lowers the resiliency to quantization. We are now conducting similar experiments on recurrent neural networks, which are known to be more sensitive to quantization (Shin et al., 2015).

This work seems to be directly related to several network optimization methods, such as pruning, fault tolerance, and decomposition (Yu et al., 2012b; Han et al., 2015; Xue et al., 2013; Rigamonti et al., 2013). In pruning, retraining of the weights is conducted after zeroing small-valued weights. The efficiency of pruning, fault tolerance, and network decomposition would likewise depend on the redundant representation capability of DNNs.

This study can be applied to hardware-efficient DNN design. For designs with limited hardware resources, when the size of the reference DNN is relatively small, it is advised to employ very low-precision arithmetic and, instead, increase the network complexity as much as the hardware capacity allows. But when the DNNs are in the performance saturation region, this strategy does not always gain much, because growing the "already-big" network size brings almost no performance advantage. This can be observed in Figure 7b and Figure 9b, where 6-bit quantization performed best at the largest layer size (1,024).

# 7 CONCLUSION

We analyze the performance of fixed-point deep neural networks, an FFDNN for phoneme recognition and a CNN for image classification, while not only changing the arithmetic precision but also varying their network complexity. The low-precision networks for this analysis are obtained using the retrain-based quantization method, and the network complexity is controlled by changing the configurations of the hidden layers or feature maps. The performance gap between the floating-point and the fixed-point neural networks with ternary weights (+1, 0, -1) almost vanishes when the DNNs are in the performance saturation region for the given training data. However, when the complexity of the DNNs is reduced, by lowering the number of units, feature maps, or hidden layers, the performance gap between them increases. In other words, a large network that may contain redundant representation capability for the given training data is not hurt by the lowered precision, but a very compact network is.
# ACKNOWLEDGMENTS

This work was supported in part by the Brain Korea 21 Plus Project and the National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIP) (No. 2015R1A2A1A10056051).

# REFERENCES

Anwar, Sajid, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point optimization of deep convolutional neural networks for object recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1131-1135. IEEE, 2015.

Chen, Chenyi, Seff, Ari, Kornhauser, Alain, and Xiao, Jianxiong. DeepDriving: Learning affordance for direct perception in autonomous driving. arXiv preprint arXiv:1505.00256, 2015.

Corradini, Maria Letizia, Giantomassi, Andrea, Ippoliti, Gianluca, Longhi, Sauro, and Orlando, Giuseppe. Robust control of robot arms via quasi sliding modes and neural networks. In Advances and Applications in Sliding Mode Control Systems, pp. 79-105. Springer, 2015.

Courbariaux, Matthieu, Bengio, Yoshua, and David, Jean-Pierre. BinaryConnect: Training deep neural networks with binary weights during propagations. arXiv preprint arXiv:1511.00363, 2015.

Fiesler, Emile, Choudry, Amar, and Caulfield, H John. Weight discretization paradigm for optical neural networks. In The Hague '90, 12-16 April, pp. 164-173. International Society for Optics and Photonics, 1990.

Han, Song, Mao, Huizi, and Dally, William J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. 2015.

Holt, Jordan L and Baker, Thomas E. Back propagation simulations using limited precision calculations. In Neural Networks, 1991. IJCNN-91-Seattle International Joint Conference on, volume 2, pp. 121-126. IEEE, 1991.

Hussain, B Zahir M et al. Short word-length LMS filtering. In Signal Processing and Its Applications, 2007. ISSPA 2007. 9th International Symposium on, pp. 1-4. IEEE, 2007.

Hwang, Kyuyeon and Sung, Wonyong. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pp. 1-6. IEEE, 2014.

Jalab, Hamid A, Omer, Herman, et al. Human computer interface using hand gesture recognition based on neural network. In Information Technology: Towards New Smart World (NSITNSW), 2015 5th National Symposium on, pp. 1-6. IEEE, 2015.

Kim, Jonghong, Hwang, Kyuyeon, and Sung, Wonyong. X1000 real-time phoneme recognition VLSI using feed-forward deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 7510-7514. IEEE, 2014.

Krizhevsky, A. CUDA-convnet, 2014.

Moerland, Perry and Fiesler, Emile. Neural network adaptations to hardware implementations. Technical report, IDIAP, 1997.

Ovtcharov, Kalin, Ruwase, Olatunji, Kim, Joo-Young, Fowers, Jeremy, Strauss, Karin, and Chung, Eric S. Accelerating deep convolutional neural networks using specialized hardware. Microsoft Research Whitepaper, 2, 2015.

Rigamonti, Roberto, Sironi, Amos, Lepetit, Vincent, and Fua, Pascal. Learning separable filters. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 2754-2761. IEEE, 2013.

Sak, Haşim, Senior, Andrew, Rao, Kanishka, and Beaufays, Françoise. Fast and accurate recurrent neural network acoustic models for speech recognition. arXiv preprint arXiv:1507.06947, 2015.

Shin, Sungho, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point performance analysis of recurrent neural networks. arXiv preprint arXiv:1512.01322, 2015.

Sung, Wonyong and Kum, Ki-Il. Simulation-based word-length optimization method for fixed-point digital signal processing systems. Signal Processing, IEEE Transactions on, 43(12):3087-3090, 1995.

Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.

Xue, Jian, Li, Jinyu, and Gong, Yifan. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pp. 2365-2369, 2013.

Yu, Dong, Deng, Li, Acero, Alex, Dahl, George, Seide, Frank, and Li, Gang. More data + deeper model = better accuracy. In keynote at International Workshop on Statistical Machine Learning for Speech Processing, 2012a.

Yu, Dong, Seide, Frank, Li, Gang, and Deng, Li. Exploiting sparseness in deep neural networks for large vocabulary speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pp. 4409-4412. IEEE, 2012b.
# CONDITIONAL COMPUTATION IN NEURAL NETWORKS FOR FASTER MODELS

Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau & Doina Precup
School of Computer Science, McGill University, Montreal, Canada
{ebengi,pbacon,jpineau,dprecup}@cs.mcgill.ca

# ABSTRACT

Deep learning has become the state-of-the-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. In this paper, we use reinforcement learning as a tool to optimize conditional computation policies.
More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting to have parsimonious activations while maintaining prediction accuracy. We apply a policy gradient algorithm for learning policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.

Keywords: Neural Networks, Conditional Computing, REINFORCE
# 1 INTRODUCTION

Large-scale neural networks, and in particular deep learning architectures, have seen a surge in popularity in recent years, due to their impressive empirical performance in complex supervised learning tasks, including state-of-the-art performance in image and speech recognition (He et al., 2015). Yet the task of training such networks remains a challenging optimization problem. Several related problems arise: very long training time (several weeks on modern computers, for some problems), potential for over-fitting (whereby the learned function is too specific to the training data and generalizes poorly to unseen data), and more technically, the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994), whereby the gradient information gets increasingly diffuse as it propagates from layer to layer.

Recent approaches (Bengio et al., 2013; Davis & Arel, 2013) have proposed the use of conditional computation in order to address this problem. Conditional computation refers to activating only some of the units in a network, in an input-dependent fashion.
For example, if we think we're looking at a car, we only need to compute the activations of the vehicle-detecting units, not of all features that a network could possibly compute. The immediate effect of activating fewer units is that propagating information through the network will be faster, both at training as well as at test time. However, one needs to be able to decide in an intelligent fashion which units to turn on and off, depending on the input data. This is typically achieved with some form of gating structure, learned in parallel with the original network.

A secondary effect of conditional computation is that during training, information will be propagated along fewer links. Intuitively, this allows sharper gradients on the links that do get activated. Moreover, because only parts of the network are active, and fewer parameters are used in the computation,
the net effect can be viewed as a form of regularization of the main network, as the approximator has to use only a small fraction of the possible parameters in order to produce an action.

In this paper, we explore the formulation of conditional computation using reinforcement learning. We propose to learn input-dependent activation probabilities for every node (or blocks of nodes), while trying to jointly minimize the prediction errors at the output and the number of participating nodes at every layer, thus reducing the computational load. One can also think of our method as being related to standard dropout, which has been used as a tool to both regularize and speed up the computation. However, we emphasize that dropout is in fact a form of "unconditional" computation, in which the computation paths are data-independent.
Therefore, usual dropout is less likely to lead to specialized computation paths within a network.

We present the problem formulation, and our solution to the proposed optimization problem, using policy search methods (Deisenroth et al., 2013). Preliminary results are included for standard classification benchmarks.

# 2 PROBLEM FORMULATION

Our model consists of a typical fully-connected neural network model, joined with stochastic per-layer policies that activate or deactivate nodes of the neural network in an input-dependent manner, both at train and test time. The exact algorithm is detailed in appendix A.

We cast the problem of learning the input-dependent activation probabilities at each layer in the framework of Markov Decision Processes (MDP) (Puterman, 1994). We define a discrete-time, continuous-state and discrete-action MDP $(\mathcal{S}, \mathcal{U}, P(\cdot \mid s, u), C)$. An action $u \in \{0,1\}^k$ in this model consists in the application of a mask over the units of a given layer. We define the state space of the MDP over the vector-valued activations $s \in \mathbb{R}^k$ of all nodes at the previous layer. The cost $C$ is the loss of the neural network architecture (in our case the negative log-likelihood). This MDP is single-step: an input is seen, an action is taken, a reward is observed and we are at the end state.

Similarly to the way dropout is described (Hinton et al., 2012), each node or block in a given layer has an associated Bernoulli distribution which determines its probability of being activated. We train a different policy for each layer $l$, and parameterize it (separately from the neural network) such that it is input-dependent. For every layer $l$ of $k$ units, we define a policy as a $k$-dimensional Bernoulli distribution:

$$\pi^l(\mathbf{u} \mid \mathbf{s}) = \prod_{i=1}^{k} \sigma_i^{u_i} (1 - \sigma_i)^{1 - u_i}, \qquad \sigma_i = [\mathrm{sigm}(Z \mathbf{s} + d)]_i, \tag{1}$$

where $\sigma_i$ denotes the participation probability, to be computed from the activations $\mathbf{s}$ of the layer below and the parameters $\theta_l = \{Z^{(l)}, d^{(l)}\}$. We denote the sigmoid function by $\mathrm{sigm}$, the weight matrix by $Z$, and the bias vector by $d$.
The output of a typical hidden layer $h(x)$ that uses this policy is multiplied element-wise with the mask $\mathbf{u}$ sampled from the probabilities $\sigma$, and becomes $(h(x) \odot \mathbf{u})$. For clarity we did not superscript $\mathbf{u}$, $\mathbf{s}$ and $\sigma_i$ with $l$, but each layer has its own.
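As a concrete illustration of eq. (1) and the masking, here is a minimal numpy sketch (our own, not the paper's code; all names and sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

k, d_in = 8, 16                     # units in this layer, size of the layer below
Z = rng.normal(0, 0.1, (k, d_in))   # policy weight matrix
d = np.zeros(k)                     # policy bias

s = rng.normal(size=d_in)           # activations of the layer below (the MDP state)
sigma = sigm(Z @ s + d)             # participation probabilities, eq. (1)
u = (rng.random(k) < sigma).astype(float)  # Bernoulli mask, u ~ Ber(sigma)

h = np.tanh(rng.normal(size=k))     # stand-in for the layer's output h(x)
h_masked = h * u                    # element-wise masking, h(x) ⊙ u
```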
# 3 LEARNING SIGMOID-BERNOULLI POLICIES

We use REINFORCE (Williams, 1992) (detailed in appendix B) to learn the parameters $\Theta_\pi = \{\theta_1, ..., \theta_L\}$ of the sigmoid-Bernoulli policies. Since the nature of the observation space changes at each decision step, we learn $L$ disjoint policies (one for each layer $l$ of the deep network). As a consequence, the summation in the policy gradient disappears and becomes:

$$\nabla_{\theta_l} J = \mathbb{E}\{C(\mathbf{x}) \nabla_{\theta_l} \log \pi^l(\mathbf{u} \mid \mathbf{s})\} \tag{2}$$

since $\theta_l = \{Z^{(l)}, d^{(l)}\}$ only appears in the $l$-th decision stage and the gradient is zero otherwise.

Estimating (2) from samples requires propagating through many instances at a time, which we achieve through mini-batches of size $m_b$. Under the mini-batch setting, $\mathbf{s}^{(l)}$ becomes a matrix and $\pi(\cdot \mid \cdot)$ a vector of dimension $m_b$. Taking the gradient of the parameters with respect to the log action probabilities can then be seen as forming a Jacobian. We can thus re-write the empirical average in matrix form:

$$\nabla_{\theta_l} J \approx \frac{1}{m_b} \sum_{i=1}^{m_b} C(\mathbf{x}_i) \nabla_{\theta_l} \log \pi^l(\mathbf{u}_i^{(l)} \mid \mathbf{s}_i^{(l)}) = \frac{1}{m_b} \mathbf{c}^\top \nabla_{\theta_l} \log \pi^l(U^{(l)} \mid S^{(l)}) \tag{3}$$
where $C(\mathbf{x}_i)$ is the total cost for input $\mathbf{x}_i$ and $m_b$ is the number of examples in the mini-batch. The term $\mathbf{c}^\top$ denotes the row vector containing the total costs for every example in the mini-batch.

# 3.1 FAST VECTOR-JACOBIAN MULTIPLICATION

While Eqn (3) suggests that the Jacobian might have to be formed explicitly, Pearlmutter (1994) showed that computing a differential derivative suffices to compute left or right vector-Jacobian (or Hessian) multiplication.
The same trick has also recently been revived with the class of so-called "Hessian-free" (Martens, 2010) methods for artificial neural networks. Using the notation of Pearlmutter (1994), we write $\mathcal{R}_{\theta_l}\{\cdot\} = \mathbf{c}^\top \nabla_{\theta_l}$ for the differential operator:

$$\nabla_{\theta_l} J \approx \frac{1}{m_b} \mathcal{R}_{\theta_l}\{\log \pi^l(U^{(l)} \mid S^{(l)})\} \tag{4}$$
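To make this concrete, here is a minimal numpy sketch (ours, not the paper's; function names are hypothetical) of the minibatch gradient (3) for one layer's sigmoid-Bernoulli policy. It never forms the Jacobian: for a Bernoulli mask with probabilities $\sigma = \mathrm{sigm}(z)$, the score simplifies to $\partial \log \pi / \partial z = u - \sigma$.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def reinforce_grads(S, U, c, Z, d):
    """Minibatch REINFORCE gradient of eq. (3) for one layer's
    sigmoid-Bernoulli policy, without forming the Jacobian explicitly.
    S: (mb, d_in) states, U: (mb, k) sampled masks, c: (mb,) costs."""
    mb = S.shape[0]
    sigma = sigm(S @ Z.T + d)             # (mb, k) probabilities
    dlogpi_dz = U - sigma                 # per-example score, u - sigm(z)
    weighted = c[:, None] * dlogpi_dz     # cost-weighted scores
    grad_Z = weighted.T @ S / mb          # (k, d_in)
    grad_d = weighted.mean(axis=0)        # (k,)
    return grad_Z, grad_d

# Hypothetical shapes: 32 examples, 10 policy units, 20 input features.
rng = np.random.default_rng(1)
S, U = rng.normal(size=(32, 20)), rng.integers(0, 2, (32, 10)).astype(float)
gZ, gd = reinforce_grads(S, U, rng.random(32), rng.normal(size=(10, 20)), np.zeros(10))
```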
# 3.2 SPARSITY AND VARIANCE REGULARIZATIONS

In order to favour activation policies with sparse actions, we add two penalty terms $L_b$ and $L_e$ that depend on some target sparsity rate $\tau$. The first term pushes the policy distribution $\pi$ to activate each unit with probability $\tau$ in expectation over the data. The second term pushes the policy distribution to have the desired sparsity of activations for each example. Thus, for a low $\tau$, a valid configuration would be to learn a few high-probability activations for some part of the data and low-probability activations for the rest of the data, which results in having activation probability $\tau$ in expectation:

$$L_b = \sum_{j=1}^{n} \left\| \mathbb{E}\{\sigma_j\} - \tau \right\|_2, \qquad L_e = \mathbb{E}\left\{ \left\| \frac{1}{n} \sum_{j=1}^{n} \sigma_j - \tau \right\|_2 \right\} \tag{5}$$
Since we are in a minibatch setting, these expectations can be approximated over the minibatch:

$$L_b \approx \sum_{j=1}^{n} \left\| \frac{1}{m_b} \sum_{i=1}^{m_b} \sigma_{ij} - \tau \right\|_2, \qquad L_e \approx \frac{1}{m_b} \sum_{i=1}^{m_b} \left\| \frac{1}{n} \sum_{j=1}^{n} \sigma_{ij} - \tau \right\|_2 \tag{6}$$

We finally add a third term, $L_v$, in order to favour the aforementioned configurations, where units only have a high probability of activation for certain examples, and low for the rest. We aim to maximize the variance of activations of each unit across the data. This encourages units' activations to be varied, and while similar in spirit to the $L_b$ term, this term explicitly discourages learning a uniform distribution:

$$L_v = -\sum_{j} \mathrm{var}_i\{\sigma_{ij}\} \approx -\sum_{j} \frac{1}{m_b} \sum_{i=1}^{m_b} \left( \sigma_{ij} - \frac{1}{m_b} \sum_{i'=1}^{m_b} \sigma_{i'j} \right)^2 \tag{7}$$
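The three penalties are straightforward to compute over a minibatch. A minimal numpy sketch (ours; the function name is hypothetical, and the $\|\cdot\|_2$ of a scalar reduces to an absolute value here):

```python
import numpy as np

def sparsity_penalties(sigma, tau):
    """Minibatch approximations of eqs. (6) and (7).
    sigma: (mb, n) participation probabilities for one layer,
    tau: target sparsity rate. Returns (L_b, L_e, L_v)."""
    per_unit_mean = sigma.mean(axis=0)           # E{sigma_j} over the batch
    L_b = np.sum(np.abs(per_unit_mean - tau))    # sum over units
    per_example_mean = sigma.mean(axis=1)        # (1/n) sum_j sigma_ij
    L_e = np.mean(np.abs(per_example_mean - tau))
    L_v = -np.sum(sigma.var(axis=0))             # negative variance across the data
    return L_b, L_e, L_v

sigma = np.random.default_rng(0).random((64, 16))  # fake probabilities
print(sparsity_penalties(sigma, tau=1 / 16))
```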
# 3.3 ALGORITHM

We interleave the learning of the network parameters $\Theta_{NN}$ and the learning of the policy parameters $\Theta_\pi$. We first update the network and policy parameters to minimize the following regularized loss function via backpropagation (Rumelhart et al., 1988):

$$\mathcal{L} = -\log P(Y \mid X, \Theta_{NN}) + \lambda_s (L_b + L_e) + \lambda_v L_v + \lambda_{L2} \|\Theta_{NN}\|^2 + \lambda_{L2} \|\Theta_\pi\|^2$$

where $\lambda_s$ can be understood as a trade-off parameter between prediction accuracy and parsimony of computation (obtained through sparse node activation), and $\lambda_v$ as a trade-off parameter between a stochastic policy and a more input-dependent saturated policy. We then minimize the cost function $C$ with a REINFORCE-style approach to update the policy parameters (Williams, 1992):
$$C = -\log P(Y \mid X, \Theta_{NN})$$

As previously mentioned, we use minibatch stochastic gradient descent as well as minibatch policy gradient updates. A detailed algorithm is available in appendix A.

# 3.4 BLOCK ACTIVATION POLICY

To achieve computational gain, instead of activating single units in hidden layers, we activate contiguous (equally-sized) groups of units together (independently for each example in the minibatch), thus reducing the action space as well as the number of probabilities to compute and sample. As such, there are two potential speedups. First, the policy is much smaller and faster to compute. Second, it offers a computational advantage in the computation of the hidden layers themselves, since we are now performing a matrix multiplication of the following form:
$$((H \odot M_H) W) \odot M_O$$

where $M_H$ and $M_O$ are binary mask matrices. $M_O$ is obtained for each layer from the sampling of the policy as described in eq. 1: each sampled action (0 or 1) is repeated so as to span the corresponding block. $M_H$ is simply the mask of the previous layer. With 3 blocks of size 2, rows of such a mask resemble this (each sampled bit repeated twice):

    0 0 1 1 0 0
    1 1 0 0 1 1
    ...
    1 1 1 1 0 0

This allows us to quickly perform matrix multiplication by only considering the non-zero output elements as well as the non-zero elements in $H \odot M_H$.
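A minimal numpy illustration of this masked product (ours, using dense masks for clarity; the paper's specialized implementation instead skips the zero blocks entirely):

```python
import numpy as np

rng = np.random.default_rng(0)
mb, n_in, n_out, bsize = 4, 6, 6, 2   # 3 blocks of size 2 on each side

# Per-block Bernoulli samples from the policy, repeated to span the blocks.
M_O = np.repeat(rng.random((mb, n_out // bsize)) < 0.5, bsize, axis=1).astype(float)
M_H = np.repeat(rng.random((mb, n_in // bsize)) < 0.5, bsize, axis=1).astype(float)

H = rng.normal(size=(mb, n_in))
W = rng.normal(size=(n_in, n_out))

out = ((H * M_H) @ W) * M_O           # ((H ⊙ M_H) W) ⊙ M_O
```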
# 4 EXPERIMENTS

# 4.1 MODEL IMPLEMENTATION

The proposed model was implemented within Theano (Bergstra et al., 2010), a standard library for deep learning and neural networks. In addition to using optimizations offered by Theano, we also implemented specialized matrix multiplication code for the operation exposed in section 3.4. A straightforward and fairly naive CPU implementation of this operation yielded speedups of up to 5-10x, while an equally naive GPU implementation yielded speedups of up to 2-4x, both for sparsity rates of under 20% and acceptable matrix and block sizes.¹

We otherwise use fairly standard methods for our neural network. The weight matrices are initialized using the heuristic of Glorot & Bengio (2010). We use a constant learning rate throughout minibatch SGD. We also use early stopping (Bishop, 2006) to avoid overfitting. We only use fully-connected layers with tanh activations (ReLU activations offer similar performance).

# 4.2 MODEL EVALUATION

We first evaluate the performance of our model on the MNIST digit dataset. We use a single hidden layer of 16 blocks of 16 units (256 units total), with a target sparsity rate of $\tau = 6.25\% = 1/16$, learning rates of $10^{-3}$ for the neural network and $5 \times 10^{-5}$ for the policy, $\lambda_v = \lambda_s = 200$ and $\lambda_{L2} = 0.005$. Under these conditions, a test error of around 2.3% was achieved. A normal neural network with the same number of hidden units achieves a test error of around 1.9%, while a normal neural network with a similar amount of computation (multiply-adds) being made (32 hidden units) achieves a test error of around 2.8%.

Looking at the activations of the policy (Figure 1c), we see that it tends towards what was hypothesized in section 3.2, i.e. examples activate most units with low probability and some units with high probability. We can also observe that the policy is input-dependent in Figures 1a and 1b, since we see different activation patterns for inputs of class "0" and inputs of class "1".
Since the computation performed in our model is sparse, one could hope that it achieves this performance with less computation time, yet we consistently observe that models that deal with MNIST are too small to allow our specialized (3.4) sparse implementation to make a substantial difference. We include this result to highlight conditions under which it is less desirable to use our model.

¹Implementations used in this paper are available at http://github.com/bengioe/condnet/

Figure 1: MNIST. (a,b,c): probability distribution of the policy; each example's probability (y axis) of activating each unit (x axis) is plotted as a transparent red dot. Redder regions represent more examples falling in the probability region. Plot (a) is for class "0", (b) for class "1", (c) for all classes. (d): weight matrix of the policy.

| model | test error | τ | #blocks | block size | test time | speedup |
|---|---|---|---|---|---|---|
| condnet | 0.511 | 1/24 | 24,24 | 64 | 6.8s (26.2s) | 3.8× |
| condnet | 0.514 | 1/16 | 16,32 | 16 | 1.4s (8.2s) | 5.7× |
| condnet | 0.497 | 1/16 | 10,10 | 64 | 2.0s (10.4s) | 5.3× |
| bdNN | 0.629 | 0.17 | 10,10 | 64 | 1.93s (10.3s) | 5.3× |
| bdNN | 0.590 | 0.2 | 10,10 | 64 | 2.8s (10.3s) | 3.5× |
| NN | 0.560 | - | 64,64 | 1 | 1.23s | - |
| NN | 0.546 | - | 128,128 | 1 | 2.31s | - |
| NN | 0.497 | - | 480,480 | 1 | 8.34s | - |

Figure 2: CIFAR-10. condnet: our approach; NN: neural network without the conditional activations; bdNN: block dropout neural network using a uniform policy.
"speedup" is how many times faster the forward pass is when using a specialized implementation (3.4). "test time" is the time required to do a full pass over the test dataset using the implementation, on a CPU, running on a single core; in parentheses is the time without the optimization.

Next, we consider the performance of our model on the CIFAR-10 (Krizhevsky & Hinton, 2009) image dataset. A brief hyperparameter search was made, and a few of the best models are shown in Figure 2.
These results show that it is possible to achieve similar performance with our model (denoted condnet) as with a normal neural network (denoted NN), yet using sensibly reduced computation time. A few things are worth noting: we can set $\tau$ to be lower than 1 over the number of blocks, since the model learns a policy that is actually not as sparse as $\tau$, mostly because REINFORCE pulls the policy towards higher probabilities on average. For example, our best-performing model has a target of 1/16 but learns policies that average an 18% sparsity rate (we used $\lambda_v = \lambda_s = 20$, except for the first layer $\lambda_v = 40$; we used $\lambda_{L2} = 0.01$, and the learning rates were 0.001 for the neural net, $10^{-5}$ and $5 \times 10^{-4}$ for the first and second policy layers respectively).
The neural networks without conditional activations are trained with L2 regularization as well as regular unit-wise dropout. We also train networks with the same architecture as our models, using blocks, but with a uniform policy (as in original dropout) instead of a learned conditional one. This model (denoted bdNN) does not perform as well as our model, showing that the dropout noise by itself is not sufficient, and that learning a policy is required to take full benefit of this architecture.
Figure 3: SVHN; each point is an experiment (y axis: valid error (%), x axis: time of validation (sec)). The x axis is the time required to do a full pass over the valid dataset (log scale, lower is better). Note that we plot the full hyperparameter exploration results, which is why condnet results are so varied.

| model | test error | τ | #blocks | block size | test time | speedup |
|---|---|---|---|---|---|---|
| condnet | 0.183 | 1/11 | 13,8 | 16 | 1.5s (2.2s) | 1.4× |
| condnet | 0.139 | 1/25,1/7 | 27,7 | 16 | 2.8s (4.3s) | 1.6× |
| condnet | 0.073 | 1/22 | 25,22 | 32 | 10.2s (14.1s) | 1.4× |
| NN | 0.116 | - | 288,928 | 1 | 4.8s | - |
| NN | 0.100 | - | 800,736 | 1 | 10.7s | - |
| NN | 0.091 | - | 1280,1056 | 1 | 16.8s | - |

Figure 4: SVHN results (see Figure 2).

Finally, we tested our model on the Street View House Numbers (SVHN) (Netzer et al., 2011) dataset, which also yielded encouraging results (Figure 3). As we restrain the capacity of the models (by increasing sparsity or decreasing the number of units), condnets retain acceptable performance with low run times, while plain neural networks suffer highly (their performance dramatically decreases with lower run times). The best condnet model has a test error of 7.3%, and runs a validation epoch in 10s (14s without speed optimization), while the best standard neural network model has a test error of 9.1%, and runs in 16s. Note that the variance in the SVHN results (Figure 3) is due to the mostly random hyperparameter exploration, where block size, number of blocks, $\tau$, $\lambda_v$, $\lambda_s$, as well as learning rates are randomly picked.
The normal neural network results were obtained by varying the number of hidden units of a 2-hidden-layer model. For all three datasets and all condnet models used, the required training time was higher, but still reasonable: on average, experiments took 1.5 to 3 times longer (wall time).

# 4.3 EFFECTS OF REGULARIZATION

The added regularization proposed in section 3.2 seems to play an important role in our ability to train the conditional model. When using only the prediction score, we observed that the algorithm tried to compensate by recruiting more units and saturating their participation probability, or even failed by dismissing very early what were probably considered bad units.

In practice, the variance regularization term $L_v$ only slightly affects the prediction accuracy and learned policies of models, but we have observed that it significantly speeds up the training process, probably by encouraging policies to become less uniform earlier in the learning process. This can be seen in Figure 5b, where we train a model with different values of $\lambda_v$.
Figure 5: CIFAR-10. (a) Each pair of circle and triangle is an experiment made with a given lambda (x axis), resulting in a model with a certain error and running time (y axes). As $\lambda_s$ increases the running time decreases, but so does performance. (b) The same model being trained with different values of $\lambda_v$. Redder means lower, greener means higher.
When $\lambda_v$ is increased, the first few epochs have a much lower error rate.

It is possible to tune some hyperparameters to affect the point at which the trade-off between computation speed and performance lies, and thus one could push the error downwards at the expense of more computation time. This is suggested by Figure 5a, which shows the effect of one such hyperparameter ($\lambda_s$) on both running times and performance for the CIFAR dataset. Here it seems that $\lambda_s \sim [300, 400]$ offers the best trade-off, yet other values could be selected, depending on the specific requirements of an application.

# 5 RELATED WORK
Ba & Frey (2013) proposed a learning algorithm called standout for computing an input-dependent dropout distribution at every node. As opposed to our layer-wise method, standout computes a one-shot dropout mask over the entire network, conditioned on the input to the network. Additionally, masks are unit-wise, while our approach uses masks that span blocks of units. Bengio et al. (2013) introduced Stochastic Times Smooth neurons as gaters for conditional computation within a deep neural network. STS neurons are highly non-linear and non-differentiable functions learned using estimators of the gradient obtained through REINFORCE. They allow a sparse binary gater to be computed as a function of the input, thus reducing computations in the then sparse activation of hidden layers.

Stollenga et al. (2014) recently proposed to learn a sequential decision process over the filters of a convolutional neural network (CNN). As in our work, a direct policy search method was chosen to find the parameters of a control policy. Their problem formulation differs from ours mainly in the notion of decision "stage". In their model, an input is first fed through a network, the activations are computed during forward propagation, and then they are served to the next decision stage. The goal of the policy is to select relevant filters from the previous stage so as to improve the decision accuracy on the current example. They also use a gradient-free evolutionary algorithm, in contrast to our gradient-based method.

The Deep Sequential Neural Network (DSNN) model of Denoyer & Gallinari (2014) is possibly closest to our approach. The control process is carried over the layers of the network and uses the output of the previous layer to compute actions. The REINFORCE algorithm is used to train the policy, with the reward/cost function being defined as the loss at the output in the base network. DSNN considers the general problem of choosing between different types of mappings (weights) in
a composition of functions. However, they test their model on datasets in which different modes are prominent, making it easy for a policy to distinguish between them.

Another point of comparison for our work is attention models (Mnih et al., 2014; Gregor et al., 2015; Xu et al., 2015). These models typically learn a policy, or a form of policy, that allows them to selectively attend to parts of their input sequentially, in a visual 2D environment. Both attention and our approach aim to reduce computation times. While attention aims to perform dense computations on subsets of the inputs, our approach aims to be more general, since the policy focuses on subsets of the whole computation (it is in a sense more distributed). It should also be possible to combine these approaches, since one acts on the input space and the other acts on the representation space, although the resulting policies would be much more complex, and not necessarily easily trainable.
# 6 CONCLUSION

This paper presents a method for tackling the problem of conditional computation in deep networks by using reinforcement learning. We propose a type of parameterized conditional computation policy that maps the activations of a layer to a Bernoulli mask. The reinforcement signal accounts for the loss function of the network in its prediction task, while the policy network itself is regularized to account for the desire to have sparse computations. The REINFORCE algorithm is used to train policies to optimize this cost. Our experiments show that it is possible to train such models at the same levels of accuracy as their standard counterparts. Additionally, it seems possible to execute these similarly accurate models faster due to their sparsity. Furthermore, the model has a few simple parameters that allow control of the trade-off between accuracy and running time.

The use of REINFORCE could be replaced by a more efficient policy search algorithm, and also, perhaps, one in which rewards (or costs) as described above are replaced by a more sequential variant. The more direct use of computation time as a cost may prove beneficial.
In general, we consider conditional computation to be an area in which reinforcement learning could be very useful, and one that deserves further study.

All the running times reported in the Experiments section are for a CPU, running on a single core. The motivation for this is to explore deployment of large neural networks on cheap, low-power, single-core CPUs such as phones, while retaining high model capacity and expressiveness. While the results presented here show that our model for conditional computation can achieve speedups in this context, it is also worth investigating adaptation of these sparse computation models to multi-core/GPU architectures; this is the subject of ongoing work.
# ACKNOWLEDGEMENTS

The authors gratefully acknowledge financial support for this work by the Samsung Advanced Institute of Technology (SAIT), the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Fonds de recherche du Québec - Nature et Technologies (FQRNT).

# REFERENCES

Ba, Jimmy and Frey, Brendan. Adaptive dropout for training deep neural networks. In Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 26, pp. 3084-3092. Curran Associates, Inc., 2013. URL http://papers.nips.cc/paper/5032-adaptive-dropout-for-training-deep-neural-networks.pdf.

Bengio, Y., Simard, P., and Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, pp. 157-166, 1994.
Bengio, Yoshua, Léonard, Nicholas, and Courville, Aaron. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Bergstra, James, Breuleux, Olivier, Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Desjardins, Guillaume, Turian, Joseph, Warde-Farley, David, and Bengio, Yoshua. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral presentation.
Bishop, Christopher M. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006. ISBN 0387310738.

Davis, Andrew and Arel, Itamar. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013.
Deisenroth, Marc Peter, Neumann, Gerhard, and Peters, Jan. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1-142, 2013. doi: 10.1561/2300000021. URL http://dx.doi.org/10.1561/2300000021.

Denoyer, Ludovic and Gallinari, Patrick. Deep sequential neural network. CoRR, abs/1410.0510, 2014. URL http://arxiv.org/abs/1410.0510.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2010, Chia Laguna Resort, Sardinia, Italy, May 13-15, 2010, pp. 249-256, 2010. URL http://www.jmlr.org/proceedings/papers/v9/glorot10a.html.

Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv preprint arXiv:1502.01852, 2015.

Hinton, Geoffrey E., Srivastava, Nitish, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012. URL http://arxiv.org/abs/1207.0580.
Hochreiter, S. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, T.U. München, 1991.

Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images, 2009.

Martens, James. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pp. 735-742, 2010. URL http://www.icml2010.org/papers/458.pdf.
Mnih, Volodymyr, Heess, Nicolas, Graves, Alex, and Kavukcuoglu, Koray. Recurrent models of visual attention. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 27, pp. 2204-2212. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf.

Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, pp. 5, Granada, Spain, 2011.
Pearlmutter, Barak A. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147-160, January 1994. ISSN 0899-7667. doi: 10.1162/neco.1994.6.1.147. URL http://dx.doi.org/10.1162/neco.1994.6.1.147.

Puterman, Martin L. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1994. ISBN 0471619779.
Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning representations by back-propagating errors. Cognitive Modeling, 5, 1988.

Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 387-395, 2014. URL http://jmlr.org/proceedings/papers/v32/silver14.html.
Stollenga, Marijn F, Masci, Jonathan, Gomez, Faustino, and Schmidhuber, Jürgen. Deep networks with internal selective attention through feedback connections. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 27, pp. 3545-3553. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5276-deep-networks-with-internal-selective-attention-through-feedback-connections.pdf.
Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992. ISSN 0885-6125. doi: 10.1007/BF00992696. URL http://dx.doi.org/10.1007/BF00992696.
Xu, Kelvin, Ba, Jimmy, Kiros, Ryan, Courville, Aaron, Salakhutdinov, Ruslan, Zemel, Richard, and Bengio, Yoshua. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.

# A ALGORITHM

The forward pass in our model is done as described in algorithm 1 below, both at train time and test time.

    input: x
    1:  h_0 ← x
    2:  u_0 ← 1                                  // the input mask is ones
    3:  for each hidden layer l ∈ 1, ..., L do
    4:      p_l ← sigm(Z^(l) h_(l-1) + d^(l))    // = π_l(u_l | s_l = h_(l-1))
    5:      u_l ∼ Ber(p_l)                       // sample a Bernoulli mask from the probabilities p_l
    6:      if blocksize > 1 then
    7:          extend u_l by repeating each value blocksize times
    8:      end
    9:      h_l ← f(W^(l)(h_(l-1) ⊙ u_(l-1)) + b^(l)) ⊙ u_l
            // line 9 can be performed efficiently as described in section 3.4
    10: end

    Algorithm 1: Single-input forward pass
This algorithm can easily be extended to the minibatch setting by replacing vector operations with matrix operations.
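For concreteness, here is a minibatch numpy sketch of this forward pass (ours, not the paper's Theano implementation; the line numbers in the comments refer to algorithm 1, and the layer dictionary layout is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, layers, blocksize):
    """Minibatch version of algorithm 1. `layers` is a list of dicts holding
    network weights W, b and policy weights Z, d; x is (mb, d_in)."""
    h, u = x, np.ones_like(x)
    for layer in layers:
        p = sigm(h @ layer["Z"].T + layer["d"])            # line 4
        mask = (rng.random(p.shape) < p).astype(float)     # line 5
        if blocksize > 1:                                  # lines 6-8
            mask = np.repeat(mask, blocksize, axis=1)
        h = np.tanh((h * u) @ layer["W"].T + layer["b"]) * mask  # line 9
        u = mask
    return h

mb, d0, blocks, bs = 4, 6, 3, 2
layers = [{
    "Z": rng.normal(0, 0.1, (blocks, d0)), "d": np.zeros(blocks),
    "W": rng.normal(0, 0.1, (blocks * bs, d0)), "b": np.zeros(blocks * bs),
}]
out = forward(rng.normal(size=(mb, d0)), layers, blocksize=bs)
```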
Note that in the case of classification, the last layer is a softmax layer and is not multiplied by a mask.

    input: x
    1: y = forward(x)                                       // algorithm 1
    2: c ← C(x) = -log P(Y | x)                             // given the output of the forward pass
    3: L ← c + λ_s(L_b + L_e) + λ_v L_v + λ_L2 ||Θ_NN||² + λ_L2 ||Θ_π||²   // as in sections 3.2 and 3.3
    4: Θ_NN ← Θ_NN - α ∇_Θ_NN L                             // update the neural network weights
    5: for each hidden layer l ∈ 1, ..., L do
    6:     θ_l ← θ_l - α_π c ∇_θ_l log p_l - α ∇_θ_l L      // REINFORCE update of the policy weights; p_l is computed as in algorithm 1
    7: end

    Algorithm 2: Single-input backward pass

Note that in line 4, some gradients are zeros; for example, the gradient of the L2 regularisation of Θ_π with respect to Θ_NN is zero. Similarly in line 6, the gradient of c with respect to Θ_π is zero, which is why we have to use REINFORCE to approximate a gradient in the direction that minimizes c. This algorithm can be extended to the minibatch setting efficiently by replacing the gradient computations in line 6 with the use of the so-called R-op, as described in section 3.1, and other computations as is usually done in the minibatch setting with matrix operations.

# B REINFORCE

REINFORCE (Williams, 1992), also known as the likelihood-ratio method, is a policy search algorithm. It aims to use gradient methods to improve a given parameterized policy. In reinforcement learning, a sequence of state-action-reward tuples is described as a trajectory $\tau$.
The objective function of a parameterized policy $\pi_\theta$ for the cumulative return of a trajectory $\tau$ is described as:

$$J(\theta) = \mathbb{E}_\tau^{\pi_\theta} \left\{ \sum_{t=1}^{T} r_t \,\Big|\, s_0 \right\}$$

where $s_0$ is the initial state of the trajectory. Let $R(\tau)$ denote the return for trajectory $\tau$. The gradient of the objective with respect to the parameters of the policy is:

$$\nabla_\theta J(\theta) = \nabla_\theta \mathbb{E}_\tau^{\pi_\theta} \{R(\tau)\} = \nabla_\theta \int P\{\tau \mid \theta\} R(\tau) \, d\tau = \int \nabla_\theta \left[ P\{\tau \mid \theta\} R(\tau) \right] d\tau \tag{8}$$

Note that the interchange in (8) is only valid under some assumptions (see Silver et al. (2014)).

$$\nabla_\theta J(\theta) = \int \left[ R(\tau) \nabla_\theta P\{\tau \mid \theta\} + \nabla_\theta R(\tau) \, P\{\tau \mid \theta\} \right] d\tau \tag{9}$$

$$= \int \left[ R(\tau) \nabla_\theta \log P\{\tau \mid \theta\} + \nabla_\theta R(\tau) \right] P\{\tau \mid \theta\} \, d\tau = \mathbb{E}_\tau^{\pi_\theta} \{ R(\tau) \nabla_\theta \log P\{\tau \mid \theta\} + \nabla_\theta R(\tau) \} \tag{10}$$

The product rule of derivatives is used in (9), and the derivative of a log in (10). Since $R(\tau)$ does not depend on $\theta$ directly, the gradient $\nabla_\theta R(\tau)$ is zero. We end up with this gradient:

$$\nabla_\theta J(\theta) = \mathbb{E}_\tau^{\pi_\theta} \{ R(\tau) \nabla_\theta \log P\{\tau \mid \theta\} \} \tag{11}$$

Without knowing the transition probabilities, we cannot compute the probability of our trajectories $P\{\tau \mid \theta\}$, or their gradient. Fortunately we are in an MDP setting, and we can make use of the Markov property of the trajectories to compute the gradient:
$$\nabla_\theta \log P\{\tau \mid \theta\} = \nabla_\theta \log \left[ p(s_0) \prod_{t=1}^{T} P\{s_{t+1} \mid s_t, a_t\} \, \pi_\theta(a_t \mid s_t) \right]$$

$$= \nabla_\theta \log p(s_0) + \sum_{t=1}^{T} \nabla_\theta \log P\{s_{t+1} \mid s_t, a_t\} + \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \tag{12}$$

$$= \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)$$

In (12), $p(s_0)$ does not depend on $\theta$, so the gradient is zero. Similarly, $P\{s_{t+1} \mid s_t, a_t\}$ does not depend on $\theta$ (not directly at least), so the gradient is also zero. We end up with the gradient of the log policy, which is easy to compute. In our particular case, the trajectories only have a single step and the reward of the trajectory is the neural network cost $C(\mathbf{x})$; thus the summation disappears and the gradient found in (2) is obtained by taking the log of the probability of our Bernoulli sample:

$$\nabla_{\theta_l} J = \mathbb{E}\{C(\mathbf{x}) \nabla_{\theta_l} \log \pi_{\theta_l}(\mathbf{u} \mid \mathbf{s})\}
= \mathbb{E}\left\{ C(\mathbf{x}) \nabla_{\theta_l} \log \prod_{i=1}^{k} \sigma_i^{u_i} (1 - \sigma_i)^{1 - u_i} \right\}
= \mathbb{E}\left\{ C(\mathbf{x}) \nabla_{\theta_l} \sum_{i=1}^{k} \log \left[ \sigma_i u_i + (1 - \sigma_i)(1 - u_i) \right] \right\}$$
# NEURAL PROGRAMMER-INTERPRETERS

Scott Reed & Nando de Freitas
Google DeepMind, London, UK
[email protected]
[email protected]

# ABSTRACT

We propose the neural programmer-interpreter (NPI): a recurrent and compositional neural network that learns to represent and execute programs. NPI has three learnable components: a task-agnostic recurrent core, a persistent key-value program memory, and domain-specific encoders that enable a single NPI to operate in multiple perceptually diverse environments with distinct affordances. By learning to compose lower-level programs to express higher-level programs, NPI reduces sample complexity and increases generalization ability compared to sequence-to-sequence LSTMs. The program memory allows efficient learning of additional tasks by building on existing programs. NPI can also harness the environment (e.g. a scratch pad with read-write pointers) to cache intermediate results of computation, lessening the long-term memory burden on recurrent hidden units. In this work we train the NPI with fully-supervised execution traces; each program has example sequences of calls to the immediate subprograms conditioned on the input. Rather than training on a huge number of relatively weak labels, NPI learns from a small number of rich examples. We demonstrate the capability of our model to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models. Furthermore, a single NPI learns to execute these programs and all 21 associated subprograms.

# 1 INTRODUCTION

Teaching machines to learn new programs, to rapidly compose new programs from existing programs, and to conditionally execute these programs automatically so as to solve a wide variety of tasks is one of the central challenges of AI. Programs appear in many guises in various AI problems, including motor behaviours, image transformations, reinforcement learning policies, classical algorithms, and symbolic relations.

In this paper, we develop a compositional architecture that learns to represent and interpret programs. We refer to this architecture as the Neural Programmer-Interpreter (NPI).
The core module is an LSTM-based sequence model that takes as input a learnable program embedding, program arguments passed on by the calling program, and a feature representation of the environment. The output of the core module is a key indicating what program to call next, arguments for the following program, and a flag indicating whether the program should terminate. In addition to the recurrent core, the NPI architecture includes a learnable key-value memory of program embeddings. This program memory is essential for learning and re-using programs in a continual manner. Figures 1 and 2 illustrate the NPI on two different tasks.

We show in our experiments that the NPI architecture can learn 21 programs, including addition, sorting, and trajectory planning from image pixels. Crucially, this can be achieved using a single core model with the same parameters shared across all tasks. Different environments (for example images, text, and scratch-pads) may require specific perception modules or encoders to produce the features used by the shared core, as well as environment-specific actuators. Both perception modules and actuators can be learned from data when training the NPI architecture.

To train the NPI we use curriculum learning and supervision via example execution traces. Each program has example sequences of calls to the immediate subprograms conditioned on the input.
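To make the interface of the core concrete, here is a deliberately simplified numpy sketch of one core step (ours, not the paper's: a plain RNN cell stands in for the LSTM, the program-memory lookup is a bare argmax, and all names and sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
D, P, A, N = 32, 16, 8, 5   # hidden size, program-embedding size, argument size, #programs

# Learnable key-value program memory (keys equal values here for brevity).
program_memory = rng.normal(size=(N, P))

def npi_core_step(h, prog_embedding, args, env_features, params):
    """One step of an NPI-style core. Returns the next hidden state, the id of
    the next program to call, its arguments, and an end-of-program flag."""
    x = np.concatenate([prog_embedding, args, env_features])
    h = np.tanh(params["W_x"] @ x + params["W_h"] @ h)
    key = params["W_key"] @ h                        # key into the program memory
    next_args = params["W_arg"] @ h                  # arguments for the next call
    p_end = 1 / (1 + np.exp(-params["w_end"] @ h))   # termination probability
    next_prog = int(np.argmax(program_memory @ key)) # best-matching program
    return h, next_prog, next_args, p_end

params = {
    "W_x": rng.normal(0, 0.1, (D, P + A + D)),
    "W_h": rng.normal(0, 0.1, (D, D)),
    "W_key": rng.normal(0, 0.1, (P, D)),
    "W_arg": rng.normal(0, 0.1, (A, D)),
    "w_end": rng.normal(0, 0.1, D),
}
h = np.zeros(D)
h, nxt, args, p_end = npi_core_step(h, program_memory[0], np.zeros(A), rng.normal(size=D), params)
```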
Figure 1: Example execution of canonicalizing 3D car models (trace: GOTO(), HGOTO(), LGOTO(), ACT(LEFT), LGOTO(), ACT(LEFT), GOTO(), VGOTO(), DGOTO(), ACT(DOWN), end state). The task is to move the camera such that a target angle and elevation are reached. There is a read-only scratch pad containing the target (angle 1, elevation 2 here). The image encoder is a convnet trained from scratch on pixels.

Figure 2: Example execution trace of single-digit addition. The task is to perform a single-digit add on the numbers at pointer locations in the first two rows.
1511.06279#3 | Neural Programmer-Interpreters | The image encoder is a convnet trained from scratch on pixels. [Figure 2 graphic: a scratch-pad addition trace with program calls ADD1(), ACT(WRITE), CARRY(), ACT(LEFT), and ACT(WRITE).] Figure 2: Example execution trace of single-digit addition. The task is to perform a single-digit add on the numbers at pointer locations in the first two rows. The carry (row 3) and output (row 4) should be updated to reflect the addition. At each time step, an observation of the environment (viewed from each pointer on a scratch pad) is encoded into a fixed-length vector. | 1511.06279#2 | 1511.06279#4 | 1511.06279 | [
"1511.04834"
] |
1511.06279#4 | Neural Programmer-Interpreters | By using neural networks to represent the subprograms and learning these from data, the approach can generalize on tasks involving rich perceptual inputs and uncertainty. We may envision two approaches to provide supervision. In one, we provide a very large number of labeled examples, as in object recognition, speech and machine translation. In the other, the approach followed in this paper, the aim is to provide far fewer labeled examples, but where the labels contain richer information allowing the model to learn compositional structure. While unsupervised and reinforcement learning play important roles in perception and motor control, other cognitive abilities are possible thanks to rich supervision and curriculum learning. This is indeed the reason for sending our children to school. An advantage of our approach to model building and training is that the learned programs exhibit strong generalization. | 1511.06279#3 | 1511.06279#5 | 1511.06279 | [
"1511.04834"
] |
1511.06279#5 | Neural Programmer-Interpreters | Specifically, when trained to sort sequences of up to twenty numbers in length, they can sort much longer sequences at test time. In contrast, the experiments will show that more standard sequence-to-sequence LSTMs only exhibit weak generalization; see Figure 6. A trained NPI with fixed parameters and a learned library of programs can act both as an interpreter and as a programmer. As an interpreter, it takes input in the form of a program embedding and input data and subsequently executes the program. As a programmer, it uses samples drawn from a new task to generate a new program embedding that can be added to its library of programs. | 1511.06279#4 | 1511.06279#6 | 1511.06279 | [
"1511.04834"
] |
1511.06279#6 | Neural Programmer-Interpreters | # 2 RELATED WORK Several ideas related to our approach have a long history. For example, the idea of using dynamically programmable networks in which the activations of one network become the weights (the program) of a second network was mentioned in the Sigma-Pi units section of the influential PDP paper (Rumelhart et al., 1986). This idea appeared in (Sutskever & Hinton, 2009) in the context of learning higher order symbolic relations and in (Donnarumma et al., 2015) as the key ingredient of an architecture for prefrontal cognitive control. Schmidhuber (1992) proposed a related meta-learning idea, whereby one learns the parameters of a slowly changing network, which in turn generates context dependent weight changes for a second rapidly changing network. These approaches have only been demonstrated in very limited settings. In cognitive science, several theories of brain areas controlling other brain parts so as to carry out multiple tasks have been proposed; see for example Schneider & Chein (2003); Anderson (2010) and Donnarumma et al. (2012). Related problems have been studied in the literature on hierarchical reinforcement learning (e.g., Dietterich (2000); Andre & Russell (2001); Sutton et al. (1999) and Schaul et al. (2015)), imitation and apprenticeship learning (e.g., Kolter et al. (2008) and Rothkopf & Ballard (2013)) and elicitation of options through human interaction (Subramanian et al., 2011). | 1511.06279#5 | 1511.06279#7 | 1511.06279 | [
"1511.04834"
] |
1511.06279#7 | Neural Programmer-Interpreters | These ideas have held great promise, but have not enjoyed significant impact. We believe the recurrent compositional neural representations proposed in this paper could help these approaches in the future, and in particular in overcoming feature engineering. Several recent advancements have extended recurrent networks to solve problems beyond simple sequence prediction. Graves et al. (2014) developed a neural Turing machine capable of learning and executing simple programs such as repeat copying, simple priority sorting and associative recall. Vinyals et al. (2015) developed Pointer Networks that generalize the notion of encoder attention in order to provide the decoder a variable-sized output space depending on the input sequence length. This model was shown to be effective for combinatorial optimization problems such as the traveling salesman and Delaunay triangulation. While our proposed model is trained on execution traces instead of input and output pairs, in exchange for this richer supervision we benefit from compositional program structure, improving data efficiency on several problems. This work is also closely related to program induction. Most previous work on program induction, i.e. inducing a program given example input and output pairs, has used genetic programming (Banzhaf et al., 1998) to evolve useful programs from candidate populations. Mou et al. (2014) process program symbols to learn max-margin program embeddings with the help of parse trees. Zaremba & Sutskever (2014) trained LSTM models to read in the text of simple programs character-by-character and correctly predict the program output. Joulin & Mikolov (2015) augmented a recurrent network with a pushdown stack, allowing for generalization to longer input sequences than seen during training for several algorithmic patterns. Contemporary to this work, several papers have also studied program induction with variants of recurrent neural networks (Zaremba & Sutskever, 2015; Zaremba et al., 2015; Kaiser & Sutskever, 2015; Kurach et al., 2015; Neelakantan et al., 2015). While we share a similar motivation, our approach is distinct in that we explicitly incorporate compositional structure into the network using a program memory, allowing the model to learn new programs by combining sub-programs. | 1511.06279#6 | 1511.06279#8 | 1511.06279 | [
"1511.04834"
] |
1511.06279#8 | Neural Programmer-Interpreters | # 3 MODEL The NPI core is a long short-term memory (LSTM) network (Hochreiter & Schmidhuber, 1997) that acts as a router between programs conditioned on the current state observation and previous hidden unit states. At each time step, the core module can select another program to invoke using content-based addressing. It emits the probability of ending the current program with a single binary unit. If this probability is over threshold (we used 0.5), control is returned to the caller by popping the caller's LSTM hidden units and program embedding off of a program call stack and resuming execution in this context. The NPI may also optionally write arguments (ARG) that are passed by reference or value to the invoked sub-programs. For example, an argument could indicate a specific location in the input sequence (by reference), or it could specify a number to write down at a particular location in the sequence (by value). The subsequent state consists of these arguments and observations of the environment. The approach is illustrated in Figures 1 and 2. It must be emphasized that there is a single inference core. That is, all the LSTM instantiations executing arbitrary programs share the same parameters. Different programs correspond to program embeddings, which are stored in a learnable persistent memory. The programs therefore have a more | 1511.06279#7 | 1511.06279#9 | 1511.06279 | [
"1511.04834"
] |
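The call-stack mechanics just described are easy to make concrete. The sketch below is not the authors' code; the frame layout and the scalar end-of-program unit `r` are illustrative stand-ins for the shared LSTM core's state and output.

```python
import numpy as np

class ProgramCallStack:
    """Bookkeeping for the caller's context, as described above: invoking a
    subprogram pushes the caller's LSTM state and program embedding, and an
    end-of-program probability over 0.5 pops them to resume the caller."""

    def __init__(self):
        self.frames = []  # each frame: (lstm_state, program_embedding)

    def call(self, lstm_state, program_embedding):
        # Suspend the caller before descending into the subprogram.
        self.frames.append((np.copy(lstm_state), np.copy(program_embedding)))

    def maybe_return(self, r, h, p, threshold=0.5):
        # r is the scalar output of the single binary "end" unit.
        if r > threshold and self.frames:
            return self.frames.pop()   # restore the caller's (h, p)
        return h, p                    # keep executing the current program
```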
1511.06279#9 | Neural Programmer-Interpreters | succinct representation than neural programs encoded as the full set of weights in a neural network (Rumelhart et al., 1986; Graves et al., 2014). The output of an NPI, conditioned on an input state and a program to run, is a sequence of actions in a given environment. In this work, we consider several environments: a 1-D array with read-only pointers and a swap action, a 2-D scratch pad with read-write pointers, and a CAD renderer with controllable elevation and azimuth movements. Note that the sequence of actions for a program is not fixed, but dependent also on the input state. 3.1 INFERENCE Denote the environment observation at time t as e_t ∈ E, and the current program arguments as a_t ∈ A. The form of e_t can vary dramatically by environment; for example it could be a color image or an array of numbers. The program arguments a_t can also vary by environment, but in the experiments for this paper we always used a 3-tuple of integers (a_t(1), a_t(2), a_t(3)). Given the environment and arguments at time t, a fixed-length state encoding s_t ∈ R^D is extracted by a domain-specific encoder f_enc : E × A → R^D. | 1511.06279#8 | 1511.06279#10 | 1511.06279 | [
"1511.04834"
] |
1511.06279#10 | Neural Programmer-Interpreters | In section 4 we provide examples of several encoders. Note that a single NPI network can have multiple encoders for multiple environments, and encoders can potentially also be shared across tasks. We denote the current program embedding as p_t ∈ R^P. The previous hidden unit and cell states are h_{t-1}^(l) ∈ R^M, l = 1, ..., L, where L is the number of layers in the LSTM. The program and state vectors are then propagated forward through an LSTM mapping f_lstm as in (Sutskever et al., 2014). How to fuse p_t and s_t within f_lstm is an implementation detail, but in this work we concatenate and feed through a 2-layer MLP with rectified linear (ReLU) hidden activation and linear decoder. From the top LSTM hidden state h_t^L, several decoders generate the outputs. | 1511.06279#9 | 1511.06279#11 | 1511.06279 | [
"1511.04834"
] |
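The fusion of p_t and s_t described above (concatenation, then a 2-layer MLP with ReLU hidden activation and a linear decoder) is simple enough to sketch directly. All shapes below are illustrative assumptions, not the trained model's.

```python
import numpy as np

def fuse(p_t, s_t, W1, b1, W2, b2):
    """Fuse program embedding p_t and state encoding s_t by concatenation,
    then a 2-layer MLP with ReLU hidden activation and a linear decoder.
    The result is the input vector fed to the LSTM core."""
    x = np.concatenate([p_t, s_t])
    hidden = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    return hidden @ W2 + b2                 # linear decoder

# Toy shapes (assumed): P = 32, D = 64, hidden width 128, LSTM input 256.
rng = np.random.default_rng(0)
p, s = rng.normal(size=32), rng.normal(size=64)
W1, b1 = rng.normal(size=(96, 128)) * 0.01, np.zeros(128)
W2, b2 = rng.normal(size=(128, 256)) * 0.01, np.zeros(256)
lstm_input = fuse(p, s, W1, b1, W2, b2)     # shape (256,)
```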
1511.06279#11 | Neural Programmer-Interpreters | The probability of finishing the program and returning to the caller (see footnote 1) is computed by f_end : R^M → [0, 1]. The lookup key embedding used for retrieving the next program from memory is computed by f_prog : R^M → R^K. Note that K can be much smaller than P because the key need only act as the identifier of a program, while the program embedding must have enough capacity to conditionally generate a sequence of actions. The contents of the arguments to the next program to be called are generated by f_arg : | 1511.06279#10 | 1511.06279#12 | 1511.06279 | [
"1511.04834"
] |
1511.06279#12 | Neural Programmer-Interpreters | R^M → A. The feed-forward steps of program inference are summarized below: s_t = f_enc(e_t, a_t) (1); h_t = f_lstm(s_t, p_t, h_{t-1}) (2); r_t = f_end(h_t), k_t = f_prog(h_t), a_{t+1} = f_arg(h_t) (3), where r_t, k_t and a_{t+1} correspond to the end-of-program probability, program key embedding, and output arguments at time t, respectively. These yield input arguments at time t + 1. To simplify the notation, we have abstracted properties such as layers and cell memory in the sequence-to-sequence LSTM of equation (2); see (Sutskever et al., 2014) for details. The NPI representation is equipped with key-value memory structures M^key ∈ R^{N×K} and M^prog ∈ | 1511.06279#11 | 1511.06279#13 | 1511.06279 | [
"1511.04834"
] |
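A minimal sketch of the three decoder heads in equations (1)-(3). The weight matrices are hypothetical placeholders; f_end is shown as a logistic unit since it must produce a probability in [0, 1], and the other two heads are shown as linear maps for simplicity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode(h_t, w_end, W_prog, W_arg):
    """Decoder heads on the top LSTM hidden state h_t (illustrative shapes,
    not the released implementation): r_t = f_end(h_t) in [0, 1],
    k_t = f_prog(h_t) in R^K, a_{t+1} = f_arg(h_t)."""
    r_t = sigmoid(h_t @ w_end)       # end-of-program probability
    k_t = h_t @ W_prog               # lookup key; K can be much smaller than P
    a_next = h_t @ W_arg             # arguments for the next program
    return r_t, k_t, a_next

# Toy shapes (assumed): M = 256 hidden units, K = 8 key dims, 3 argument slots.
rng = np.random.default_rng(1)
h = rng.normal(size=256)
r, k, a = decode(h, rng.normal(size=256), rng.normal(size=(256, 8)),
                 rng.normal(size=(256, 3)))
```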
1511.06279#13 | Neural Programmer-Interpreters | R^{N×P}, storing program keys and program embeddings, respectively, where N is the current number of programs in memory. We can add more programs by adding rows to memory. During training, the next program identifier is provided to the model as ground-truth, so that its embedding can be retrieved from the corresponding row of M^prog. At test time, we compute the "program ID" by comparing the key embedding k_t to each row of M^key storing all program keys. Then the program embedding is retrieved from M^prog as follows: i* = arg max_{i=1..N} (M_i^key)^T k_t, p_{t+1} = M_{i*}^prog (4). The next environmental state e_{t+1} will be determined by the dynamics of the environment and can be affected by both the choice of program p_t and the contents of the output arguments a_t, i.e. e_{t+1} ~ f_env(e_t, p_t, a_t) (5). The transition mapping f_env is domain-specific and will be discussed in Section 4. A description of the inference procedure is given in Algorithm 1. Footnote 1: In our implementation, a program may first call a subprogram before itself | 1511.06279#12 | 1511.06279#14 | 1511.06279 | [
"1511.04834"
] |
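Equation (4) is plain content-based addressing and can be transcribed almost directly; the toy dimensions in the usage example are assumptions.

```python
import numpy as np

def retrieve_program(k_t, M_key, M_prog):
    """Content-based addressing per equation (4): pick the row of M_key whose
    key has the largest dot product with the generated key k_t, then read the
    program embedding from the same row of M_prog."""
    scores = M_key @ k_t            # one similarity score per stored program
    i_star = int(np.argmax(scores))
    return i_star, M_prog[i_star]

# Toy usage with N = 3 programs, key dim K = 4, embedding dim P = 8.
rng = np.random.default_rng(0)
M_key, M_prog = rng.normal(size=(3, 4)), rng.normal(size=(3, 8))
prog_id, p_next = retrieve_program(rng.normal(size=4), M_key, M_prog)
```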
1511.06279#14 | Neural Programmer-Interpreters | finishing. The only exception is the ACT program that signals a low-level action to the environment, e.g. moving a pointer one step left or writing a value. By convention ACT does not call any further sub-programs. Algorithm 1 Neural programming inference. 1: Inputs: environment observation e, program id i, arguments a, stop threshold α. 2: function RUN(i, a). 3: h ← 0, r ← 0, p ← M_i^prog (init LSTM state and return probability). 4: while r < α do. 5: s ← f_enc(e, a), h ← f_lstm(s, p, h) (feed-forward NPI one step). 6: r ← f_end(h), k ← f_prog(h), a2 ← f_arg(h). 7: i2 ← arg max_{j=1..N} (M_j^key)^T k (decide the next program to run). 8: if i == ACT then e ← f_env(e, p, a) (update the environment based on ACT). 9: else RUN(i2, a2) (run subprogram i2 with arguments a2). Each task has a set of actions that affect the environment. For example, in addition there are LEFT and RIGHT actions that move a specified pointer, and a WRITE action which writes a value at a specified location. These actions are encapsulated into a general-purpose ACT program shared across tasks, and the concrete action to be taken is indicated by the NPI-generated arguments a_t. Note that the core LSTM module of our NPI representation is completely agnostic to the data modality used to produce the state encoding. As long as the same fixed-length embedding is extracted, the same module can in practice route between programs related to sorting arrays just as easily as between programs related to rotating 3D objects. In the experimental sections, we provide details of the modality-specific deep neural networks that we use to produce these fixed-length state vectors. 3.2 TRAINING To train we use execution traces | 1511.06279#13 | 1511.06279#15 | 1511.06279 | [
"1511.04834"
] |
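Algorithm 1 translates almost line-for-line into Python. The sketch below assumes hypothetical `core`, `env`, and `memory` objects exposing the encoder, LSTM, decoder heads, key lookup, and environment transition; it is illustrative, not the released implementation.

```python
def run(prog_id, args, env, core, memory, alpha=0.5, act_id=0):
    """Direct transcription of Algorithm 1 (a sketch under assumed
    interfaces). Recursion plays the role of the call stack: each sub-call
    gets a fresh LSTM state h, and control returns to the caller when the
    end-of-program probability r exceeds alpha."""
    h, r = core.initial_state(), 0.0
    p = memory.prog_embedding(prog_id)       # p <- M_i^prog
    while r < alpha:
        s = core.encode(env.observe(), args)  # s <- f_enc(e, a)
        h = core.lstm(s, p, h)                # h <- f_lstm(s, p, h)
        r, key, next_args = core.decode(h)    # f_end, f_prog, f_arg
        next_id = memory.lookup(key)          # argmax over stored keys
        if prog_id == act_id:
            env.apply(p, args)                # low-level ACT action
        else:
            run(next_id, next_args, env, core, memory, alpha, act_id)
```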
1511.06279#15 | Neural Programmer-Interpreters | ξ_t^inp : {e_t, i_t, a_t} and ξ_t^out : {i_{t+1}, a_{t+1}, r_t}, t = 1, ..., T, where T is the sequence length. Program IDs i_t and i_{t+1} are row-indices in M^key and M^prog of the programs to run at time t and t+1, respectively. We propose to directly maximize the probability of the correct execution trace output ξ^out conditioned on ξ^inp: θ* = arg max_θ Σ_{(ξ^inp, ξ^out)} log P(ξ^out | ξ^inp; θ) (6), where θ are the parameters of our model. Since the traces are variable in length depending on the input, we apply the chain rule to model the joint probability over ξ^out: log P(ξ^out | ξ^inp; θ) = Σ_{t=1}^T log P(ξ_t^out | ξ_1^inp, ..., ξ_t^inp; θ) (7). Note that for many problems the input history ξ_1^inp, ..., ξ_t^inp is critical to deciding future actions because the environment observation at the current time-step e_t alone does not contain enough information. The hidden unit activations of the LSTM in NPI are capable of capturing these temporal dependencies. The single-step conditional probability in equation (7) can be factorized into three further conditional distributions, corresponding to predicting the next program, next arguments, and whether to halt execution: P(ξ_t^out | ξ_1^inp, ..., ξ_t^inp) = P(i_{t+1} | h_t) P(a_{t+1} | h_t) P(r_t | h_t) (8), where h_t is the output of f_lstm at time t, carrying information from previous time steps. We train by gradient ascent on the likelihood in equation (7). We used an adaptive curriculum in which training examples for each mini-batch are fetched with frequency proportional to the model's current prediction error for the corresponding program. Specifically, we set the sampling frequency using a softmax over average prediction error across all programs, with configurable temperature. Every 1000 steps of training we re-estimated these prediction errors. Intuitively, this forces the model to focus on learning the program for which it currently performs worst in executing. We found that the adaptive curriculum immediately worked much better than our best-performing hand-designed curriculum, allowing a multi-task NPI to achieve comparable performance to single-task NPI on all tasks. | 1511.06279#14 | 1511.06279#16 | 1511.06279 | [
"1511.04834"
] |
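The adaptive curriculum above (sampling frequency from a softmax over per-program average prediction error, with configurable temperature) can be sketched as follows; the error values and sampling loop are illustrative assumptions.

```python
import numpy as np

def curriculum_weights(avg_errors, temperature=1.0):
    """Softmax over per-program average prediction error: programs the model
    currently executes worst are sampled most often for the next minibatch."""
    z = np.asarray(avg_errors, dtype=float) / temperature
    z -= z.max()                      # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Toy usage: three programs with running error estimates, re-estimated
# every 1000 training steps in the paper's setup.
probs = curriculum_weights([0.30, 0.05, 0.12], temperature=0.5)
rng = np.random.default_rng(0)
next_program = rng.choice(3, p=probs)   # program index for the next minibatch
```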
1511.06279#16 | Neural Programmer-Interpreters | We also note that our program has a distinct memory advantage over basic LSTMs because all subprograms can be trained in parallel. For programs whose execution length grows e.g. quadratically Figure 3: Illustration of the addition environment used in our experiments. (a) Example scratch pad and pointers used for computing "96 + 125 = 221". Carry step is being implemented. (b) Actual trace of addition program generated by our model on the problem shown to the left. Note that we substituted the ACT calls in the trace with more human-readable steps. | 1511.06279#15 | 1511.06279#17 | 1511.06279 | [
"1511.04834"
] |
1511.06279#17 | Neural Programmer-Interpreters | with the input sequence length, an LSTM will be highly constrained by device memory to train on short sequences. By exploiting compositionality, an effective curriculum can often be developed with sublinear-length subprograms, enabling our NPI model to train on order-of-magnitude larger sequences than the LSTM. # 4 EXPERIMENTS This section describes the environment and state encoder function for each task, and shows example outputs and prediction accuracy results. For all tasks, the core LSTM had two layers of size 256. We trained the NPI using the ADAM solver (Kingma & Ba, 2015) with base learning rate 0.0001, batch size 1, and decayed the learning rate by a factor of 0.95 every 10,000 steps. | 1511.06279#16 | 1511.06279#18 | 1511.06279 | [
"1511.04834"
] |
1511.06279#18 | Neural Programmer-Interpreters | # 4.1 TASK AND ENVIRONMENT DESCRIPTIONS In this section we provide an overview of the tasks used to evaluate our model. Table 2 in the appendix provides a full listing of all the programs and subprograms learned by our model. # ADDITION The task in this environment is to read in the digits of two base-10 numbers and produce the digits of the answer. Our goal is to teach the model the standard (at least in the US) grade school algorithm of adding, in which one works from right to left applying single-digit add and carry operations. | 1511.06279#17 | 1511.06279#19 | 1511.06279 | [
"1511.04834"
] |
1511.06279#19 | Neural Programmer-Interpreters | In this environment, the network is endowed with a "scratch pad" with which to store intermediate computations; e.g. to record carries. There are four pointers; one for each of the two input numbers, one for the carry, and another to write the output. At each time step, a pointer can be moved left or right, or it can record a value to the pad. Figure 3a illustrates the environment of this model, and Figure 3b provides a real execution trace generated by our model. For the state encoder f_enc, the model is allowed a view of the scratch pad from the perspective of each of the four pointers. That is, the model sees the current values at pointer locations of the two inputs, the carry row and the output row, as 1-of-K encodings, where K is 10 because we are working in base 10. We also append the values of the input argument tuple a_t: f_enc(Q, i1, i2, i3, i4, a_t) = MLP([Q(1, i1), Q(2, i2), Q(3, i3), Q(4, i4), a_t(1), a_t(2), a_t(3)]) (9), where Q ∈ R^{4×N×K}, and i1, ..., i4 are pointers, one per scratch pad row. The first dimension of Q corresponds to scratch pad rows, N is the number of columns (digits) and K is the one-hot encoding dimension. To begin the ADD program, we set the initial arguments to a default value and initialize all pointers to be at the rightmost column. The only subprogram with non-default arguments is ACT, in which case the arguments indicate an action to be taken by a specified pointer. # SORTING In this section we apply our model to a setting with potentially much longer execution traces: sorting an array of numbers using bubblesort. As in the case of addition we can use a scratch pad to store intermediate states of the array. | 1511.06279#18 | 1511.06279#20 | 1511.06279 | [
"1511.04834"
] |
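A toy version of the addition environment just described: a 4-row scratch pad (two inputs, carry, output) with pointers that can move left or right or write a value. The class name, method names, and layout are assumptions for illustration, not the authors' environment code.

```python
import numpy as np

class AdditionPad:
    """Toy 2-D scratch pad with four read-write pointers."""

    def __init__(self, n_cols, base=10):
        self.pad = np.zeros((4, n_cols), dtype=int)  # rows: in1, in2, carry, out
        self.ptr = [n_cols - 1] * 4                  # all pointers start rightmost
        self.base = base

    def act(self, row, op, value=0):
        # The three low-level actions exposed through the ACT program.
        if op == "LEFT":
            self.ptr[row] = max(self.ptr[row] - 1, 0)
        elif op == "RIGHT":
            self.ptr[row] = min(self.ptr[row] + 1, self.pad.shape[1] - 1)
        elif op == "WRITE":
            self.pad[row, self.ptr[row]] = value

    def observe(self):
        # The four values under the pointers, as fed to the state encoder.
        return [int(self.pad[r, self.ptr[r]]) for r in range(4)]
```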
1511.06279#20 | Neural Programmer-Interpreters | We define the encoder as follows: f_enc(Q, i1, i2, a_t) = MLP([Q(1, i1), Q(1, i2), a_t(1), a_t(2), a_t(3)]) (10) | 1511.06279#19 | 1511.06279#21 | 1511.06279 | [
"1511.04834"
] |
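Equations (9) and (10) share one pattern: concatenate 1-of-K views under the pointers with the argument tuple, then apply the learned MLP. Below is a sketch for the addition case; the sorting encoder differs only in reading two pointers on a single row. `mlp` is a stand-in for the learned network, and the identity function in the usage line is only there to show shapes.

```python
import numpy as np

def one_hot(v, K=10):
    e = np.zeros(K)
    e[v] = 1.0
    return e

def f_enc_addition(pad, pointers, args, mlp):
    """Sketch of equation (9): 1-of-K views under the four pointers,
    concatenated with the 3-tuple of arguments, fed through the MLP."""
    views = [one_hot(pad[row, col]) for row, col in enumerate(pointers)]
    x = np.concatenate(views + [np.asarray(args, dtype=float)])
    return mlp(x)   # fixed-length state encoding s_t

# Toy usage: 4 rows x 5 columns, all pointers at the rightmost column.
pad = np.zeros((4, 5), dtype=int)
s_t = f_enc_addition(pad, [4, 4, 4, 4], (0, 0, 0), mlp=lambda x: x)
```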
1511.06279#21 | Neural Programmer-Interpreters | Figure 4: Illustration of the sorting environment used in our experiments. (a) Example scratch pad and pointers used for sorting. Several steps of the BUBBLE subprogram are shown. (b) Excerpt from the trace of the learned bubblesort program. where Q ∈ R^{1×N×K} is the pad, N is the array length and K is the array entry embedding dimension. Figure 4 shows an example series of array states and an excerpt of an execution trace. | 1511.06279#20 | 1511.06279#22 | 1511.06279 | [
"1511.04834"
] |
1511.06279#22 | Neural Programmer-Interpreters | # CANONICALIZING 3D MODELS We also apply our model to a vision task with a very different perceptual environment: pixels. Given a rendering of a 3D car, we would like to learn a visual program that "canonicalizes" the model with respect to its pose. Whatever the starting position, the program should generate a trajectory of actions that delivers the camera to the target view, e.g. frontal pose at a 15° elevation. For training data, we used renderings of the 3D car CAD models from (Fidler et al., 2012). | 1511.06279#21 | 1511.06279#23 | 1511.06279 | [
"1511.04834"
] |
1511.06279#23 | Neural Programmer-Interpreters | This is a nontrivial problem because different starting positions will require quite different trajectories to reach the target. Further complicating the problem is the fact that the model will need to generalize to different car models than it saw during training. We again use a scratch pad, but here it is a very simple read-only pad that only contains a target camera elevation and azimuth, i.e., the "canonical pose". Since observations come in the form of image pixels, we use a convolutional neural network f_CNN as the image encoder: f_enc(Q, x, i1, i2, a_t) = MLP([Q(1, i1), Q(2, i2), f_CNN(x), a_t(1), a_t(2), a_t(3)]), where x ∈ R^{H×W×3} is a car rendering at the current pose, Q ∈ R^{2×1×K} is the pad containing canonical azimuth and elevation, i1, i2 are the (fixed at 1) pointer locations, and K is the one-hot encoding dimension of pose coordinates. We set K = 24 corresponding to 15° pose increments. Note, critically, that our NPI model only has access to pixels of the rendering and the target pose, and is not provided the pose of query frames. We are also aware that one solution to this problem would be to train a pose classifier network and then find the shortest path to canonical pose via classical methods. That is also a sensible approach. However, our purpose here is to show that our method generalizes beyond the scratch pad domain to detailed images of 3D objects, and also to other environments with a single multi-task model. # 4.2 SAMPLE COMPLEXITY AND GENERALIZATION Both LSTMs and Neural Turing Machines can learn to perform sorting to a limited degree, although they have not been shown to generalize well to much longer arrays than were seen during training. However, we are interested not only in whether sorting can be accomplished, but whether a particular sorting algorithm (e.g. bubblesort) can be learned by the model, and how effectively in terms of sample complexity and generalization. We compare the generalization ability of our model to a flat sequence-to-sequence LSTM (Sutskever et al., 2014), using the same number of layers (2) and hidden units (256). Note that a flat (see footnote 2) version of NPI could also learn sorting of short arrays, but because bubblesort runs in O(N^2) for arrays of length N, the execution traces quickly become far too long to store the required number of LSTM states in memory. Our NPI architecture can train on much larger arrays by exploiting compositional structure; the memory requirements of any given subprogram can be restricted to O(N). Footnote 2: By flat in this case, we mean non-compositional, not making use of subprograms, and only making calls to ACT in order to swap values and move pointers. | 1511.06279#22 | 1511.06279#24 | 1511.06279 | [
"1511.04834"
] |
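A sketch of the canonicalization encoder just defined, with `cnn` and `mlp` as stand-ins for the learned networks; the K = 24 one-hot bins correspond to 15° increments, and the per-channel mean in the usage line is only a placeholder for real convolutional features.

```python
import numpy as np

def f_enc_canonicalize(pad, x_image, args, cnn, mlp):
    """Sketch of the vision encoder above: one-hot target azimuth and
    elevation from the read-only pad are concatenated with CNN features of
    the current rendering and the argument tuple, then fed to the MLP."""
    def one_hot(v, K=24):
        e = np.zeros(K)
        e[v] = 1.0
        return e
    target = [one_hot(pad[0, 0]), one_hot(pad[1, 0])]   # pointers fixed at 1
    x = np.concatenate(target + [cnn(x_image), np.asarray(args, dtype=float)])
    return mlp(x)

# Toy usage on a 128 x 128 rendering with placeholder networks.
rng = np.random.default_rng(2)
pad = np.zeros((2, 1), dtype=int)         # target azimuth and elevation bins
img = rng.random((128, 128, 3))
s_t = f_enc_canonicalize(pad, img, (0, 0, 0),
                         cnn=lambda x: x.mean(axis=(0, 1)),  # stand-in features
                         mlp=lambda x: x)
```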
1511.06279#24 | Neural Programmer-Interpreters | [Figures 5 and 6 graphics: two plots of sorting per-sequence accuracy, one versus the number of training examples and one versus sequence length (training used lengths up to 20), each comparing Seq2Seq and NPI.] Figure 5: Sample complexity. Test accuracy of sequence-to-sequence LSTM versus NPI on length-20 arrays of single-digit numbers. Note that NPI is able to mine and train on subprogram traces from each bubblesort example. | 1511.06279#23 | 1511.06279#25 | 1511.06279 | [
"1511.04834"
] |
1511.06279#25 | Neural Programmer-Interpreters | Figure 6: Strong vs. weak generalization. Test accuracy of sequence-to-sequence LSTM versus NPI on varying-length arrays of single-digit numbers. Both models were trained on arrays of single-digit numbers up to length 20. A strong indicator of whether a neural network has learned a program well is whether it can run the program on inputs of previously-unseen sizes. To evaluate this property, we train both the sequence-to-sequence LSTM and NPI to perform bubblesort on arrays of single-digit numbers from length 2 to length 20. | 1511.06279#24 | 1511.06279#26 | 1511.06279 | [
"1511.04834"
] |
1511.06279#26 | Neural Programmer-Interpreters | Compared to fixed-length inputs this raises the challenge level during training, but in exchange we can get a more flexible and generalizable sorting program. To handle variable-sized inputs, the state representation must have some information about input sequence length and the number of steps taken so far. For example, the main BUBBLESORT program naturally needs to call its helper function BUBBLE a number of times dependent on the sequence length. We enable this in our model by adding a third pointer that acts as a counter; each time BUBBLE is called the pointer is advanced by one step. The scratch pad environment also provides a bit indicating whether a pointer is at the start or end of a sequence, equivalent in purpose to end tokens used in a sequence-to-sequence model. For each length, we provided 64 example bubblesort traces, for a total of 1,216 examples. Then, we evaluated whether the network can learn to sort arrays beyond length 20. We found that the trained model generalizes well, and is capable of sorting arrays up to size 60; see Figure 6. At 60 and beyond, we observed a failure mode in which sweeps of pointers across the array would take the wrong number of steps, suggesting that the limiting performance factor is related to counting. In stark contrast, when provided with the 1,216 examples, the sequence-to-sequence LSTMs fail to generalize beyond arrays of length 25 as shown in Figure 6. To study sample complexity further, we fix the length of the arrays to 20 and vary the number of training examples. We see in Figure 5 that NPI starts learning with 2 examples and is able to sort almost perfectly with only 8 examples. The sequence-to-sequence model on the other hand requires 64 examples to start learning and only manages to sort well with over 250 examples. Figure 7 shows several example canonicalization trajectories generated by our model, starting from the leftmost car. The image encoder was a convolutional network with three passes of stride-2 convolution and pooling, trained on renderings of size 128 × | 1511.06279#25 | 1511.06279#27 | 1511.06279 | [
"1511.04834"
] |
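The counter-pointer mechanism just described determines the control flow of the learned BUBBLESORT. The sketch below shows that control flow only; `env`, `bubble`, and `reset` are illustrative stand-ins, not the learned programs themselves.

```python
def bubblesort(env, bubble, reset):
    """Schematic control flow for the variable-length case above: a third
    pointer acts as a counter, advancing once per BUBBLE call, and a
    pad-provided bit says when a pointer reaches the end of the sequence."""
    while not env.pointer_at_end(ptr=3):   # counter pointer not yet at end
        bubble(env)                        # one sweep over the array
        reset(env)                         # return the two data pointers
        env.advance(ptr=3)                 # count one completed sweep
```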
1511.06279#27 | Neural Programmer-Interpreters | 128. The canonical target pose in this case is frontal with 15° elevation. At test time, from an initial rendering, NPI is able to canonicalize cars of varying appearance from multiple starting positions. Importantly, it can generalize to car appearances not encountered in the training set as shown in Figure 7. # 4.3 LEARNING NEW PROGRAMS WITH A FIXED CORE One challenge for continual learning of neural-network-based agents is that training on new tasks and experiences can lead to degraded performance in old tasks. The learning of new tasks may require that the network weights change substantially, so care must be taken to avoid catastrophic forgetting (McCloskey & Cohen, 1989; O'Reilly et al., 2014). | 1511.06279#26 | 1511.06279#28 | 1511.06279 | [
"1511.04834"
] |
1511.06279#28 | Neural Programmer-Interpreters | Using NPI, one solution is to fix the weights of the core routing module, and only make sparse updates to the program memory. When adding a new program the core module's routing computation will be completely unaffected; all the learning for a new task occurs in program embedding space. Of course, the addition of new programs to the memory adds a new choice of program at each time step, and an old program could [Figure 7 graphics: generated canonicalization traces over car renderings, with program calls such as GOTO, HGOTO, LGOTO, RGOTO, VGOTO, UGOTO, DGOTO and ACT(LEFT), ACT(RIGHT), ACT(UP), ACT(DOWN).] Figure 7: Example canonicalization of several different test set cars. The network is able to generate and execute the appropriate plan based on the starting car image. This NPI was trained on trajectories starting at azimuth (-75°...75°), elevation (0°...60°) in 15° increments. The training trajectories target azimuth 0° and elevation 15°, as in the generated traces above. | 1511.06279#27 | 1511.06279#29 | 1511.06279 | [
"1511.04834"
] |
1511.06279#29 | Neural Programmer-Interpreters | mistakenly call a newly added program. To overcome this, when learning a new set of program vectors with a fixed core, in practice we train not only on example traces of the new program, but also traces of existing programs. Alternatively, a simpler approach is to prevent existing programs from calling subsequently added programs, allowing addition of new programs without ever looking back at training data for known programs. In either case, note that only the memory slots of the new programs are updated, and all other weights, including other program embeddings, are | 1511.06279#28 | 1511.06279#30 | 1511.06279 | [
"1511.04834"
] |
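The fixed-core protocol above amounts to masking gradient updates so that only newly added rows of the program memory change. The sketch below assumes a hypothetical `grad_fn` standing in for backpropagation through the frozen core; it is illustrative, not the training code used in the paper.

```python
import numpy as np

def train_new_program(memory, new_rows, traces, grad_fn, lr=1e-4, steps=1000):
    """Sparse updates to the program memory with a frozen core: the core
    weights and all existing program embeddings stay fixed; gradients touch
    only the newly added rows (e.g. MAX and RJMP)."""
    for _ in range(steps):
        trace = traces[np.random.randint(len(traces))]  # mix new + old traces
        g = grad_fn(memory, trace)        # d(loss)/d(memory), shape of memory
        mask = np.zeros_like(memory)
        mask[new_rows] = 1.0              # update new program embeddings only
        memory -= lr * g * mask
    return memory
```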
1511.06279#30 | Neural Programmer-Interpreters | fixed. Table 1 shows the result of adding a maximum-finding program MAX to a multitask NPI trained on addition, sorting and canonicalization. MAX first calls BUBBLESORT and then a new program RJMP, which moves pointers to the right of the sorted array, where the max element can be read. During training we froze all weights except for the two newly-added program embeddings. We find that NPI learns MAX perfectly without forgetting the other tasks. In particular, after training a single multi-task model as outlined in the following section, learning the MAX program with this fixed-core multi-task NPI results in no performance deterioration for all three tasks. 4.4 SOLVING MULTIPLE TASKS WITH A SINGLE NETWORK In this section we perform a controlled experiment to compare the performance of a multi-task NPI with several single-task NPI models. Table 1 shows the results for addition, sorting and canonicalizing 3D car models. We trained and evaluated on 10-digit numbers for addition, length-5 arrays for sorting, and up to four-step trajectories for canonicalization. As shown in Table 1, one multi-task NPI can learn all three programs (and necessarily the 21 subprograms) with comparable accuracy compared to each single-task NPI. Table 1: Per-sequence % accuracy. Addition: Single 100.0, Multi 97.0, +Max 97.0. Sorting: Single 100.0, Multi 100.0, +Max 100.0. Canon. seen car: Single 89.5, Multi 91.4, +Max 91.4. Canon. unseen: Single 88.7, Multi 89.9, +Max 89.9. Maximum: Single and Multi not applicable, +Max 100.0. "+ Max" indicates performance after addition of the additional max-finding subprograms to memory. "unseen" uses a test set with disjoint car models from the training set, while "seen car" uses the same car models but different trajectories. | 1511.06279#29 | 1511.06279#31 | 1511.06279 | [
"1511.04834"
] |
1511.06279#31 | Neural Programmer-Interpreters | # 5 CONCLUSION We have shown that the NPI can learn programs in very dissimilar environments with different affordances. In the context of sorting we showed that NPI exhibits very strong generalization in comparison to sequence-to-sequence LSTMs. We also showed how a trained NPI with a fixed core can continue to learn new programs without forgetting already learned programs. ACKNOWLEDGMENTS We sincerely thank Arun Nair and Ed Grefenstette for helpful suggestions. | 1511.06279#30 | 1511.06279#32 | 1511.06279 | [
"1511.04834"
] |
1511.06279#32 | Neural Programmer-Interpreters | # REFERENCES Anderson, Michael L. Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33:245-266, 2010. Andre, David and Russell, Stuart J. Programmable reinforcement learning agents. In Advances in Neural Information Processing Systems, pp. 1019-1025. 2001. Banzhaf, Wolfgang, Nordin, Peter, Keller, Robert E, and Francone, Frank D. | 1511.06279#31 | 1511.06279#33 | 1511.06279 | [
"1511.04834"
] |
1511.06279#33 | Neural Programmer-Interpreters | Genetic programming: An introduction, volume 1. Morgan Kaufmann, San Francisco, 1998. Dietterich, Thomas G. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227-303, 2000. Donnarumma, Francesco, Prevete, Roberto, and Trautteur, Giuseppe. Programming in the brain: A neural network theoretical framework. Connection Science, 24(2-3):71-90, 2012. | 1511.06279#32 | 1511.06279#34 | 1511.06279 | [
"1511.04834"
] |
1511.06279#34 | Neural Programmer-Interpreters | Donnarumma, Francesco, Prevete, Roberto, Chersi, Fabian, and Pezzulo, Giovanni. A programmer-interpreter neural network architecture for prefrontal cognitive control. International Journal of Neural Systems, 25(6):1550017, 2015. Fidler, Sanja, Dickinson, Sven, and Urtasun, Raquel. 3D object detection and viewpoint estimation with a deformable 3D cuboid model. In Advances in Neural Information Processing Systems, 2012. Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014. Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. Joulin, Armand and Mikolov, Tomas. | 1511.06279#33 | 1511.06279#35 | 1511.06279 | [
"1511.04834"
] |
1511.06279#35 | Neural Programmer-Interpreters | Inferring algorithmic patterns with stack-augmented recurrent nets. In NIPS, 2015. Kaiser, Łukasz and Sutskever, Ilya. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015. Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. In ICLR, 2015. Kolter, Zico, Abbeel, Pieter, and Ng, Andrew Y. Hierarchical apprenticeship learning with application to quadruped locomotion. In Advances in Neural Information Processing Systems, pp. 769-776. 2008. Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015. McCloskey, Michael and Cohen, Neal J. Catastrophic interference in connectionist networks: The sequential learning problem. In The Psychology of Learning and Motivation, volume 24, pp. 109-165. 1989. | 1511.06279#34 | 1511.06279#36 | 1511.06279 | [
"1511.04834"
] |
1511.06279#36 | Neural Programmer-Interpreters | Mou, Lili, Li, Ge, Liu, Yuxuan, Peng, Hao, Jin, Zhi, Xu, Yan, and Zhang, Lu. Building program vector representations for deep learning. arXiv preprint arXiv:1409.3358, 2014. Neelakantan, Arvind, Le, Quoc V, and Sutskever, Ilya. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015. O'Reilly, Randall C., Bhattacharyya, Rajan, Howard, Michael D., and Ketz, Nicholas. Complementary learning systems. Cognitive Science, 38(6):1229- | 1511.06279#35 | 1511.06279#37 | 1511.06279 | [
"1511.04834"
] |
1511.06279#37 | Neural Programmer-Interpreters | 1248, 2014. Rothkopf, Constantin A. and Ballard, Dana H. Modular inverse reinforcement learning for visuomotor behavior. Biological Cybernetics, 107(4):477-490, 2013. Rumelhart, D. E., Hinton, G. E., and McClelland, J. L. Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1. Chapter A General Framework for Parallel Distributed Processing, pp. 45- | 1511.06279#36 | 1511.06279#38 | 1511.06279 | [
"1511.04834"
] |
1511.06279#38 | Neural Programmer-Interpreters | 76. MIT Press, 1986. Schaul, Tom, Horgan, Daniel, Gregor, Karol, and Silver, David. Universal value function approximators. In International Conference on Machine Learning, 2015. Schmidhuber, Jürgen. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139, 1992. Schneider, Walter and Chein, Jason M. Controlled and automatic processing: behavior, theory, and biological mechanisms. Cognitive Science, 27(3):525-559, 2003. Subramanian, Kaushik, Isbell, Charles, and Thomaz, Andrea. | 1511.06279#37 | 1511.06279#39 | 1511.06279 | [
"1511.04834"
] |
1511.06279#39 | Neural Programmer-Interpreters | Learning options through human interaction. In IJCAI Workshop on Agents Learning Interactively from Human Teachers, 2011. Sutskever, Ilya and Hinton, Geoffrey E. Using matrices to model symbolic relationship. In Advances in Neural Information Processing Systems, pp. 1593-1600. 2009. Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014. Sutton, Richard S., Precup, Doina, and Singh, Satinder. | 1511.06279#38 | 1511.06279#40 | 1511.06279 | [
"1511.04834"
] |
1511.06279#40 | Neural Programmer-Interpreters | Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999. Vinyals, Oriol, Fortunato, Meire, and Jaitly, Navdeep. Pointer networks. Advances in Neural Information Processing Systems (NIPS), 2015. Zaremba, Wojciech and Sutskever, Ilya. Learning to execute. arXiv preprint arXiv:1410.4615, 2014. Zaremba, Wojciech and Sutskever, Ilya. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015. Zaremba, Wojciech, Mikolov, Tomas, Joulin, Armand, and Fergus, Rob. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015. | 1511.06279#39 | 1511.06279 | [
"1511.04834"
] |