# Batch Normalized Recurrent Neural Networks

the network. However, it also overfits more than the baseline version. The best results are reported in Table 2. For both experiments we observed faster training and greater overfitting when using our version of batch normalization. This last effect is less prevalent in the speech experiment, perhaps because the training set is much larger, or perhaps because the frame-wise normalization is less effective than the sequence-wise one: in the language modeling task we predict one character at a time, whereas we predict the whole sequence in the speech experiment. Batch normalization also allows for higher learning rates in feedforward networks; however, since we only applied batch normalization to parts of the network, higher learning rates didn't work well because they affected the un-normalized parts as well. Our experiments suggest that applying batch normalization to the input-to-hidden connections in RNNs can improve the conditioning of the optimization problem. Future directions include whitening input-to-hidden connections [10] and normalizing the hidden state instead of just a portion of the network.
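The input-to-hidden normalization discussed above can be sketched in NumPy. This is a minimal illustration under our own naming and dimensions (not the authors' code): batch normalization is applied only to the input-to-hidden pre-activation, and the recurrent hidden-to-hidden path is left untouched.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Frame-wise normalization: one mean/variance per feature over the batch
    mean, var = x.mean(axis=0), x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def rnn_step_bn(x_t, h_prev, W_x, W_h, b, gamma, beta):
    # BN on the input-to-hidden term only; the hidden-to-hidden term
    # h_prev @ W_h is left un-normalized, as in the main experiments
    return np.tanh(batch_norm(x_t @ W_x, gamma, beta) + h_prev @ W_h + b)

rng = np.random.default_rng(0)
B, D, H = 4, 3, 5                       # batch size, input dim, hidden dim
W_x = rng.normal(size=(D, H))
W_h = rng.normal(size=(H, H))
b, gamma, beta = np.zeros(H), np.ones(H), np.zeros(H)

h = np.zeros((B, H))
for x_t in rng.normal(size=(7, B, D)):  # unroll over 7 time steps
    h = rnn_step_bn(x_t, h, W_x, W_h, b, gamma, beta)
```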
# Acknowledgments

Part of this work was funded by Samsung. We also want to thank Nervana Systems for providing GPUs.

# References

[1] Sergey Ioffe and Christian Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.

[2] Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton, "Speech recognition with deep recurrent neural networks," in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013, pp. 6645–6649.

[3] Ilya Sutskever, Oriol Vinyals, and Quoc Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112.

[4] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[5] Tomáš Mikolov, "Statistical language models based on neural networks," Presentation at Google, Mountain View, 2nd April, 2012.

[6] Sepp Hochreiter and Jürgen Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.

[7] Will Williams, Niranjani Prasad, David Mrva, Tom Ash, and Tony Robinson, "Scaling recurrent neural network language models," arXiv preprint arXiv:1502.00512, 2015.

[8] Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al., "Deep Speech: Scaling up end-to-end speech recognition," arXiv preprint arXiv:1412.5567, 2014.

[9] Yann A. LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller, "Efficient backprop,"
in Neural Networks: Tricks of the Trade, pp. 9–48. Springer, 2012.

[10] Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and Koray Kavukcuoglu, "Natural neural networks," arXiv preprint arXiv:1507.00210, 2015.

[11] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision (IJCV), pp. 1–42, April 2015.

[12] Mike Schuster and Kuldip K. Paliwal,
"Bidirectional recurrent neural networks," Signal Processing, IEEE Transactions on, vol. 45, no. 11, pp. 2673–2681, 1997.

[13] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio, "How to construct deep recurrent neural networks," arXiv preprint arXiv:1312.6026, 2013.

[14] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio, "On the difficulty of training recurrent neural networks," arXiv preprint arXiv:1211.5063, 2012.

[15] Felix A. Gers, Nicol N. Schraudolph, and Jürgen Schmidhuber,
"Learning precise timing with LSTM recurrent networks," The Journal of Machine Learning Research, vol. 3, pp. 115–143, 2003.

[16] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[17] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals, "Recurrent neural network regularization," arXiv preprint arXiv:1409.2329, 2014.

[18] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio, "Theano: new features and speed improvements," Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

[19] B. van Merriënboer, D. Bahdanau, V. Dumoulin, D. Serdyuk, D. Warde-Farley, J. Chorowski, and Y. Bengio,
"Blocks and Fuel: Frameworks for deep learning," ArXiv e-prints, June 2015.

[20] Douglas B. Paul and Janet M. Baker, "The design for the Wall Street Journal-based CSR corpus," in Proceedings of the Workshop on Speech and Natural Language. Association for Computational Linguistics, 1992, pp. 357–362.

[21] Alan Graves, Navdeep Jaitly, and Abdel-rahman Mohamed, "Hybrid speech recognition with deep bidirectional LSTM," in Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, 2013, pp. 273–278.

[22] Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks," in International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256.

[23] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini, "Building a large annotated corpus of English: The Penn Treebank," Computational Linguistics, vol. 19, no. 2, pp. 313–330, 1993.

Model | Train | Valid
---|---|---
Best Baseline | 1.05 | 1.10
Best Batch Norm | 1.07 | 1.11

Table 3: Best frame-wise cross-entropy for the best baseline network and for the best batch normalized one.

# A Experimentations with Normalization Inside the Recurrence
In our first experiments we investigated if batch normalization can be applied in the same way as in a feedforward network (equation 17). We tried it on a language modelling task on the PennTreebank dataset, where the goal was to predict the next characters of a fixed-length sequence of 100 symbols. The network is composed of a lookup table of dimension 250 followed by 3 layers of simple recurrent networks with 250 hidden units each. A softmax layer of dimension 50 is added on top. In the batch normalized networks, we apply batch normalization to the hidden-to-hidden transition, as in equation 17, meaning that we compute one mean and one variance for each of the 250 features at each time step. For inference, we also keep track of the statistics for each time step. However, we used the same γ and β for each time step.

The lookup table is randomly initialized using an isotropic Gaussian with zero mean and unit variance. All the other matrices of the network are initialized using the Glorot scheme [22] and all the biases are set to zero. We used SGD with momentum. We performed a random search over the learning rate (distributed in the range [0.0001, 1]), the momentum (with possible values of 0.5, 0.8, 0.9, 0.95, 0.995), and the batch size (32, 64 or 128). We let each experiment run for 20 epochs. A total of 52 experiments were performed.

In every experiment that we ran, the performance of the batch normalized networks was always slightly worse than (or at best equivalent to) that of the baseline networks, except for the ones where the learning rate is too high and the baseline diverges while the batch normalized one is still able to train. Figure 3 shows an example of a working experiment. We observed that in practically all the experiments that converged, the normalization was actually harming the performance. Table 3 shows the results of the best baseline and batch normalized networks.

We can observe that both best networks have similar performances. The settings for the best baseline are: learning rate 0.42, momentum 0.95, batch size 32. The settings for the best batch normalized network are: learning rate 3.71e-4, momentum 0.995, batch size 128.
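The hidden-to-hidden variant described above (statistics per time step, one shared γ and β) might look like the following sketch; names and shapes are ours, not the authors' code.

```python
import numpy as np

def bn_recurrent(preacts, gamma, beta, eps=1e-5):
    """Normalize hidden-to-hidden pre-activations of shape (T, B, H):
    a separate mean/variance per feature at each time step, but a single
    gamma/beta shared across all time steps."""
    out = np.empty_like(preacts)
    stats = []                              # per-step stats, kept for inference
    for t, z in enumerate(preacts):
        mu, var = z.mean(axis=0), z.var(axis=0)
        stats.append((mu, var))
        out[t] = gamma * (z - mu) / np.sqrt(var + eps) + beta
    return out, stats

T, B, H = 10, 32, 250                       # 250 units, as in the layers above
preacts = np.random.default_rng(0).normal(size=(T, B, H))
normed, stats = bn_recurrent(preacts, gamma=np.ones(H), beta=np.zeros(H))
```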
Those results suggest that this way of applying batch normalization in recurrent networks is not optimal. It seems that batch normalization hurts the training procedure. It may be due to the fact that we estimate new statistics at each time step, or because of the repeated application of γ and β during the recurrent procedure, which could lead to exploding or vanishing gradients. We will investigate more in depth what happens in the batch normalized networks, especially during the back-propagation.

Figure 3: Typical training curves (cross-entropy vs. epochs) obtained during the grid search. The baseline network is in blue and the batch normalized one in red. For this experiment, the hyper-parameters are: learning rate 7.8e-4, momentum 0.5, batch size 64.
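The random search over hyper-parameters described in this appendix can be sketched as follows; sampling the learning rate log-uniformly over [0.0001, 1] is our assumption, since the text only states the interval.

```python
import random

def sample_config(rng):
    return {
        "learning_rate": 10 ** rng.uniform(-4, 0),  # assumed log-uniform
        "momentum": rng.choice([0.5, 0.8, 0.9, 0.95, 0.995]),
        "batch_size": rng.choice([32, 64, 128]),
    }

rng = random.Random(0)
configs = [sample_config(rng) for _ in range(52)]   # 52 runs, as in the text
```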
arXiv:1510.00149v5 [cs.CV] 15 Feb 2016

Published as a conference paper at ICLR 2016

# DEEP COMPRESSION: COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION AND HUFFMAN CODING

Song Han
Stanford University, Stanford, CA 94305, USA
[email protected]

Huizi Mao
Tsinghua University, Beijing, 100084, China
[email protected]
William J. Dally
Stanford University, Stanford, CA 94305, USA
NVIDIA, Santa Clara, CA 95050, USA
[email protected]

# ABSTRACT

Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization and Huffman coding, which work together to reduce the storage requirement of neural networks by 35× to 49× without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9× to 13×; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35×, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49×, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has 3×
to 4× layerwise speedup and 3× to 7× better energy efficiency.

# 1 INTRODUCTION

Deep neural networks have become the state-of-the-art technique for computer vision tasks (Krizhevsky et al., 2012) (Simonyan & Zisserman, 2014). Though these neural networks are very powerful, the large number of weights consumes considerable storage and memory bandwidth. For example, the AlexNet Caffemodel is over 200MB, and the VGG-16 Caffemodel is over 500MB (BVLC).
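A back-of-envelope check of those sizes: AlexNet's roughly 61 million parameters (its count is given later, in Section 5.2), stored as 32-bit floats, already account for the bulk of the 200+ MB file.

```python
params = 61_000_000           # AlexNet parameter count (Section 5.2)
megabytes = params * 4 / 1e6  # 4 bytes per 32-bit weight
print(megabytes)              # 244.0 MB of raw weights, before any compression
```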
This makes it difficult to deploy deep neural networks on mobile systems. First, for many mobile-first companies such as Baidu and Facebook, various apps are updated via different app stores, and they are very sensitive to the size of the binary files. For example, the App Store has the restriction "apps above 100 MB will not download until you connect to Wi-Fi". As a result, a feature that increases the binary size by 100MB will receive much more scrutiny than one that increases it by 10MB. Although having deep neural networks running on mobile has many great
Figure 1: The three-stage compression pipeline: pruning, quantization and Huffman coding. Pruning reduces the number of weights by 10×, while quantization further improves the compression rate: between 27× and 31×. Huffman coding gives more compression: between 35× and 49×.
The compression rate already includes the meta-data for the sparse representation. The compression scheme doesn't incur any accuracy loss.

features such as better privacy, less network bandwidth and real-time processing, the large storage overhead prevents deep neural networks from being incorporated into mobile apps.

The second issue is energy consumption. Running large neural networks requires a lot of memory bandwidth to fetch the weights and a lot of computation to do dot products, which in turn consumes considerable energy. Mobile devices are battery constrained, making power-hungry applications such as deep neural networks hard to deploy.

Energy consumption is dominated by memory access. Under 45nm CMOS technology, a 32-bit floating point add consumes 0.9pJ, a 32-bit SRAM cache access takes 5pJ, while a 32-bit DRAM memory access takes 640pJ, which is 3 orders of magnitude more than an add operation. Large networks do not fit in on-chip storage and hence require the more costly DRAM accesses. Running a 1 billion connection neural network, for example, at 20fps would require (20Hz)(1G)(640pJ) = 12.8W just for DRAM access, well beyond the power envelope of a typical mobile device. Our goal is to reduce the storage and energy required to run inference on such large networks so they can be deployed on mobile devices.

To achieve this goal, we present "deep compression": a three-stage pipeline (Figure 1) to reduce the storage required by neural networks in a manner that preserves the original accuracy. First, we prune the network by removing the redundant connections, keeping only the most informative connections. Next, the weights are quantized so that multiple connections share the same weight, thus only the codebook (effective weights) and the indices need to be stored. Finally, we apply Huffman coding to take advantage of the biased distribution of effective weights.

Our main insight is that pruning and trained quantization are able to compress the network without interfering with each other, thus leading to a surprisingly high compression rate. It makes the required storage so small (a few megabytes) that all weights can be cached on chip instead of going to off-chip DRAM, which is energy consuming.
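The 12.8W DRAM estimate above is plain arithmetic:

```python
# 1 billion connections, 20 inferences per second, 640 pJ per DRAM access
watts = 20 * 1e9 * 640e-12
print(watts)  # 12.8 W for DRAM accesses alone
```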
Based on "deep compression", the EIE hardware accelerator (Han et al., 2016) was later proposed that works on the compressed model, achieving significant speedup and energy efficiency improvement.

# 2 NETWORK PRUNING

Network pruning has been widely studied to compress CNN models. In early work, network pruning proved to be a valid way to reduce the network complexity and over-fitting (LeCun et al., 1989; Hanson & Pratt, 1989; Hassibi et al., 1993; Ström, 1997). Recently Han et al. (2015) pruned state-of-the-art CNN models with no loss of accuracy. We build on top of that approach. As shown on the left side of Figure 1, we start by learning the connectivity via normal network training. Next, we prune the small-weight connections: all connections with weights below a threshold are removed from the network. Finally, we retrain the network to learn the final weights for the remaining sparse connections. Pruning reduced the number of parameters by 9× for the AlexNet model and 13× for the VGG-16 model.
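A minimal NumPy sketch of this magnitude pruning (illustrative only; the paper's actual implementation is a mask on Caffe blobs, Section 5): weights below a threshold are zeroed, and the mask keeps them at zero while the surviving weights are retrained.

```python
import numpy as np

def prune_by_threshold(W, threshold):
    # Magnitude pruning: zero small weights, return the binary keep-mask
    mask = (np.abs(W) >= threshold).astype(W.dtype)
    return W * mask, mask

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))
W_pruned, mask = prune_by_threshold(W, threshold=1.0)

# Retraining step: gradients of pruned connections are masked out,
# so pruned weights stay exactly zero
grad = rng.normal(size=W.shape)
W_pruned -= 0.01 * grad * mask
```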
Figure 2: Representing the matrix sparsity with relative index. Padding filler zeros to prevent overflow.

Figure 3: Weight sharing by scalar quantization (top) and centroids fine-tuning (bottom).

We store the sparse structure that results from pruning using compressed sparse row (CSR) or compressed sparse column (CSC) format, which requires 2a + n + 1 numbers, where a is the number of non-zero elements and n is the number of rows or columns. To compress further, we store the index difference instead of the absolute position, and encode this difference in 8 bits for conv layers and 5 bits for fc layers. When we need an index difference larger than the bound, we use the zero padding solution shown in Figure 2: in the case when the difference exceeds 8, the largest 3-bit (as an example) unsigned number, we add a filler zero.
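The relative-index scheme of Figure 2 might be sketched as below; the exact offset convention and the (diff, value) container are our assumptions, not the paper's storage format.

```python
def encode_relative(entries, bound=8):
    """entries: sorted (absolute_index, value) pairs for the non-zeros.
    Emits (index_diff, value) pairs; a gap wider than `bound` (the largest
    3-bit diff in the example) is bridged with filler zeros of value 0."""
    out, prev = [], 0
    for idx, val in entries:
        diff = idx - prev
        while diff > bound:
            out.append((bound, 0.0))   # filler zero entry
            diff -= bound
        out.append((diff, val))
        prev = idx
    return out

# A non-zero at position 1 and the next at position 15: the gap of 14
# exceeds 8, so a filler zero is emitted to bridge it.
print(encode_relative([(1, 3.4), (15, 0.9)]))
# -> [(1, 3.4), (8, 0.0), (6, 0.9)]
```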
# 3 TRAINED QUANTIZATION AND WEIGHT SHARING

Network quantization and weight sharing further compress the pruned network by reducing the number of bits required to represent each weight. We limit the number of effective weights we need to store by having multiple connections share the same weight, and then fine-tune those shared weights.

Weight sharing is illustrated in Figure 3. Suppose we have a layer that has 4 input neurons and 4 output neurons; the weight is a 4 × 4 matrix. On the top left is the 4 × 4 weight matrix, and on the bottom left is the 4 × 4 gradient matrix. The weights are quantized to 4 bins (denoted with 4 colors); all the weights in the same bin share the same value, thus for each weight we then need to store only a small index into a table of shared weights. During update, all the gradients are grouped by the color and summed together, multiplied by the learning rate and subtracted from the shared centroids from the last iteration. For pruned AlexNet, we are able to quantize to 8 bits (256 shared weights) for each CONV layer, and 5 bits (32 shared weights) for each FC layer without any loss of accuracy.

To calculate the compression rate, given k clusters, we only need log2(k) bits to encode the index. In general, for a network with n connections, with each connection represented by b bits, constraining the connections to have only k shared weights will result in a compression rate of:

$$r = \frac{nb}{n\log_2(k) + kb} \qquad (1)$$

For example, Figure 3 shows the weights of a single-layer neural network with four input units and four output units. There are 4 × 4 = 16 weights originally, but there are only 4 shared weights: similar weights are grouped together to share the same value. Originally we need to store 16 weights, each
Figure 4: Left:
Three different methods for centroids initialization. Right: Distribution of weights (blue) and distribution of codebook before (green cross) and after fine-tuning (red dot).

has 32 bits; now we need to store only 4 effective weights (blue, green, red and orange), each of 32 bits, together with 16 2-bit indices, giving a compression rate of 16 × 32/(4 × 32 + 2 × 16) = 3.2.

3.1 WEIGHT SHARING

We use k-means clustering to identify the shared weights for each layer of a trained network, so that all the weights that fall into the same cluster will share the same weight. Weights are not shared across layers. We partition n original weights W = {w1, w2, ..., wn} into k clusters C = {c1, c2, ..., ck}, n ≫ k, so as to minimize the within-cluster sum of squares (WCSS):

$$\arg\min_{C} \sum_{i=1}^{k} \sum_{w \in c_i} |w - c_i|^2 \qquad (2)$$
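A sketch of the per-layer one-dimensional k-means (plain Lloyd iterations with Forgy initialization; the toy weight matrix is illustrative, not Figure 3's exact values), ending with the Equation (1) rate for this toy layer:

```python
import numpy as np
from math import log2

def kmeans_1d(weights, k, iters=50, seed=0):
    """Minimize the WCSS of Eq. (2) over flattened weights. Returns the k
    centroids (the codebook) and each weight's cluster index."""
    rng = np.random.default_rng(seed)
    w = weights.ravel()
    centroids = rng.choice(w, size=k, replace=False)        # Forgy init
    for _ in range(iters):
        idx = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = w[idx == j].mean()           # cluster means
    return centroids, idx.reshape(weights.shape)

W = np.random.default_rng(42).normal(size=(4, 4))           # toy 4x4 layer
centroids, idx = kmeans_1d(W, k=4)
W_shared = centroids[idx]          # every weight replaced by its centroid

# Eq. (1) for this layer: n=16 weights, b=32 bits, k=4 shared weights
rate = 16 * 32 / (16 * log2(4) + 4 * 32)                    # = 3.2
```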
Different from HashNet (Chen et al., 2015), where weight sharing is determined by a hash function before the network sees any training data, our method determines weight sharing after a network is fully trained, so that the shared weights approximate the original network.

3.2 INITIALIZATION OF SHARED WEIGHTS

Centroid initialization impacts the quality of clustering and thus affects the network's prediction accuracy. We examine three initialization methods: Forgy (random), density-based, and linear initialization. In Figure 4 we plotted the original weights' distribution of the conv3 layer in AlexNet (CDF in blue, PDF in red). The weights form a bimodal distribution after network pruning. On the bottom it plots the effective weights (centroids) with 3 different initialization methods (shown in blue, red and yellow). In this example, there are 13 clusters.

Forgy (random) initialization randomly chooses k observations from the data set and uses these as the initial centroids. The initialized centroids are shown in yellow. Since there are two peaks in the bimodal distribution, the Forgy method tends to concentrate around those two peaks.

Density-based initialization linearly spaces the CDF of the weights in the y-axis, then finds the horizontal intersection with the CDF, and finally finds the vertical intersection on the x-axis, which becomes a centroid, as shown in blue dots. This method makes the centroids denser around the two peaks, but more scattered than the Forgy method.

Linear initialization linearly spaces the centroids between the [min, max] of the original weights. This initialization method is invariant to the distribution of the weights and is the most scattered compared with the former two methods.

Larger weights play a more important role than smaller weights (Han et al., 2015), but there are fewer of these large weights. Thus for both Forgy initialization and density-based initialization, very few centroids have large absolute values, which results in poor representation of these few large weights. Linear initialization does not suffer from this problem. The experiment section compares the accuracy
Figure 5: Distribution for weight (left; x-axis: weight index, 32 effective weights) and index (right; x-axis: sparse matrix location index, max diff is 32). The distribution is biased.
of different initialization methods after clustering and fine-tuning, showing that linear initialization works best.

3.3 FEED-FORWARD AND BACK-PROPAGATION

The centroids of the one-dimensional k-means clustering are the shared weights. There is one level of indirection during the feed-forward and back-propagation phases when looking up the weight table. An index into the shared weight table is stored for each connection. During back-propagation, the gradient for each shared weight is calculated and used to update the shared weight. This procedure is shown in Figure 3.

We denote the loss by L, the weight in the ith column and jth row by W_ij, the centroid index of element W_ij by I_ij, and the kth centroid of the layer by C_k. Using the indicator function 1(.), the gradient of the centroids is calculated as:

$$\frac{\partial L}{\partial C_k} = \sum_{i,j} \frac{\partial L}{\partial W_{ij}} \frac{\partial W_{ij}}{\partial C_k} = \sum_{i,j} \frac{\partial L}{\partial W_{ij}} \, \mathbb{1}(I_{ij} = k) \qquad (3)$$

# 4 HUFFMAN CODING
A Huffman code is an optimal prefix code commonly used for lossless data compression (Van Leeuwen, 1976). It uses variable-length codewords to encode source symbols. The table is derived from the occurrence probability of each symbol. More common symbols are represented with fewer bits.

Figure 5 shows the probability distribution of quantized weights and the sparse matrix index of the last fully connected layer in AlexNet. Both distributions are biased: most of the quantized weights are distributed around the two peaks; the sparse matrix index difference is rarely above 20. Experiments show that Huffman coding these non-uniformly distributed values saves 20%–30% of network storage.
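A small sketch of deriving Huffman code lengths from a biased symbol distribution, enough to estimate the saving over fixed-length codes (the frequencies are illustrative, not the paper's data):

```python
import heapq
from collections import Counter

def huffman_lengths(symbols):
    """Build a Huffman tree over symbol frequencies and return the code
    length (in bits) per symbol."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tie-break counter, {symbol: depth})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}  # one level deeper
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

# Biased distribution -> common symbols get short codes
data = [0] * 50 + [1] * 30 + [2] * 15 + [3] * 5
lengths = huffman_lengths(data)
fixed = 2 * len(data)                    # 2-bit fixed-length codes
huff = sum(lengths[s] for s in data)     # total Huffman-coded bits
```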
# 5 EXPERIMENTS

We pruned, quantized, and Huffman encoded four networks: two on the MNIST and two on the ImageNet datasets. The network parameters and accuracy¹ before and after pruning are shown in Table 1. The compression pipeline saves network storage by 35× to 49× across different networks without loss of accuracy. The total size of AlexNet decreased from 240MB to 6.9MB, which is small enough to be put into on-chip SRAM, eliminating the need to store the model in energy-consuming DRAM memory.

Training is performed with the Caffe framework (Jia et al., 2014). Pruning is implemented by adding a mask to the blobs to mask out the update of the pruned connections. Quantization and weight sharing are implemented by maintaining a codebook structure that stores the shared weights, and grouping by index after calculating the gradient of each layer. Each shared weight is updated with all the gradients that fall into that bucket. Huffman coding doesn't require training and is implemented offline after all the fine-tuning is finished.
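The "group by index and sum" centroid update described above (Equation (3)) reduces to a few lines; the numbers are toy values, and this is our sketch rather than the paper's Caffe code.

```python
import numpy as np

def centroid_gradients(dL_dW, I, k):
    # Eq. (3): sum dL/dW_ij over all positions whose cluster index is c
    return np.array([dL_dW[I == c].sum() for c in range(k)])

dL_dW = np.array([[0.1, -0.2], [0.3, 0.4]])  # gradients w.r.t. weights
I = np.array([[0, 1], [1, 0]])               # cluster index per weight
C = np.array([0.5, -0.5])                    # shared centroids (codebook)
C = C - 0.1 * centroid_gradients(dL_dW, I, k=2)   # one SGD step
```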
5.1 LENET-300-100 AND LENET-5 ON MNIST

We first experimented on the MNIST dataset with the LeNet-300-100 and LeNet-5 networks (LeCun et al., 1998). LeNet-300-100 is a fully connected network with two hidden layers, with 300 and 100 neurons each, which achieves a 1.6% error rate on MNIST. LeNet-5 is a convolutional network that has two convolutional layers and two fully connected layers, which achieves a 0.8% error rate on MNIST. Table 2 and Table 3 show the statistics of the compression pipeline. The compression rate includes the overhead of the codebook and sparse indexes. Most of the saving comes from pruning and quantization (compressed 32×), while Huffman coding gives a marginal gain (compressed 40×).

¹ Reference model is from the Caffe model zoo; accuracy is measured without data augmentation.

Table 1: The compression pipeline can save 35× to 49× parameter storage with no loss of accuracy.

Network | Top-1 Error | Top-5 Error | Parameters | Compress Rate
---|---|---|---|---
LeNet-300-100 Ref | 1.64% | - | 1070 KB | -
LeNet-300-100 Compressed | 1.58% | - | 27 KB | 40×
LeNet-5 Ref | 0.80% | - | 1720 KB | -
LeNet-5 Compressed | 0.74% | - | 44 KB | 39×
AlexNet Ref | 42.78% | 19.73% | 240 MB | -
AlexNet Compressed | 42.78% | 19.70% | 6.9 MB | 35×
VGG-16 Ref | 31.50% | 11.32% | 552 MB | -
VGG-16 Compressed | 31.17% | 10.91% | 11.3 MB | 49×

Table 2: Compression statistics for LeNet-300-100. P: pruning, Q: quantization, H: Huffman coding.

Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q) | Compress rate (P+Q+H)
---|---|---|---|---|---|---|---|---
ip1 | 235K | 8% | 6 | 4.4 | 5 | 3.7 | 3.1% | 2.32%
ip2 | 30K | 9% | 6 | 4.4 | 5 | 4.3 | 3.8% | 3.04%
ip3 | 1K | 26% | 6 | 4.3 | 5 | 3.2 | 15.7% | 12.70%
Total | 266K | 8% (12×) | 6 | 5.1 | 5 | 3.7 | 3.1% (32×) | 2.49% (40×)

Table 3: Compression statistics for LeNet-5. P: pruning, Q: quantization, H: Huffman coding.

Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q) | Compress rate (P+Q+H)
---|---|---|---|---|---|---|---|---
conv1 | 0.5K | 66% | 8 | 7.2 | 5 | 1.5 | 78.5% | 67.45%
conv2 | 25K | 12% | 8 | 7.2 | 5 | 3.9 | 6.0% | 5.28%
ip1 | 400K | 8% | 5 | 4.5 | 5 | 4.5 | 2.7% | 2.45%
ip2 | 5K | 19% | 5 | 5.2 | 5 | 3.7 | 6.9% | 6.13%
Total | 431K | 8% (12×) | 5.3 | 4.1 | 5 | 4.4 | 3.05% (33×) | 2.55% (39×)

5.2 ALEXNET ON IMAGENET

We further examine the performance of deep compression on the ImageNet ILSVRC-2012 dataset, which has 1.2M training examples and 50k validation examples. We use the AlexNet Caffe model as the reference model, which has 61 million parameters and achieved a top-1 accuracy of 57.2% and a top-5 accuracy of 80.3%. Table 4 shows that AlexNet can be compressed to 2.88% of its original size without impacting accuracy. There are 256 shared weights in each CONV layer, which are encoded with 8 bits, and 32 shared weights in each FC layer, which are encoded with only 5 bits. The relative sparse index is encoded with 4 bits. Huffman coding compresses an additional 22%, resulting in 35× compression in total.

5.3 VGG-16 ON IMAGENET

With promising results on AlexNet, we also looked at a larger, more recent network, VGG-16 (Simonyan & Zisserman, 2014), on the same ILSVRC-2012 dataset. VGG-16 has far more convolutional layers but still only three fully-connected layers. Following a similar methodology, we aggressively compressed both convolutional and fully-connected layers to realize a significant reduction in the number of effective weights, shown in Table 5. The VGG-16 network as a whole has been compressed by 49×
"1504.08083"
] |
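The per-layer compression rates quoted above follow from simple bookkeeping: a pruned layer stores only its surviving weights, each as a small codebook index plus a relative position index, together with the codebook itself. A rough back-of-the-envelope sketch (a toy helper of our own, ignoring Huffman coding and the index-padding overhead, which is why it lands slightly below the reported 3.0% for AlexNet fc6):

```python
def compressed_ratio(n_weights, density, index_bits, codebook_bits):
    """Approximate (compressed bits) / (original fp32 bits) for one layer."""
    nnz = int(n_weights * density)                 # weights surviving pruning
    payload = nnz * (codebook_bits + index_bits)   # per-weight codebook + position index
    codebook = (2 ** codebook_bits) * 32           # one fp32 value per centroid
    return (payload + codebook) / (n_weights * 32)

# AlexNet fc6 from Table 4: 38M weights, 9% kept, 4-bit index, 5-bit codebook index.
ratio = compressed_ratio(38_000_000, 0.09, 4, 5)
print(f"{ratio:.1%}")
```

The same arithmetic with 8-bit CONV codebooks reproduces the rough shape of the CONV-layer rates as well.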
1510.00149#21 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | . Weights in the CONV layers are represented with 8 bits, and FC layers use 5 bits, which does not impact the accuracy. The two largest fully-connected layers can each be pruned to less than 1.6% of their original size. This reduction Published as a conference paper at ICLR 2016 Table 4: Compression statistics for AlexNet. P: pruning, Q: quantization, H: Huffman coding. Layer conv1 conv2 conv3 conv4 conv5 fc6 fc7 fc8 Total #Weights 35K 307K 885K 663K 442K 38M 17M 4M 61M Weights% (P) 84% 38% 35% 37% 37% 9% 9% 25% 11% (9×) Weight bits (P+Q) 8 8 8 8 8 5 5 5 5.4 Weight bits (P+Q+H) 6.3 5.5 5.1 5.2 5.6 3.9 3.6 4 4 Index bits (P+Q) 4 4 4 4 4 4 4 4 4 Index bits (P+Q+H) 1.2 2.3 2.6 2.5 2.5 3.2 3.7 3.2 3.2 Compress rate (P+Q) 32.6% 14.5% 13.1% 14.1% 14.0% 3.0% 3.0% 7.3% 3.7% (27×) Compress rate (P+Q+H) 20.53% 9.43% 8.44% 9.11% 9.43% 2.39% 2.46% 5.85% 2.88% (35×) Table 5: | 1510.00149#20 | 1510.00149#22 | 1510.00149 | [
"1504.08083"
] |
1510.00149#22 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Compression statistics for VGG-16. P: pruning, Q:quantization, H:Huffman coding. Layer conv1 1 conv1 2 conv2 1 conv2 2 conv3 1 conv3 2 conv3 3 conv4 1 conv4 2 conv4 3 conv5 1 conv5 2 conv5 3 fc6 fc7 fc8 Total #Weights 2K 37K 74K 148K 295K 590K 590K 1M 2M 2M 2M 2M 2M 103M 17M 4M 138M Weights% (P) 58% 22% 34% 36% 53% 24% 42% 32% 27% 34% 35% 29% 36% 4% 4% 23% 7.5%(13Ã | 1510.00149#21 | 1510.00149#23 | 1510.00149 | [
"1504.08083"
] |
1510.00149#23 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | ) Weigh bits (P+Q) 8 8 8 8 8 8 8 8 8 8 8 8 8 5 5 5 6.4 Weight bits (P+Q+H) 6.8 6.5 5.6 5.9 4.8 4.6 4.6 4.6 4.2 4.4 4.7 4.6 4.6 3.6 4 4 4.1 Index bits (P+Q) 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 Index bits (P+Q+H) 1.7 2.6 2.4 2.3 1.8 2.9 2.2 2.6 2.9 2.5 2.5 2.7 2.3 3.5 4.3 3.4 3.1 Compress rate (P+Q) 40.0% 9.8% 14.3% 14.7% 21.7% 9.7% 17.0% 13.1% 10.9% 14.0% 14.3% 11.7% 14.8% 1.6% 1.5% 7.1% 3.2% (31Ã ) Compress rate (P+Q+H) 29.97% 6.99% 8.91% 9.31% 11.15% 5.67% 8.96% 7.29% 5.93% 7.47% 8.00% 6.52% 7.79% 1.10% 1.25% 5.24% 2.05% (49Ã ) | 1510.00149#22 | 1510.00149#24 | 1510.00149 | [
"1504.08083"
] |
1510.00149#24 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | is critical for real-time image processing, where there is little reuse of these layers across images (unlike batch processing). This is also critical for fast object detection algorithms where one CONV pass is used by many FC passes. The reduced layers will fit in an on-chip SRAM and have modest bandwidth requirements. Without the reduction, the bandwidth requirements are prohibitive. # 6 DISCUSSIONS 6.1 PRUNING AND QUANTIZATION WORKING TOGETHER Figure 6 shows the accuracy at different compression rates for pruning and quantization together or individually. When working individually, as shown in the purple and yellow lines, accuracy of the pruned network begins to drop significantly when compressed below 8% of its original size; accuracy of the quantized network also begins to drop significantly when compressed below 8% of its original size. But when combined, as shown in the red line, the network can be compressed to 3% of its original size with no loss of accuracy. The far right side compares the result of SVD, which is inexpensive but has a poor compression rate. The three plots in Figure 7 show how accuracy drops with fewer bits per connection for CONV layers (left), FC layers (middle) and all layers (right). Each plot reports both top-1 and top-5 accuracy. Dashed lines applied only quantization without pruning; solid lines applied both quantization and pruning. There is very little difference between the two. This shows that pruning works well with quantization. Quantization works well on the pruned network because unpruned AlexNet has 60 million weights to quantize, while pruned AlexNet has only 6.7 million weights to quantize. Given the same number of centroids, the latter has less error. | 1510.00149#23 | 1510.00149#25 | 1510.00149 | [
"1504.08083"
] |
1510.00149#25 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | [Figure 6 chart: accuracy loss (0.5% down to -4.5%) vs. model size ratio after compression (2% to 20%), with curves for Pruning + Quantization, Pruning Only, Quantization Only, and SVD.] Figure 6: Accuracy vs. compression rate under different compression methods. Pruning and quantization work best when combined. [Figure 7 charts: top-1 and top-5 accuracy (0% to 85%) vs. number of bits per effective weight (1 to 8 bits), for quantized-only and pruned + quantized networks, with panels for all FC layers, all CONV layers, and all layers.] | 1510.00149#24 | 1510.00149#26 | 1510.00149 | [
"1504.08083"
] |
1510.00149#26 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | [Figure 7 charts, continued: top-1 and top-5 accuracy (0% to 85%) vs. number of bits per effective weight (1 to 8 bits) in all CONV layers and in all layers, for quantized-only and pruned + quantized networks.] Figure 7: | 1510.00149#25 | 1510.00149#27 | 1510.00149 | [
"1504.08083"
] |
1510.00149#27 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Pruning doesn't hurt quantization. Dashed: quantization on unpruned network. Solid: quantization on pruned network; accuracy begins to drop at the same number of quantization bits whether or not the network has been pruned. Although pruning reduces the number of parameters, quantization still works as well as in the unpruned network, or even better (the 3-bit case in the left figure). [Figure 8 charts: top-1 accuracy (50% to 58%) and top-5 accuracy (71% to 81%) vs. number of bits per effective weight (2 to 8 bits), for uniform (linear) init, density init, and random init.] Figure 8: Accuracy of different initialization methods. Left: top-1 accuracy. Right: top-5 accuracy. Linear initialization gives the best result. | 1510.00149#26 | 1510.00149#28 | 1510.00149 | [
"1504.08083"
] |
1510.00149#28 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | The first two plots in Figure 7 show that CONV layers require more bits of precision than FC layers. For CONV layers, accuracy drops significantly below 4 bits, while FC layers are more robust: accuracy does not drop significantly until 2 bits. 6.2 CENTROID INITIALIZATION Figure 8 compares the accuracy of the three different initialization methods with respect to top-1 accuracy (left) and top-5 accuracy (right). The network is quantized to 2–8 bits as shown on the x-axis. Linear initialization outperforms density initialization and random initialization in all cases except at 3 bits. The initial centroids of linear initialization spread equally across the x-axis, from the min value to the max value. This helps to maintain the large weights, as the large weights play a more important role than smaller ones, which is also shown in network pruning (Han et al., 2015). Neither random nor density-based initialization retains large centroids. With these initialization methods, large weights are clustered to the small centroids because there are few large weights. In contrast, linear initialization allows large weights a better chance to form a large centroid. | 1510.00149#27 | 1510.00149#29 | 1510.00149 | [
"1504.08083"
] |
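The three centroid initialization schemes compared above are easy to state concretely. The sketch below is our own illustrative reconstruction (not the authors' code): it initializes k centroids by random sampling, by CDF-equalized (density) spacing, and by linear spacing between the min and max weight, then quantizes each weight to its nearest centroid (weight sharing):

```python
import numpy as np

def init_centroids(w, k, method):
    if method == "random":
        return np.random.default_rng(0).choice(w, size=k, replace=False)
    if method == "density":  # equal probability mass between centroids
        return np.quantile(w, np.linspace(0, 1, k + 2)[1:-1])
    if method == "linear":   # equal spacing from min to max weight
        return np.linspace(w.min(), w.max(), k)
    raise ValueError(method)

def quantize(w, centroids):
    """Weight sharing: snap each weight to its nearest centroid."""
    return centroids[np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)]

w = np.random.default_rng(1).normal(size=10_000)   # toy bell-shaped weight distribution
lin = init_centroids(w, 16, "linear")
den = init_centroids(w, 16, "density")
# Linear init places a centroid at the largest-magnitude weight; density init does not.
print(lin.max() == w.max(), den.max() < w.max())
```

On a bell-shaped weight distribution the density centroids crowd the middle, which is exactly why the text says large weights get clustered to small centroids under the other two schemes.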
1510.00149#29 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | [Figure 9 chart: speedup normalized to CPU (up to 100×) for CPU Dense (baseline), CPU Pruned, GPU Dense, GPU Pruned, TK1 Dense and TK1 Pruned on AlexNet_Fc6, AlexNet_Fc7, AlexNet_Fc8, VGGNet_Fc6, VGGNet_Fc7, VGGNet_Fc8 and their geometric mean.] Figure 9: Compared with the original network, pruned network layers achieved 3× speedup on CPU, 3.5× on GPU and 4.2× on mobile GPU on average. Batch size = 1, targeting real-time processing. Performance numbers normalized to CPU. [Figure 10 chart: energy efficiency normalized to CPU for the same hardware and layer combinations.] Figure 10: Compared with the original network, pruned network layers take 7× less energy on CPU, 3.3× less on GPU and 4.2× less on mobile GPU on average. Batch size = 1, targeting real-time processing. Energy numbers normalized to CPU. 6.3 SPEEDUP AND ENERGY EFFICIENCY Deep Compression targets extremely latency-focused applications running on mobile, which require real-time inference, such as pedestrian detection on an embedded processor inside an autonomous vehicle. Waiting for a batch to assemble significantly adds latency. So when benchmarking performance and energy efficiency, we consider the case when batch size = 1. The cases with batching are given in Appendix A. Fully connected layers dominate the model size (more than 90%) and are compressed the most by Deep Compression (96% of weights pruned in VGG-16). In state-of-the-art object detection algorithms such as Fast R-CNN (Girshick, 2015), up to 38% of computation time is consumed by FC layers on the uncompressed model. | 1510.00149#28 | 1510.00149#30 | 1510.00149 | [
"1504.08083"
] |
1510.00149#30 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | So it's interesting to benchmark on FC layers, to see the effect of Deep Compression on performance and energy. Thus we set up our benchmark on the FC6, FC7, FC8 layers of AlexNet and VGG-16. In the non-batched case, the activation matrix is a vector with just one column, so the computation boils down to dense / sparse matrix-vector multiplication for the original / pruned model, respectively. Since current BLAS libraries on CPU and GPU don't support indirect look-up and relative indexing, we didn't benchmark the quantized model. We compare three different off-the-shelf hardware platforms: the NVIDIA GeForce GTX Titan X and the Intel Core i7 5930K as desktop processors (same package as the NVIDIA Digits Dev Box) and the NVIDIA Tegra K1 as a mobile processor. To run the benchmark on GPU, we used cuBLAS GEMV for the original dense layer. For the pruned sparse layer, we stored the sparse matrix in CSR format, and used the cuSPARSE CSRMV kernel, which is optimized for sparse matrix-vector multiplication on GPU. To run the benchmark on CPU, we used MKL CBLAS GEMV for the original dense model and MKL SPBLAS CSRMV for the pruned sparse model. To compare power consumption between different systems, it is important to measure power in a consistent manner (NVIDIA, b). For our analysis, we compare pre-regulation power of the entire application processor (AP) / SOC and DRAM combined. On CPU, the benchmark runs on a single socket with a single Haswell-E class Core i7-5930K processor. CPU socket and DRAM power are as reported by the pcm-power utility provided by Intel. For GPU, we used the nvidia-smi utility to report the power of the Titan X. For the mobile GPU, we use a Jetson TK1 development board and measured the total power consumption with a power meter. We assume 15% AC-to-DC conversion loss, 85% regulator efficiency and 15% power consumed by peripheral components (NVIDIA, a) to report the AP+DRAM power for Tegra K1. | 1510.00149#29 | 1510.00149#31 | 1510.00149 | [
"1504.08083"
] |
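The cuSPARSE/MKL kernels mentioned above compute a sparse matrix-vector product over the CSR layout (values, column indices, and row pointers). A minimal pure-NumPy sketch of the same computation, for illustration only, checked against the dense product:

```python
import numpy as np

def dense_to_csr(A):
    """Convert a dense matrix to CSR arrays (data, indices, indptr)."""
    data, indices, indptr = [], [], [0]
    for row in A:
        nz = np.flatnonzero(row)
        data.extend(row[nz])
        indices.extend(nz)
        indptr.append(len(indices))  # running count of stored non-zeros
    return np.array(data), np.array(indices), np.array(indptr)

def csrmv(data, indices, indptr, x):
    """y = A @ x touching only the non-zero entries of A."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        s, e = indptr[i], indptr[i + 1]
        y[i] = data[s:e] @ x[indices[s:e]]
    return y

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 128))
A[np.abs(A) < 1.2] = 0.0            # magnitude-prune most entries
x = rng.normal(size=128)
data, indices, indptr = dense_to_csr(A)
assert np.allclose(csrmv(data, indices, indptr, x), A @ x)
```

The storage side mirrors the compute side: only `data`, `indices`, and `indptr` are kept, which is where the pruned model's smaller memory footprint comes from.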
1510.00149#31 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Table 6: Accuracy of AlexNet with different aggressiveness of weight sharing and quantization. 8/5-bit quantization has no loss of accuracy; 8/4-bit quantization, which is more hardware friendly, has a negligible accuracy loss of 0.01%; to be really aggressive, 4/2-bit quantization resulted in 1.99% and 2.60% loss of accuracy. #CONV bits / #FC bits 32bits / 32bits 8 bits / 5 bits 8 bits / 4 bits 4 bits / 2 bits Top-1 Error Top-5 Error 42.78% 42.78% 42.79% 44.77% 19.73% 19.70% 19.73% 22.33% Top-1 Error Increase - 0.00% 0.01% 1.99% Top-5 Error Increase - -0.03% 0.00% 2.60% The ratio of memory access to computation is different with and without batching. When the input activations are batched into a matrix, the computation becomes matrix-matrix multiplication, where locality can be improved by blocking. The matrix can be blocked to fit in caches and reused efficiently. In this case, the amount of memory access is O(n²) and that of computation is O(n³), so the ratio between memory access and computation is on the order of 1/n. In real-time processing, when batching is not allowed, the input activation is a single vector and the computation is matrix-vector multiplication. In this case, the amount of memory access is O(n²) and the computation is O(n²): memory access and computation are of the same magnitude (as opposed to 1/n). That indicates MV is more memory-bounded than MM. So reducing the memory footprint is critical for the non-batching case. Figure 9 illustrates the speedup of pruning on different hardware. There are 6 columns for each benchmark, showing the computation time of CPU / GPU / TK1 on the dense / pruned network. Time is normalized to CPU. When batch size = 1, pruned network layers obtained 3× to 4× | 1510.00149#30 | 1510.00149#32 | 1510.00149 | [
"1504.08083"
] |
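The O(n²)-versus-O(n³) argument above can be made concrete by counting FLOPs and bytes moved for an n×n layer. The toy model below is our simplification (fp32 operands; weights plus input and output activations each counted once): arithmetic intensity grows with batch size, which is why the non-batched matrix-vector case is memory-bound.

```python
def arithmetic_intensity(n, batch):
    """FLOPs per byte for an n x n fully connected layer at a given batch size."""
    flops = 2 * n * n * batch                  # one multiply-add per weight per sample
    bytes_moved = 4 * (n * n + 2 * n * batch)  # weights + input/output activations, fp32
    return flops / bytes_moved

mv = arithmetic_intensity(4096, 1)    # matrix-vector: every weight used once
mm = arithmetic_intensity(4096, 64)   # matrix-matrix: each weight reused 64 times
print(f"{mv:.2f} vs {mm:.2f} FLOPs/byte")
```

At batch size 1 the intensity sits near 0.5 FLOPs/byte, matching the text's claim that memory access and computation are of the same magnitude; batching raises it by roughly the batch size.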
1510.00149#32 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | speedup over the dense network on average, because the pruned network has a smaller memory footprint and alleviates the data transfer overhead, especially for large matrices that are unable to fit into the caches. For example, VGG-16's FC6 layer, the largest layer in our experiment, contains 25088 × 4096 × 4 bytes ≈ 400 MB of data, which is far beyond the capacity of the L3 cache. In latency-tolerating applications, batching improves memory locality, where weights can be blocked and reused in matrix-matrix multiplication. In this scenario, the pruned network no longer shows its advantage. We give detailed timing results in Appendix A. Figure 10 illustrates the energy efficiency of pruning on different hardware. We multiply power consumption by computation time to get energy consumption, then normalize to CPU to get energy efficiency. When batch size = 1, pruned network layers consume 3× to 7× less energy than the dense network on average. As reported by nvidia-smi, GPU utilization is 99% for both dense and sparse cases. 6.4 RATIO OF WEIGHTS, INDEX AND CODEBOOK Pruning makes the weight matrix sparse, so extra space is needed to store the indexes of non-zero elements. Quantization adds storage for a codebook. The experiment section has already included these two factors. Figure 11 shows the breakdown of the three different components when quantizing four networks. Since on average both the weights and the sparse indexes are encoded with 5 bits, their storage is roughly half and half. The overhead of the codebook is very small and often negligible. [Figure 11 chart: per-network storage breakdown into weight, index, and codebook for AlexNet, VGGNet, LeNet-300-100 and LeNet-5.] Figure 11: Storage ratio of weight, index and codebook. | 1510.00149#31 | 1510.00149#33 | 1510.00149 | [
"1504.08083"
] |
1510.00149#33 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Table 7: Comparison with other compression methods on AlexNet. (Collins & Kohli, 2014) reduced the parameters by 4× but with inferior accuracy. Deep Fried Convnets (Yang et al., 2014) worked on fully connected layers and reduced the parameters by less than 4×. SVD saves parameters but suffers from accuracy loss as large as 2%. Network pruning (Han et al., 2015) reduced the parameters by 9×, not including index overhead. On other networks similar to AlexNet, (Denton et al., 2014) exploited the linear structure of convnets and compressed the network by 2.4× to 13.4× layer-wise, with 0.9% accuracy loss on compressing a single layer. (Gong et al., 2014) experimented with vector quantization and compressed the network by 16× to 24×, incurring 1% accuracy loss. Top-1 Error Top-5 Error 42.78% 41.93% 42.90% 44.40% 44.02% 42.77% 42.78% 42.78% 19.73% - - - 20.56% 19.67% 19.70% 19.70% Parameters 240MB 131MB 64MB 61MB 47.6MB 27MB 8.9MB 6.9MB Compress Rate 1× 2× 3.7× 4× 5× 9× 27× 35× # 7 RELATED WORK Neural networks are typically over-parametrized, and there is significant redundancy in deep learning models (Denil et al., 2013). This results in a waste of both computation and memory. There have been various proposals to remove the redundancy: Vanhoucke et al. (2011) explored a fixed-point implementation with 8-bit integer (vs 32-bit floating point) activations. Hwang & Sung (2014) proposed an optimization method for the fixed-point network with ternary weights and 3-bit activations. | 1510.00149#32 | 1510.00149#34 | 1510.00149 | [
"1504.08083"
] |
1510.00149#34 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Anwar et al. (2015) quantized the neural network using L2 error minimization and achieved better accuracy on the MNIST and CIFAR-10 datasets. Denton et al. (2014) exploited the linear structure of the neural network by finding an appropriate low-rank approximation of the parameters and keeping the accuracy within 1% of the original model. The empirical success in this paper is consistent with the theoretical study of random-like sparse networks with +1/0/-1 weights (Arora et al., 2014), which have been proved to enjoy nice properties (e.g. reversibility), and to allow a provably polynomial-time algorithm for training. Much work has been focused on binning the network parameters into buckets, so that only the values in the buckets need to be stored. HashedNets (Chen et al., 2015) reduce model sizes by using a hash function to randomly group connection weights, so that all connections within the same hash bucket share a single parameter value. In their method, the weight binning is pre-determined by the hash function, instead of being learned through training, which doesn' | 1510.00149#33 | 1510.00149#35 | 1510.00149 | [
"1504.08083"
] |
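The HashedNets hashing trick described above can be sketched in a few lines: a virtual m×n weight matrix is backed by a small real parameter vector, with a fixed hash of each (row, column) position choosing which shared parameter to use. This is a toy illustration only; a deterministic integer mix stands in for the hash function used by the actual method:

```python
import numpy as np

def hashed_weight_matrix(params, m, n):
    """Expand a small parameter vector into a virtual m x n weight matrix."""
    k = len(params)
    W = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            bucket = (i * 1_000_003 + j * 8_191) % k   # toy deterministic hash
            W[i, j] = params[bucket]
    return W

params = np.random.default_rng(0).normal(size=32)  # only 32 real parameters
W = hashed_weight_matrix(params, 20, 50)           # 1000 virtual weights
print(len(np.unique(W)))  # at most 32 distinct values
```

The contrast the text draws is visible here: the bucket of each position is fixed by the hash up front, whereas Deep Compression's codebook assignments come out of training-time clustering.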
1510.00149#35 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | t capture the nature of images. Gong et al. (2014) compressed deep convnets using vector quantization, which resulted in 1% accuracy loss. Both methods studied only the fully connected layer, ignoring the convolutional layers. There have been other attempts to reduce the number of parameters of neural networks by replacing the fully connected layer with global average pooling. The Network in Network architecture (Lin et al., 2013) and GoogLeNet (Szegedy et al., 2014) achieve state-of-the-art results on several benchmarks by adopting this idea. However, transfer learning, i.e. reusing features learned on the ImageNet dataset and applying them to new tasks by only fine-tuning the fully connected layers, is more difficult with this approach. This problem is noted by Szegedy et al. (2014) and motivates them to add a linear layer on top of their networks to enable transfer learning. Network pruning has been used both to reduce network complexity and to reduce over-fi | 1510.00149#34 | 1510.00149#36 | 1510.00149 | [
"1504.08083"
] |
1510.00149#36 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | tting. An early approach to pruning was biased weight decay (Hanson & Pratt, 1989). Optimal Brain Damage (LeCun et al., 1989) and Optimal Brain Surgeon (Hassibi et al., 1993) prune networks to reduce the number of connections based on the Hessian of the loss function, and suggest that such pruning is more accurate than magnitude-based pruning such as weight decay. A recent work (Han et al., 2015) successfully pruned several state-of-the-art large-scale networks and showed that the number of parameters could be reduced by an order of magnitude. There are also attempts to reduce the number of activations for both compression and acceleration (Van Nguyen et al., 2015). | 1510.00149#35 | 1510.00149#37 | 1510.00149 | [
"1504.08083"
] |
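Magnitude-based pruning, the scheme Han et al. (2015) build on, simply keeps the largest-magnitude weights. A minimal illustrative sketch (our own, without the iterative prune-and-retrain loop that the actual method relies on to recover accuracy):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) > threshold, W, 0.0)

W = np.random.default_rng(0).normal(size=(256, 256))
pruned = magnitude_prune(W, 0.9)
print(np.count_nonzero(pruned) / W.size)  # roughly 0.1 of weights survive
```

In contrast, the Hessian-based criteria of Optimal Brain Damage/Surgeon rank connections by estimated loss increase rather than by |w|, at a much higher computational cost.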
1510.00149#37 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | # 8 FUTURE WORK While the pruned network has been benchmarked on various hardware, the quantized network with weight sharing has not, because the off-the-shelf cuSPARSE and MKL SPBLAS libraries do not support indirect matrix entry lookup, nor is the relative index in CSC or CSR format supported. So the full advantage of Deep Compression, fitting the model in cache, is not fully realized. A software solution is to write customized GPU kernels that support this. A hardware solution is to build a custom ASIC architecture specialized to traverse the sparse and quantized network structure, which also supports customized quantization bit widths. We expect this architecture to have energy dominated by on-chip SRAM access instead of off-chip DRAM access. | 1510.00149#36 | 1510.00149#38 | 1510.00149 | [
"1504.08083"
] |
1510.00149#38 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | # 9 CONCLUSION We have presented "Deep Compression", which compresses neural networks without affecting accuracy. Our method operates by pruning the unimportant connections, quantizing the network using weight sharing, and then applying Huffman coding. We highlight our experiments on AlexNet, which reduced the weight storage by 35× without loss of accuracy. We show similar results for VGG-16 and LeNet networks, compressed by 49× and 39× without loss of accuracy. This leads to a smaller storage requirement for putting convnets into mobile apps. After Deep Compression, the sizes of these networks fit into on-chip SRAM cache (5 pJ/access) rather than requiring off-chip DRAM memory (640 pJ/access). This potentially makes deep neural networks more energy-efficient to run on mobile. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. | 1510.00149#37 | 1510.00149#39 | 1510.00149 | [
"1504.08083"
] |
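The 5 pJ/access SRAM versus 640 pJ/access DRAM figures quoted in the conclusion imply the energy stakes directly. The arithmetic below is our own deliberately crude model (one access per 32-bit word, combining the 2.88% AlexNet compression rate from Table 4 with the per-access energies from the text; real access patterns and caching are ignored):

```python
SRAM_PJ, DRAM_PJ = 5.0, 640.0   # energy per 32-bit access, as stated in the text

def weight_fetch_energy_uj(n_params, bits_per_param, in_sram):
    accesses = n_params * bits_per_param / 32          # 32-bit words fetched
    return accesses * (SRAM_PJ if in_sram else DRAM_PJ) * 1e-6

# Compressed AlexNet (2.88% of fp32 size, fits in SRAM) vs
# uncompressed AlexNet (61M fp32 params, fetched from DRAM):
compressed = weight_fetch_energy_uj(61_000_000, 32 * 0.0288, True)
uncompressed = weight_fetch_energy_uj(61_000_000, 32, False)
print(uncompressed / compressed)
```

Under these assumptions the weight-fetch energy gap is the product of the 128× per-access gap and the ~35× size reduction, i.e. several thousand-fold.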
1510.00149#39 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | # REFERENCES Anwar, Sajid, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point optimization of deep convolutional neural networks for object recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1131–1135. IEEE, 2015. Arora, Sanjeev, Bhaskara, Aditya, Ge, Rong, and Ma, Tengyu. | 1510.00149#38 | 1510.00149#40 | 1510.00149 | [
"1504.08083"
] |
1510.00149#40 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Provable bounds for learning some deep representations. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, pp. 584–592, 2014. # BVLC. Caffe model zoo. URL http://caffe.berkeleyvision.org/model_zoo. Chen, Wenlin, Wilson, James T., Tyree, Stephen, Weinberger, Kilian Q., and Chen, Yixin. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015. Collins, Maxwell D and Kohli, Pushmeet. Memory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442, 2014. Denil, Misha, Shakibi, Babak, Dinh, Laurent, de Freitas, Nando, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pp. 2148–2156, 2013. Denton, Emily L, Zaremba, Wojciech, Bruna, Joan, LeCun, Yann, and Fergus, Rob. | 1510.00149#39 | 1510.00149#41 | 1510.00149 | [
"1504.08083"
] |
1510.00149#41 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pp. 1269–1277, 2014. Girshick, Ross. Fast R-CNN. arXiv preprint arXiv:1504.08083, 2015. Gong, Yunchao, Liu, Liu, Yang, Ming, and Bourdev, Lubomir. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014. Han, Song, Pool, Jeff, Tran, John, and Dally, William J. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems, 2015. Han, Song, Liu, Xingyu, Mao, Huizi, Pu, Jing, Pedram, Ardavan, Horowitz, Mark A, and Dally, William J. EIE: Effi | 1510.00149#40 | 1510.00149#42 | 1510.00149 | [
"1504.08083"
] |
1510.00149#42 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | cient inference engine on compressed deep neural network. arXiv preprint arXiv:1602.01528, 2016. Hanson, Stephen José and Pratt, Lorien Y. Comparing biases for minimal network construction with back-propagation. In Advances in neural information processing systems, pp. 177–185, 1989. Hassibi, Babak, Stork, David G, et al. | 1510.00149#41 | 1510.00149#43 | 1510.00149 | [
"1504.08083"
] |
1510.00149#43 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Second order derivatives for network pruning: Optimal brain surgeon. Advances in neural information processing systems, pp. 164–164, 1993. Hwang, Kyuyeon and Sung, Wonyong. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pp. 1–6. IEEE, 2014. Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross, Guadarrama, Sergio, and Darrell, Trevor. Caffe: | 1510.00149#42 | 1510.00149#44 | 1510.00149 | [
"1504.08083"
] |
1510.00149#44 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012. LeCun, Yann, Denker, John S, Solla, Sara A, Howard, Richard E, and Jackel, Lawrence D. Optimal brain damage. In NIPS, volume 89, 1989. LeCun, Yann, Bottou, Leon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. | 1510.00149#43 | 1510.00149#45 | 1510.00149 | [
"1504.08083"
] |
1510.00149#45 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv:1312.4400, 2013. NVIDIA. Technical brief: NVIDIA jetson TK1 development kit bringing GPU-accelerated computing to embedded systems, a. URL http://www.nvidia.com. NVIDIA. Whitepaper: GPU-based deep learning inference: A performance and power analysis, b. URL http://www.nvidia.com/object/white-papers.html. Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. | 1510.00149#44 | 1510.00149#46 | 1510.00149 | [
"1504.08083"
] |
1510.00149#46 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Ström, Nikko. Phoneme probability estimation with dynamic sparsely connected artificial neural networks. The Free Speech Journal, 1(5):1–41, 1997. Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. Van Leeuwen, Jan. On the construction of Huffman trees. In ICALP, pp. 382–410, 1976. Van Nguyen, Hien, Zhou, Kevin, and Vemulapalli, Raviteja. | 1510.00149#45 | 1510.00149#47 | 1510.00149 | [
"1504.08083"
] |
1510.00149#47 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Cross-domain synthesis of medical images using efficient location-sensitive deep network. In Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2015, pp. 677–684. Springer, 2015. Vanhoucke, Vincent, Senior, Andrew, and Mao, Mark Z. Improving the speed of neural networks on CPUs. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011. Yang, Zichao, Moczulski, Marcin, Denil, Misha, de Freitas, Nando, Smola, Alex, Song, Le, and Wang, Ziyu. Deep fried convnets. arXiv preprint arXiv:1412.7149, 2014. | 1510.00149#46 | 1510.00149#48 | 1510.00149 | [
"1504.08083"
] |
1510.00149#48 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | A APPENDIX: DETAILED TIMING / POWER REPORTS OF DENSE & SPARSE NETWORK LAYERS Table 8: Average time on different layers. To avoid variance, we measured the time spent on each layer for 4096 input samples, and averaged the time over the input samples. For GPU, the time consumed by cudaMalloc and cudaMemcpy is not counted. For batch size = 1, gemv is used; for batch size = 64, gemm is used. For the sparse case, csrmv and csrmm are used, respectively. | 1510.00149#47 | 1510.00149#49 | 1510.00149 | [
"1504.08083"
] |
1510.00149#49 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Time (us) Titan X Core i7-5930k Tegra K1 dense (batch=1) sparse (batch=1) dense (batch=64) sparse (batch=64) dense (batch=1) sparse (batch=1) dense (batch=64) sparse (batch=64) dense (batch=1) sparse (batch=1) dense (batch=64) sparse (batch=64) AlexNet FC6 541.5 134.8 19.8 94.6 7516.2 3066.5 318.4 1417.6 12437.2 2879.3 1663.6 4003.9 AlexNet FC7 243.0 65.8 8.9 51.5 6187.1 1282.1 188.9 682.1 5765.0 1256.5 2056.8 1372.8 AlexNet FC8 80.5 54.6 5.9 23.2 1134.9 890.5 45.8 407.7 2252.1 837.0 298.0 576.7 VGG16 FC6 1467.8 167.0 53.6 121.5 35022.8 3774.3 1056.0 1780.3 35427.0 4377.2 2001.4 8024.8 VGG16 FC7 243.0 39.8 8.9 24.4 5372.8 545.1 188.3 274.9 5544.3 626.3 2050.7 660.2 Table 9: Power consumption of different layers. We measured the Titan X GPU power with nvidia-smi, Core i7-5930k CPU power with pcm-power, and Tegra K1 mobile GPU power with an external power meter (scaled to AP+DRAM, see paper discussion). During power measurement, we repeated each computation multiple times in order to get stable numbers. On CPU, dense matrix multiplications consume 2× the energy of sparse ones because they are accelerated with multi-threading. | 1510.00149#48 | 1510.00149#50 | 1510.00149 | [
"1504.08083"
] |
1510.00149#50 | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Power (Watts) TitanX Core i7-5930k Tegra K1 dense (batch=1) sparse (batch=1) dense (batch=64) sparse (batch=64) dense (batch=1) sparse (batch=1) dense (batch=64) sparse (batch=64) dense (batch=1) sparse (batch=1) dense (batch=64) sparse (batch=64) AlexNet FC6 157 181 168 156 83.5 42.3 85.4 37.2 5.1 5.9 5.6 5.0 AlexNet FC7 159 183 173 158 72.8 37.4 84.7 37.1 5.1 6.1 5.6 4.6 AlexNet FC8 159 162 166 163 77.6 36.5 101.6 38 5.4 5.8 6.3 5.1 VGG16 FC6 166 189 173 160 70.6 38.0 83.1 39.5 5.3 5.6 5.4 4.8 VGG16 FC7 163 166 173 158 74.6 37.4 97.1 36.6 5.3 6.3 5.6 4.7 14 VGG16 FC8 80.5 48.0 5.9 22.0 774.2 777.3 45.7 363.1 2243.1 745.1 483.9 544.1 VGG16 FC8 159 162 167 161 77.0 36.0 87.5 38.2 5.4 5.8 6.3 5.0 | 1510.00149#49 | 1510.00149 | [
"1504.08083"
] |
|
1509.03005#0 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | arXiv:1509.03005v1 [cs.LG] 10 Sep 2015 # Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies David Balduzzi School of Mathematics and Statistics Victoria University of Wellington Wellington, New Zealand [email protected] Muhammad Ghifary School of Engineering and Computer Science Victoria University of Wellington Wellington, New Zealand [email protected] # Abstract This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. | 1509.03005#1 | 1509.03005 | [
"1502.02251"
] |
|
1509.03005#1 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm. Keywords: policy gradient, reinforcement learning, deep learning, gradient estimation, temporal difference learning # 1. Introduction | 1509.03005#0 | 1509.03005#2 | 1509.03005 | [
"1502.02251"
] |
1509.03005#2 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | In reinforcement learning, an agent learns to maximize its discounted future rewards (Sutton and Barto, 1998). The structure of the environment is initially unknown, so the agent must both learn the rewards associated with various action-sequence pairs and optimize its policy. A natural approach is to tackle the subproblems separately via a critic and an actor (Barto et al., 1983; Konda and Tsitsiklis, 2000), where the critic estimates the value of different actions and the actor maximizes rewards by following the policy gradient (Sutton et al., 1999; Peters and Schaal, 2006; Silver et al., 2014). Policy gradient methods have proven useful in settings with high-dimensional continuous action spaces, especially when task-relevant policy representations are at hand (Deisenroth et al., 2011; Levine et al., 2015; Wahlström et al., 2015). In the supervised setting, representation or deep learning algorithms have recently demonstrated remarkable performance on a range of benchmark problems. However, the problem of | 1509.03005#1 | 1509.03005#3 | 1509.03005 | [
"1502.02251"
] |
1509.03005#3 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | learning features for reinforcement learning remains comparatively underdeveloped. The most dramatic recent success uses Q-learning over finite action spaces, and essentially builds a neural network critic (Mnih et al., 2015). Here, we consider continuous action spaces, and develop an algorithm that simultaneously learns the value function and its gradient, which it then uses to find the optimal policy. # 1.1 Outline This paper presents Value-Gradient Backpropagation (GProp), a deep actor-critic algorithm for continuous action spaces with compatible function approximation. Our starting point is the deterministic policy gradient and associated compatibility conditions derived in (Silver et al., 2014). Roughly speaking, the compatibility conditions are that C1. the critic approximate the gradient of the value-function and C2. the approximation is closely related to the gradient of the policy. See Theorem 2 for details. We identify and solve two problems with prior work on policy gradients, relating to the two compatibility conditions: P1. Temporal diff | 1509.03005#2 | 1509.03005#4 | 1509.03005 | [
"1502.02251"
] |
1509.03005#4 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | erence methods do not directly estimate the gradient of the value function. Instead, temporal difference methods are applied to learn an approximation of the form $Q^v(s) + Q^w(s, a)$, where $Q^v(s)$ estimates the value of a state, given the current policy, and $Q^w(s, a)$ estimates the advantage from deviating from the current policy (Sutton et al., 1999; Peters and Schaal, 2006; Deisenroth et al., 2011; Silver et al., 2014). Although the advantage is related to the gradient of the value function, it is not the same thing. | 1509.03005#3 | 1509.03005#5 | 1509.03005 | [
"1502.02251"
] |
1509.03005#5 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | P2. The representations used for compatible approximation scale badly on neural networks. The second problem is that prior work has restricted to advantage functions constructed from a particular state-action representation, $\phi(s, a) = \nabla_\theta \mu_\theta(s)\,(a - \mu_\theta(s))$, that depends on the gradient of the policy. The representation is easy to handle for linear policies. However, if the policy is a neural network, then the standard state-action representation ties the critic too closely to the actor and depends on the internal structure of the actor, Example 2. As a result, weight updates cannot be performed by backpropagation, see section 5.5. | 1509.03005#4 | 1509.03005#6 | 1509.03005 | [
"1502.02251"
] |
1509.03005#6 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | The paper makes three novel contributions. The first two contributions relate directly to problems P1 and P2. The third is a new task designed to test the accuracy of gradient estimates. Method to directly learn the gradient of the value function. The first contribution is to modify temporal difference learning so that it directly estimates the gradient of the value-function. The gradient perturbation trick, Lemma 3, provides a way to simultaneously estimate both the value of a function at a point and its gradient, by perturbing the function's input with uncorrelated Gaussian noise. Plugging in a neural network instead of a linear estimator extends the trick to the problem of learning a function and its gradient over the entire state-action space. Moreover, the trick combines naturally with temporal difference methods, Theorem 5, and is therefore well-suited to applications in reinforcement learning. | 1509.03005#5 | 1509.03005#7 | 1509.03005 | [
"1502.02251"
] |
1509.03005#7 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Deviator-Actor-Critic (DAC) model with compatible function approximation. The second contribution is to propose the Deviator-Actor-Critic (DAC) model, Definition 2, consisting in three coupled neural networks, and Value-Gradient Backpropagation (GProp), Algorithm 1, which backpropagates three different signals to train the three networks. The main result, Theorem 6, is that GProp has compatible function approximation when implemented on the DAC model when the neural network consists in linear and rectilinear units.1 The proof relies on decomposing the Actor-network into individual units that are considered as actors in their own right, based on ideas in (Srivastava et al., 2014; Balduzzi, 2015). It also suggests interesting connections to work on structural credit assignment in multiagent reinforcement learning (Agogino and Tumer, 2004, 2008; HolmesParker et al., 2014). Contextual bandit task to probe the accuracy of gradient estimates. A third contribution, that may be of independent interest, is a new contextual bandit setting designed to probe the ability of reinforcement learning algorithms to estimate gradients. A supervised-to-contextual bandit transform was proposed in (Dudík et al., 2014) as a method for turning classification datasets into K-armed contextual bandit datasets. We are interested in the continuous setting in this paper. We therefore adapt their transform with a twist. The SARCOS and Barrett datasets from robotics have features corresponding to the positions, velocities and accelerations of seven joints and labels corresponding to their torques. There are 7 joints in both cases, so the feature and label spaces are 21 and 7 dimensional respectively. 
The datasets are traditionally used as regression benchmarks labeled SARCOS1 through SARCOS7, where the task is to predict the torque of a single joint, | 1509.03005#6 | 1509.03005#8 | 1509.03005 | [
"1502.02251"
] |
1509.03005#8 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | and similarly for Barrett. We convert the two datasets into two continuous contextual bandit tasks where the reward signal is the negative distance to the correct 7-dimensional label. The algorithm is thus "told" that the label lies on a sphere in a 7-dimensional space. The missing information required to pin down the label's position is precisely the gradient. For an algorithm to make predictions that are competitive with fully supervised methods, it is necessary to find extremely accurate gradient estimates. Experiments. Section 6 evaluates the performance of GProp on the contextual bandit problems described above and on the challenging octopus arm task (Engel et al., 2005). We show that GProp is able to simultaneously solve seven nonparametric regression problems without observing any labels, instead using the distance between its actions and the correct labels. It turns out that GProp is competitive with recent fully supervised learning algorithms on the task. Finally, we evaluate GProp on the octopus arm benchmark, where it achieves the best performance reported to date. | 1509.03005#7 | 1509.03005#9 | 1509.03005 | [
"1502.02251"
] |
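A minimal sketch of the supervised-to-bandit conversion described above: the learner sees a 21-dimensional context, outputs a 7-dimensional action, and observes only the negative distance to the hidden label, never the label itself. The data below is synthetic stand-in; the paper's tasks use the SARCOS/Barrett datasets, whose loading is omitted here.

```python
import numpy as np

# Sketch of the continuous contextual bandit transform: reward is the
# negative Euclidean distance between the action and the hidden label.
# The linear labeling function A is invented for this demo.

rng = np.random.default_rng(0)
A = rng.standard_normal((7, 21))          # hidden labeling function

def bandit_round(predict):
    x = rng.standard_normal(21)           # context (joint pos/vel/acc)
    y = A @ x                             # hidden 7-dimensional label
    a = predict(x)                        # learner's 7-dimensional action
    return -np.linalg.norm(a - y)         # reward only; label stays hidden

reward_zero = bandit_round(lambda x: np.zeros(7))   # naive predictor
reward_oracle = bandit_round(lambda x: A @ x)       # oracle predictor
print(reward_oracle >= reward_zero)                 # oracle attains 0, the maximum
```

Since the reward is a negative norm, the maximal reward 0 is attained exactly when the action equals the label, which is why accurate gradient estimates are needed to compete with supervised methods.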
1509.03005#9 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | 1. The proof also holds for maxpooling, weight-tying and other features of convnets. A description of how closely related results extend to convnets is provided in (Balduzzi, 2015). # 1.2 Related work An early reinforcement learning algorithm for neural networks is REINFORCE (Williams, 1992). A disadvantage of REINFORCE is that the entire network is trained with a single scalar signal. Our proposal builds on ideas introduced with deep Q-learning (Mnih et al., 2015), such as replay. However, deep Q-learning is restricted to finite action spaces, whereas we are concerned with continuous action spaces. Policy gradients were introduced in (Sutton et al., 1999) and have been used extensively (Kakade, 2001; Peters and Schaal, 2006; Deisenroth et al., 2011). The deterministic policy gradient was introduced in (Silver et al., 2014), which also proposed the algorithm COPDAC-Q. The relationship between GProp and COPDAC-Q is discussed in detail in section 5.5. An alternate approach, based on the idea of backpropagating the gradient of the value function, is developed in (Jordan and Jacobs, 1990; Prokhorov and Wunsch, 1997; Wang and Si, 2001; Hafner and Riedmiller, 2011; Fairbank and Alonso, 2012; Fairbank et al., 2013). Unfortunately, these algorithms do not have compatible function approximation in general, so there are no guarantees on actor-critic interactions. See section 5.5 for further discussion. The analysis used to prove compatible function approximation relies on decomposing the Actor neural network into a collection of agents corresponding to the units in the network. The relation between GProp and the difference-based objective proposed for multiagent learning (Agogino and Tumer, 2008; HolmesParker et al., 2014) is discussed in section 5.4. 
# 1.3 Notation We use boldface to denote vectors, subscripts for time, and superscripts for individual units in a network. Sets of parameters are capitalized (Θ, W, V) when they refer to matrices or to the parameters of neural networks. # 2. | 1509.03005#8 | 1509.03005#10 | 1509.03005 | [
"1502.02251"
] |
1509.03005#10 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Deterministic Policy Gradients This section recalls previous work on policy gradients. The basic idea is to simultaneously train an actor and a critic. The critic learns an estimate of the value of different policies; the actor then follows the gradient of the value-function to find an optimal (or locally optimal) policy in terms of expected rewards. # 2.1 The Policy Gradient Theorem The environment is modeled as a Markov Decision Process consisting of state space $S \subset \mathbb{R}^m$, action space $A \subset \mathbb{R}^d$, initial distribution $p_1(s)$ on states, stationary transition distribution $p(s_{t+1} \mid s_t, a_t)$ and reward function $r : S \times A \to \mathbb{R}$. A policy is a function $\mu_\theta : S \to A$ from states to actions. We will often add noise to policies, causing them to be stochastic. In this case, the policy is a function $\mu_\theta : S \to \Delta_A$, where $\Delta_A$ is the set of probability distributions on actions. Let $p(s \to s', t, \mu)$ denote the distribution on states $s'$ at time $t$ given policy $\mu$ and initial state $s$ at $t = 0$, and let $\rho^\mu(s') := \int_S \sum_{t=1}^{\infty} \gamma^{t-1} p_1(s)\, p(s \to s', t, \mu)\, ds$. Let $r_t^\gamma =$ | 1509.03005#9 | 1509.03005#11 | 1509.03005 | [
"1502.02251"
] |
1509.03005#11 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | $\sum_{\tau=t}^{\infty} \gamma^{\tau-t} r(s_\tau, a_\tau)$ be the discounted future reward. Define the value of a state-action pair, $Q^{\mu_\theta}(s, a) = \mathbb{E}[r_1^\gamma \mid S_1 = s, A_1 = a; \mu_\theta]$, and the value of a policy, $J(\mu_\theta) = \mathbb{E}_{s \sim \rho^\mu,\, a \sim \mu_\theta}[Q^{\mu_\theta}(s, a)]$. The aim is to find the policy $\theta^* := \operatorname{argmax}_\theta J(\mu_\theta)$ with maximal value. A natural approach is to follow the gradient (Sutton et al., 1999), which in the deterministic case can be computed explicitly as Theorem 1 (policy gradient) Under reasonable assumptions on the regularity of the Markov Decision Process the policy gradient can be computed as | 1509.03005#10 | 1509.03005#12 | 1509.03005 | [
"1502.02251"
] |
1509.03005#12 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | $$\nabla_\theta J(\mu_\theta) = \mathbb{E}_{s \sim \rho^\mu}\big[\nabla_\theta \mu_\theta(s)\, \nabla_a Q^\mu(s, a)|_{a = \mu_\theta(s)}\big].$$ Proof See (Silver et al., 2014). # 2.2 Linear Compatible Function Approximation Since the agent does not have direct access to the value function $Q^\mu$, it must instead learn an estimate $Q^w \approx Q^\mu$. A sufficient condition for when plugging an estimate $Q^w(s, a)$ into the policy gradient $\nabla_\theta J(\theta) = \mathbb{E}[\nabla_\theta \mu_\theta(s)\, \nabla_a Q^{\mu_\theta}(s, a)|_{a = \mu_\theta(s)}]$ yields an unbiased estimator was fi | 1509.03005#11 | 1509.03005#13 | 1509.03005 | [
"1502.02251"
] |
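The deterministic policy gradient above is the chain rule through the policy. A toy numerical check, with a scalar linear policy and a known quadratic Q, both invented here for illustration:

```python
# Toy check of the deterministic policy gradient: for mu_theta(s) = theta * s
# and a known Q(s, a) = -(a - s)^2, the chain-rule gradient
# dmu/dtheta * dQ/da at a = mu(s) matches a finite-difference estimate of
# d/dtheta Q(s, mu_theta(s)). Policy and Q are invented for this demo.

def Q(s, a):
    return -(a - s) ** 2

theta, s = 0.3, 2.0
mu = theta * s

dmu_dtheta = s                          # gradient of the linear policy
dQ_da = -2.0 * (mu - s)                 # gradient of Q in the action
analytic = dmu_dtheta * dQ_da           # chain rule (single-state Theorem 1)

h = 1e-6                                # central finite difference in theta
numeric = (Q(s, (theta + h) * s) - Q(s, (theta - h) * s)) / (2 * h)

print(abs(analytic - numeric) < 1e-4)   # the two gradients agree
```

In the actual theorem the same product of gradients is averaged over the discounted state distribution; the single-state case shown here is the building block.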
1509.03005#13 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | rst proposed in (Sutton et al., 1999). A sufficient condition in the deterministic setting is: Theorem 2 (compatible value function approximation) The value-estimate $Q^w(s, a)$ is compatible with the policy gradient, that is $\nabla_\theta J(\theta) = \mathbb{E}\big[\nabla_\theta \mu_\theta(s) \cdot \nabla_a Q^w(s, a)|_{a = \mu_\theta(s)}\big]$, if the following conditions hold: # C1. $Q^w$ approximates the value gradient: The weights learned by the approximate value function must satisfy $w = \operatorname{argmin}_w \ell_{\mathrm{grad}}(\theta, w)$, where $$\ell_{\mathrm{grad}}(\theta, w) = \mathbb{E}\Big[\big\|\nabla_a Q^\mu(s, a)|_{a = \mu_\theta(s)} - \nabla_a Q^w(s, a)|_{a = \mu_\theta(s)}\big\|^2\Big] \quad (1)$$ is the mean-square difference between the gradient of the true value function $Q^\mu$ and the approximation $Q^w$. # C2. $Q^w$ is policy-compatible: The gradients of the value-function and the policy must satisfy $$\nabla_a Q^w(s, a)|_{a = \mu_\theta(s)} = \big\langle \nabla_\theta \mu_\theta(s), w \big\rangle. \quad (2)$$ | 1509.03005#12 | 1509.03005#14 | 1509.03005 | [
"1502.02251"
] |
1509.03005#14 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Proof See (Silver et al., 2014). Having stated the compatibility condition, it is worth revisiting the problems that we propose to tackle in the paper. The first problem is to directly estimate the gradient of the value function, as required by Eq. (1) in condition C1. The standard approach used in the literature is to estimate the value function, or the closely related advantage function, using temporal difference learning, and then compute the derivative of the estimate. The next section shows how the gradient can be estimated directly. The second problem relates to the compatibility condition on policy and value gradients required by Eq. (2) in condition C2. The only function approximation satisfying C2 that has been proposed is Example 1 (standard value function approximation) Let $\phi(s)$ be an m-dimensional feature representation on states and set $\varphi(s, a) := \nabla_\theta \mu_\theta(s) \cdot (a - \mu_\theta(s))$. Then the value function approximation $$Q^{v,w}(s, a) = \langle \varphi(s, a), w \rangle + \langle \phi(s), v \rangle = \underbrace{(a - \mu_\theta(s))^\top \nabla_\theta \mu_\theta(s)^\top w}_{\text{advantage function}} + \phi(s)^\top v$$ satisfies condition C2 of Theorem 2. The approximation in Example 1 encounters serious problems when applied to deep policies, see discussion in section 5.5. # 3. Learning Value Gradients In this section, we tackle the first problem by modifying temporal-difference (TD) learning so that it directly estimates the gradient of the value function. First, we develop a new approach to estimating the gradient of a black-box function at a point, based on perturbing the function with Gaussian noise. It turns out that the approach extends easily to learning the gradient of a black-box function across its entire domain. Moreover, it is easy to combine with neural networks and temporal difference learning. | 1509.03005#13 | 1509.03005#15 | 1509.03005 | [
"1502.02251"
] |
1509.03005#15 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | # 3.1 Estimating the gradient of an unknown function at a point Gradient estimates have been intensively studied in bandit problems, where rewards (or losses) are observed but labels are not. Thus, in contrast to supervised learning where it is possible to compute the gradient of the loss, in bandit problems the gradient must be estimated. More formally, consider the following setup. Definition 1 (zeroth-order black-box) A function $f : \mathbb{R}^d \to \mathbb{R}$ is a zeroth-order black-box if it can only be queried for zeroth-order information. That is, User can request the value $f(x)$ of $f$ at any point $x \in \mathbb{R}^d$, but cannot request the gradient of the function. We use the shorthand black-box in what follows. | 1509.03005#14 | 1509.03005#16 | 1509.03005 | [
"1502.02251"
] |
1509.03005#16 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | The black-box model for optimization was introduced in (Nemirovski and Yudin, 1983); see (Raginsky and Rakhlin, 2011) for a recent exposition. In those papers, a black-box consists in a first-order oracle that can provide both zeroth-order information (the value of the function) and first-order information (the gradient or subgradient of the function). Remark 1 (reward function is a black-box; value function is not) The reward function $r(s, a)$ is a black box since Nature does not provide gradient information. The value function $Q^{\mu_\theta}(s, a) = \mathbb{E}[r_1^\gamma \mid S_1 = s, A_1 = a; \mu_\theta]$ is not even a black-box: it cannot be queried directly since it is defined as the expected discounted future reward. It is for this reason the gradient perturbation trick must be combined with temporal difference learning, see section 3.4. An important insight is that the gradient of an unknown function at a specific point can be estimated by perturbing its input (Flaxman et al., 2005). For example, for small $\delta > 0$ the gradient of $f : \mathbb{R}^d \to \mathbb{R}$ is approximately $\nabla f(x)|_{x=\mu} \approx \tfrac{d}{\delta} \cdot \mathbb{E}_u[f(\mu + \delta u)\, u]$, where the expectation is over vectors sampled uniformly from the unit sphere. The following lemma provides a simple method for estimating the gradient of a function at a point based on Gaussian perturbations: Lemma 3 (gradient perturbation trick) The gradient of differentiable $f : \mathbb{R}^d \to \mathbb{R}$ at $\mu \in \mathbb{R}^d$ is $$\nabla f(x)|_{x=\mu} = \lim_{\sigma^2 \to 0}\ \operatorname{argmin}_{w \in \mathbb{R}^d} \left\{ \min_{b \in \mathbb{R}}\ \mathbb{E}_{\epsilon \sim N(0,\, \sigma^2 \cdot I_d)} \Big[ \big( f(\mu + \epsilon) - \langle w, \epsilon \rangle - b \big)^2 \Big] \right\}. \quad (3)$$ | 1509.03005#15 | 1509.03005#17 | 1509.03005 | [
"1502.02251"
] |
1509.03005#17 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Proof By taking sufficiently small variance, we can assume that $f$ is locally linear. Setting $b = f(\mu)$ yields a line through the origin. It therefore suffices to consider the special case $f(x) = \langle v, x \rangle$. Setting $$w^* = \operatorname{argmin}_{w \in \mathbb{R}^d}\ \mathbb{E}_{\epsilon \sim N(0,\, \sigma^2 \cdot I_d)} \Big[ \tfrac{1}{2} \big( \langle w, \epsilon \rangle - \langle v, \epsilon \rangle \big)^2 \Big],$$ we are required to show that $w^* = v$. The problem is convex, so setting the gradient to zero requires to solve $0 = \mathbb{E}\big[ \langle w - v, \epsilon \rangle \cdot \epsilon \big]$, which reduces to solving the set of linear equations $$\sum_{i=1}^{d} (w^i - v^i)\, \mathbb{E}[\epsilon^i \epsilon^j] = (w^j - v^j)\, \mathbb{E}[(\epsilon^j)^2] = (w^j - v^j) \cdot \sigma^2 = 0 \quad \text{for all } j.$$ The first equality holds since $\mathbb{E}[\epsilon^i \epsilon^j] = 0$ for $i \neq j$. It follows immediately that $w^* = v$. # 3.2 Learning gradients across a range The solution to the optimization problem in Eq. (3) is the gradient $\nabla f(x)$ of $f$ at a particular $\mu \in \mathbb{R}^d$. The next step is to learn a function $G^W : \mathbb{R}^d \to \mathbb{R}^d$ that approximates the gradient across a range of values. | 1509.03005#16 | 1509.03005#18 | 1509.03005 | [
"1502.02251"
] |
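A numerical sketch of the gradient perturbation trick in Lemma 3, with an invented linear black-box f: regress perturbed function values on the Gaussian perturbations, and the least-squares weights recover the gradient at mu while the intercept recovers f(mu).

```python
import numpy as np

# Sketch of Lemma 3: perturb the input of a black-box f with Gaussian noise,
# then regress the observed values on the perturbations. The weights
# approximate the gradient at mu; the intercept approximates f(mu).
# The function f below is invented for this demo.

rng = np.random.default_rng(0)
v_true = np.array([2.0, -3.0, 0.5])

def f(x):
    return v_true @ x + 1.0     # linear black-box; its gradient is v_true

mu = np.array([1.0, 0.0, -1.0])
sigma, n = 1e-3, 5000

eps = rng.normal(0.0, sigma, size=(n, 3))    # Gaussian perturbations
y = np.array([f(mu + e) for e in eps])       # perturbed function values

# Solve min_{w, b} sum_i (y_i - <w, eps_i> - b)^2 by ordinary least squares.
X = np.hstack([eps, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
w, b = coef[:3], coef[3]

print(np.round(w, 3))   # recovers the gradient [2, -3, 0.5]
print(round(b, 3))      # recovers f(mu) = 2.5
```

For a locally linear f the regression is exact as the noise variance shrinks, which is precisely the limit taken in Eq. (3).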
1509.03005#18 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | More precisely, given a sample $\{x_i\}_{i=1}^n \sim P_X$ of points, we aim to find $$W^* := \operatorname{argmin}_W \sum_{i=1}^{n} \Big[ \big\| \nabla f(x_i) - G^W(x_i) \big\|^2 \Big].$$ The next lemma considers the case where $Q^v$ and $G^W$ are linear estimates, of the form $Q^v(x) := \langle \phi(x), v \rangle$ and $G^W(x) = W \cdot \psi(x)$ for fixed representations $\phi : X \to \mathbb{R}^m$ and $\psi : X \to \mathbb{R}^n$. Lemma 4 (gradient learning) Let $f : \mathbb{R}^d \to \mathbb{R}$ be a differentiable function. Suppose that $\phi : X \to \mathbb{R}^m$ and $\psi : X \to \mathbb{R}^n$ are representations such that there exists an m-vector $v^*$ and a $(d \times n)$-matrix $W^*$ satisfying $f(x) = \langle \phi(x), v^* \rangle$ and $\nabla f(x) = W^* \cdot \psi(x)$ for all $x$ in the sample. If we define the loss function $$\ell(W, v, x, \sigma^2) = \mathbb{E}_{\epsilon \sim N(0,\, \sigma^2 \cdot I_d)} \Big[ \big( f(x + \epsilon) - \langle G^W(x), \epsilon \rangle - Q^v(x) \big)^2 \Big],$$ then | 1509.03005#17 | 1509.03005#19 | 1509.03005 | [
"1502.02251"
] |
1509.03005#19 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | $$W^* = \lim_{\sigma^2 \to 0}\ \operatorname{argmin}_W\ \min_v\ \mathbb{E}_{x \sim P_X} \big[ \ell(W, v, x, \sigma^2) \big].$$ Proof Follows from Lemma 3. In short, the lemma reduces gradient estimation to a simple optimization problem given a good enough representation. Jumping ahead slightly to section 4, we ensure that our model has good enough representations by constructing two neural networks to learn them. The first neural network, $Q^V : \mathbb{R}^d \to \mathbb{R}$, learns an approximation to $f(x)$ that plays the role of the baseline $b$. The second neural network, $G^W : \mathbb{R}^d \to \mathbb{R}^d$, learns an approximation to the gradient. # 3.3 Temporal difference learning Recall that $Q^\mu(s, a)$ is the expected value of a state-action pair given policy $\mu$. It is never observed directly, since it is computed by discounting over future rewards. TD-learning is a popular approach to estimating $Q^\mu$ through dynamic programming (Sutton and Barto, 1998). We quickly review TD-learning. Let $\phi : S \times A \to \mathbb{R}^m$ be a fixed representation. The goal is to find a value-estimate $Q^v(s, a) := \langle \phi(s, a), v \rangle$, where $v$ is an m-dimensional vector, that is as close as possible to the true value function. If the value-function were known, we could simply minimize the mean-square error with respect to $v$: $$\ell_{\mathrm{MSE}}(v) = \mathbb{E}_{(s,a) \sim (\rho^\mu, \mu)} \Big[ \big( Q^v(s, a) - Q^\mu(s, a) \big)^2 \Big].$$ | 1509.03005#18 | 1509.03005#20 | 1509.03005 | [
"1502.02251"
] |
1509.03005#20 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Unfortunately, it is impossible to minimize the mean-square error directly, since the value-function is the expected discounted future reward, rather than the reward. That is, the value function is not provided explicitly by the environment, not even as a black-box. The Bellman error is therefore used as a substitute for the mean-square error: $$\ell_{\mathrm{BE}}(v) = \mathbb{E}_{(s,a) \sim (\rho^\mu, \mu)} \Big[ \big( r(s, a) + \gamma\, Q^v(s', \mu_\theta(s')) - Q^v(s, a) \big)^2 \Big],$$ where $s'$ is the state subsequent to $s$. Let $\delta_t = r_t - Q^v(s_t, a_t) + \gamma\, Q^v(s_{t+1}, \mu_\theta(s_{t+1}))$ be the TD-error. TD-learning updates $v$ according to $$v_{t+1} \leftarrow v_t + \eta_t \cdot \delta_t \cdot \nabla_v Q^v(s_t, a_t) = v_t + \eta_t \cdot \delta_t \cdot \phi(s_t, a_t), \quad (4)$$ where $\eta_t$ is a sequence of learning rates. The convergence properties of TD-learning and related algorithms have been studied extensively, see (Tsitsiklis and Roy, 1997; Dann et al., 2014). | 1509.03005#19 | 1509.03005#21 | 1509.03005 | [
"1502.02251"
] |
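The TD update in Eq. (4) can be sketched in a few lines. The MDP below is a toy single-state chain with constant reward, invented so that the fixed point is known in closed form:

```python
import numpy as np

# Minimal sketch of the linear TD update of Eq. (4): Q^v(s,a) = <phi(s,a), v>
# and v_{t+1} = v_t + eta * delta_t * phi(s_t, a_t). The toy MDP has a single
# state and constant reward, so the true value is r / (1 - gamma) = 10.

gamma, eta = 0.9, 0.1
phi = np.array([1.0])        # trivial one-dimensional feature
v = np.zeros(1)

for t in range(2000):
    r = 1.0                              # constant reward
    q_now = phi @ v                      # Q^v(s_t, a_t)
    q_next = phi @ v                     # Q^v(s_{t+1}, mu(s_{t+1})), same state
    delta = r + gamma * q_next - q_now   # TD-error
    v = v + eta * delta * phi            # Eq. (4)

print(float(v[0]))   # approaches r / (1 - gamma) = 10
```

The update contracts toward the Bellman fixed point; with richer features the same loop estimates the value function, but never its gradient, which is what the TDG modification below addresses.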
1509.03005#21 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | # 3.4 Temporal difference gradient (TDG) learning Finally, we apply temporal difference methods to estimate the gradient (footnote 2) of the value function, as required by condition C1 of Theorem 2. We are interested in gradient approximations of the form $$Q^W(s, a, \epsilon) = \langle G^W(s, a), \epsilon \rangle = \langle W \cdot \psi(s, a), \epsilon \rangle,$$ where $\psi : S \times A \to \mathbb{R}^n$ and $W$ is a $(d \times n)$-dimensional matrix. The goal is to find $W^*$ such that $G^{W^*}(s, a) \approx \nabla_\epsilon Q^\mu(s, a, \epsilon)|_{\epsilon = 0} = \nabla_a Q^\mu(s, a)|_{a = \mu_\theta(s)}$ for all sampled state-action pairs. It is convenient to introduce the notation $Q^\mu(s, a, \epsilon) := Q^\mu(s, a + \epsilon)$ and the shorthand $\tilde{s} := (s, \mu_\theta(s))$. Then, analogously to the mean-square error, define the perturbed gradient error: $$\ell_{\mathrm{grad}}(v, W, \sigma^2) = \mathbb{E}_{\tilde{s}, \epsilon} \Big[ \big( Q^\mu(\tilde{s}, \epsilon) - \langle G^W(\tilde{s}), \epsilon \rangle - Q^v(\tilde{s}) \big)^2 \Big].$$ | 1509.03005#20 | 1509.03005#22 | 1509.03005 | [
"1502.02251"
] |
1509.03005#22 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Given a good enough representation, Lemma 4 guarantees that minimizing the perturbed gradient error yields the gradient of the value function. Unfortunately, as discussed above, the value function cannot be queried directly. We therefore introduce the Bellman gradient error as a proxy: $$\ell_{\mathrm{BG}}(v, W, \sigma^2) = \mathbb{E}_{\tilde{s}, \epsilon} \Big[ \big( r(\tilde{s}, \epsilon) + \gamma\, Q^v(\tilde{s}') - \langle G^W(\tilde{s}), \epsilon \rangle - Q^v(\tilde{s}) \big)^2 \Big].$$ 2. Residual gradient (RG) and gradient temporal difference (GTD) methods were introduced in (Baird, 1995; Sutton et al., 2009a,b). | 1509.03005#21 | 1509.03005#23 | 1509.03005 | [
"1502.02251"
] |
1509.03005#23 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | The similar names may be confusing. RG and GTD methods are TD methods derived from gradient descent. In contrast, we develop a TD-based approach to learning gradients. The two approaches are thus complementary and straightforward to combine. However, in this paper we restrict to extending vanilla TD to learning gradients. | 1509.03005#22 | 1509.03005#24 | 1509.03005 | [
"1502.02251"
] |
1509.03005#24 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Set the TDG-error as $\xi_t = r(\tilde{s}_t) + \gamma\, Q^v(\tilde{s}_{t+1}) - \langle G^W(\tilde{s}_t), \epsilon \rangle - Q^v(\tilde{s}_t)$ and, analogously to Eq. (4), define the TDG-updates $$v_{t+1} \leftarrow v_t + \eta_t \cdot \xi_t \cdot \nabla_v Q^v(\tilde{s}_t) = v_t + \eta_t \cdot \xi_t \cdot \phi(\tilde{s}_t),$$ $$W_{t+1} \leftarrow W_t + \eta_t \cdot \xi_t \cdot \epsilon \cdot \nabla_W G^W(\tilde{s}_t) = W_t + \eta_t \cdot \xi_t \cdot \epsilon \otimes \psi(\tilde{s}_t),$$ where $\epsilon \otimes \psi(\tilde{s})$ is the $(d \times n)$ matrix given by the outer product. We refer to $\xi \cdot \epsilon$ as the perturbed TDG-error. The following extension theorem allows us to import guarantees from temporal-difference learning to temporal-difference gradient learning. Theorem 5 (zeroth to first-order extension) Guarantees on TD-learning extend to TDG-learning. The idea is to reformulate TDG-learning as TD-learning, with a slightly different reward function and function approximation. Since the function approximation is still linear, any guarantees on convergence for TD-learning transfer automatically to TDG-learning. Proof First, we incorporate $\epsilon$ into the state-action pair. Define $\tilde{r}(s, a, \epsilon) := r(s, a + \epsilon)$ and $\tilde{\psi}(s, a, \epsilon) := \epsilon \otimes \psi(s, a)$. | 1509.03005#23 | 1509.03005#25 | 1509.03005 | [
"1502.02251"
] |
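The coupled TDG-updates above can be sketched directly: the critic weights follow the TDG-error times the features, the deviator matrix follows the perturbed TDG-error via an outer product. The feature maps, reward, and dynamics below are toy placeholders invented for the demo, not the paper's networks.

```python
import numpy as np

# Sketch of the TDG-updates: v <- v + eta * xi * phi(s~), and
# W <- W + eta * xi * (eps outer psi(s~)). Everything problem-specific here
# (features, reward, dynamics) is an invented placeholder.

rng = np.random.default_rng(1)
d, m, n = 2, 4, 4                       # action dim, feature dims for phi/psi
gamma, eta, sigma = 0.9, 0.05, 0.1

phi = lambda s: np.tanh(s)              # placeholder critic features in R^m
psi = lambda s: np.tanh(-s)             # placeholder deviator features in R^n

v = np.zeros(m)
W = np.zeros((d, n))
s = rng.normal(size=m)

for t in range(200):
    eps = rng.normal(0.0, sigma, size=d)          # Gaussian action perturbation
    r = -float(s @ s)                             # placeholder reward
    s_next = 0.5 * s + 0.01 * rng.normal(size=m)  # placeholder dynamics
    xi = r + gamma * (phi(s_next) @ v) - (W @ psi(s)) @ eps - (phi(s) @ v)
    v = v + eta * xi * phi(s)                     # critic update
    W = W + eta * xi * np.outer(eps, psi(s))      # deviator update: eps outer psi
    s = s_next

print(v.shape, W.shape)                 # (4,) (2, 4)
```

Note that W is only nudged along directions that were actually probed by eps, which is why the extension theorem can treat the pair (v, W) as one linear TD approximator.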
1509.03005#25 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Second, we define a dot product on matrices of equal size by flattening them down to vectors. More precisely, given two matrices $A$ and $B$ of the same dimension $(m \times n)$, define the dot-product $\langle A, B \rangle = \sum_{i,j} A_{ij} B_{ij}$. It is easy to see that $\langle G^W(s, a), \epsilon \rangle = \langle W \cdot \psi(s, a), \epsilon \rangle = \langle \tilde{\psi}(s, a, \epsilon), W \rangle$. The TDG-error can then be rewritten as $\xi = \tilde{r}(s, a, \epsilon) + \gamma\, Q^{v,W}(s', a', \epsilon') - Q^{v,W}(s, a, \epsilon)$, where $Q^{v,W}(s, a, \epsilon) = \langle \phi(s, a), v \rangle + \langle \tilde{\psi}(s, a, \epsilon), W \rangle$ is a linear function approximation. If we are in a setting where TD-learning is guaranteed to converge to the value-function, it follows that TDG-learning is also guaranteed to converge, since it is simply a different linear approximation. Thus, $Q^\mu(\tilde{s}, \epsilon) \approx Q^v(\tilde{s}) + G^W(\tilde{s}, \epsilon)$ and the result follows by Lemma 4. # 4. Algorithm: Value-Gradient Backpropagation This section presents our model, which consists of three coupled neural networks that learn to estimate the value function, its gradient, and the optimal policy respectively. | 1509.03005#24 | 1509.03005#26 | 1509.03005 | [
"1502.02251"
] |
1509.03005#26 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Definition 2 (deviator-actor-critic) The deviator-actor-critic (DAC) model consists in three neural networks: an actor-network with policy $\mu_\Theta : S \to A \subset \mathbb{R}^d$; a critic-network, $Q^V : S \times A \to \mathbb{R}$, that estimates the value function; and a deviator-network, $G^W : S \times A \to \mathbb{R}^d$, that estimates the gradient of the value function. Gaussian noise is added to the policy during training, resulting in actions $a = \mu_\Theta(s) + \epsilon$, | 1509.03005#25 | 1509.03005#27 | 1509.03005 | [
"1502.02251"
] |
1509.03005#27 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | where $\epsilon \sim N(0,\, \sigma^2 \cdot I_d)$. The outputs of the critic and deviator are combined as $$Q^{V,W}(s, \mu_\Theta(s), \epsilon) = Q^V(s, \mu_\Theta(s)) + \langle G^W(s, \mu_\Theta(s)), \epsilon \rangle.$$ The Gaussian noise plays two roles. Firstly, it controls the explore/exploit tradeoff by controlling the extent to which Actor deviates from its current optimal policy. Secondly, it controls the "resolution" at which Deviator estimates the gradient. The three networks are trained by backpropagating three different signals. Critic, Deviator and Actor backpropagate the TDG-error, the perturbed TDG-error, and Deviator's gradient estimate respectively; see Algorithm 1. An explicit description of the weight updates of individual units is provided in Appendix A. Deviator estimates the gradient of the value-function with respect to deviations $\epsilon$ from the current policy. Backpropagating the gradient through Actor allows us to estimate the influence of Actor-parameters on the value function as a function of their effect on the policy. | 1509.03005#26 | 1509.03005#28 | 1509.03005 | [
"1502.02251"
] |
1509.03005#28 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Algorithm 1: Value-Gradient Backpropagation (GProp). for rounds $t = 1, 2, \ldots, T$ do: Network gets state $s_t$, responds $a_t = \mu_{\Theta_t}(s_t) + \epsilon$, gets reward $r_t$. Let $\tilde{s} := (s, \mu_\Theta(s))$. $\xi_t \leftarrow r_t + \gamma\, Q^{V_t}(\tilde{s}_{t+1}) - Q^{V_t}(\tilde{s}_t) - \langle G^{W_t}(\tilde{s}_t), \epsilon \rangle$ // compute TDG-error. $\Theta_{t+1} \leftarrow \Theta_t + \eta_t^\Theta \cdot \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot G^{W_t}(\tilde{s}_t)$ // backpropagate $G^W$. $V_{t+1} \leftarrow V_t + \eta_t^V \cdot \xi_t \cdot \nabla_V Q^{V_t}(\tilde{s}_t)$ // backpropagate $\xi$. $W_{t+1} \leftarrow W_t + \eta_t^W \cdot \xi_t \cdot \nabla_W G^{W_t}(\tilde{s}_t) \cdot \epsilon$ // backpropagate $\xi \cdot \epsilon$. | 1509.03005#27 | 1509.03005#29 | 1509.03005 | [
"1502.02251"
] |
1509.03005#29 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Critic and Deviator learn representations suited to estimating the value function and its gradient respectively. Note that even though the gradient is a linear function at a point, it can be a highly nonlinear function in general. Similarly, Actor learns a policy representation. We set the learning rates of Critic and Deviator to be equal (η^V = η^W) in the experiments in section 6. However, the perturbation ε has the effect of slowing down and stabilizing Deviator updates: Remark 2 (stability) The magnitude of Deviator's weight updates depends on ε ~ N(0, σ²·I_d), since they are computed by backpropagating the perturbed TDG-error δ_t·ε. Thus as σ² → 0, Deviator's learning rate essentially tends to zero. In general, Deviator learns more slowly than Critic. | 1509.03005#28 | 1509.03005#30 | 1509.03005 | [
"1502.02251"
] |
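Remark 2 can be illustrated with a quick simulation (a sketch with a made-up scalar TDG-error, not the paper's setup): the average magnitude of the deviator's effective update signal δ·ε shrinks as σ² → 0.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_update_magnitude(sigma, n=50000):
    eps = sigma * rng.normal(size=n)      # perturbation eps ~ N(0, sigma^2)
    delta = 1.0 - 0.5 * eps               # toy TDG-error containing a -<G, eps> term
    return np.mean(np.abs(delta * eps))   # deviator updates scale with delta * eps

m1, m2, m3 = (mean_update_magnitude(sig) for sig in (0.01, 0.1, 1.0))
```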
1509.03005#30 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Balduzzi and Ghifary This has a stabilizing effect on the policy since Actor is insulated from Critic: its weight updates only depend (directly) on the output of Deviator. # 5. Analysis: Deep Compatible Function Approximation Our main result is that the deviator's value gradient is compatible with the policy gradient of each unit in the actor-network, considered as an actor in its own right: Theorem 6 (deep compatible function approximation) Suppose that all units are rectilinear or linear. Then for each Actor-unit in the Actor-network there exists a reparametrization of the value-gradient approximator, G^W, that satisfies the compatibility conditions in Theorem 2. The actor-network is thus a collection of interdependent agents that individually follow the correct policy gradients. The experiments below show that they also collectively converge on useful behaviors. Overview of the proof. The next few subsections prove Theorem 6. We provide a brief overview before diving into the details. Guarantees for temporal dif- | 1509.03005#29 | 1509.03005#31 | 1509.03005 | [
"1502.02251"
] |
1509.03005#31 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | ference learning and policy gradients are typically based on the assumption that the value-function approximation is a linear function of the learned parameters. However, we are interested in the case where Actor, Critic and Deviator are all neural networks, and are therefore highly nonlinear functions of their parameters. The goal is thus to relate the representations learned by neural networks to the prior work on linear function approximations. To do so, we build on the following observation, implicit in (Srivastava et al., 2014): Remark 3 (active submodels) A neural network of n linear and rectilinear units can be considered as a set of 2^n submodels, corresponding to different subsets of units. The active submodel at time t consists of the active units (that is, the linear units and the rectifiers that do not output 0). The active submodel has two important properties: | 1509.03005#30 | 1509.03005#32 | 1509.03005 | [
"1502.02251"
] |
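The linearity of the active submodel can be checked directly in a small sketch (layer sizes and names are illustrative): freezing the gates of a rectifier network at a given input collapses it to a single matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(5, 3))   # hidden rectifier layer
W2 = rng.normal(size=(2, 5))   # linear output layer

def forward(x):
    return W2 @ np.maximum(0.0, W1 @ x)

x = rng.normal(size=3)
gates = (W1 @ x > 0).astype(float)   # nonlinear step: select the active submodel
M = W2 @ (gates[:, None] * W1)       # active submodel as one (2 x 3) matrix
```

A different input generally flips different gates and therefore selects a different matrix M, which is why the network as a whole remains nonlinear.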
1509.03005#32 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | • it is a linear function from inputs to outputs, since rectifiers are linear when active, and • at each time step, learning only occurs over the active submodel, since only active units update their weights. The feedforward sweep of a rectifier network can thus be disentangled into two steps (Balduzzi, 2015). The first step, which is highly nonlinear, applies a gating operation that selects the active submodel by rendering various units inactive. The second step computes the output of the neural network via matrix multiplication. It is important to emphasize that although the active submodel is a linear function from inputs to outputs, it is not a linear function of the weights. The strategy of the proof is to decompose the Actor-network into an interacting collection of agents, referred to as Actor-units. That is, we model each unit in the Actor-network as | 1509.03005#31 | 1509.03005#33 | 1509.03005 | [
"1502.02251"
] |
1509.03005#33 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | an Actor in its own right. On each time step that an Actor-unit is active, it interacts with the Deviator-submodel corresponding to the current active submodel of the Deviator-network. The proof shows that each Actor-unit has compatible function approximation. # 5.1 Error backpropagation on rectilinear neural networks First, we recall some basic facts about backpropagation in the case of rectilinear units. Recent work has shown that replacing sigmoid functions with rectifiers S(x) = max(0, x) improves the performance of neural networks (Nair and Hinton, 2010; Glorot et al., 2011; Zeiler et al., 2013; Dahl et al., 2013). Let us establish some notation. The output of a rectifier with weight vector w is S_w(x) := S(⟨w, x⟩) := max(0, ⟨w, x⟩). The rectifier is active if ⟨w, x⟩ > 0. We use rectifiers because they perform well in practice and have the nice property that units are linear when they are active. The rectifier subgradient is the indicator function 1(x) := ∂S(x) = 1 if x > 0, and 0 else. Consider a neural network of n units, each equipped with a weight vector w^j ∈ H_j ⊂ R^{d_j}. Hidden units are rectifiers; output units are linear. There are n units in total. It is convenient to combine all the weight vectors into a single object; let W ∈ H = ∏_{j=1}^n H_j ⊂ R^N, where N = Σ_{j=1}^n d_j. The network is a function F^W: R^m → R^d: x_in ↦ F^W(x_in) =: x_out. The network has error function E(x_out, y) with gradient g = ∇_{x_out} E. Let x^j denote the output of unit j and φ^j(x_in) = (x^i)_{i: i→j} denote its input, so that x^j = S(⟨w^j, φ^j(x_in)⟩). Note that φ^j depends on W (specifically, the weights of lower units), but this is suppressed from the notation. | 1509.03005#32 | 1509.03005#34 | 1509.03005 | [
"1502.02251"
] |
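The rectifier and its subgradient from the notation above, as a small self-contained sketch:

```python
import numpy as np

def S(z):
    # rectifier S(z) = max(0, z); linear wherever it is active (z > 0)
    return np.maximum(0.0, z)

def S_sub(z):
    # subgradient 1(z) = 1 if z > 0 else 0
    return (z > 0).astype(float)

w = np.array([1.0, -2.0])
x = np.array([0.5, -1.0])
z = w @ x                  # the unit is active iff <w, x> > 0; here z = 2.5
out = S(z)                 # rectifier output S_w(x)
grad_w = S_sub(z) * x      # gradient of S_w w.r.t. w is x itself when active
```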
1509.03005#34 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Definition 3 (influence) The influence of unit j on unit k at time t is π_t^{j,k} := ∂x^k/∂x^j (Balduzzi et al., 2015). The influence of unit j on the output layer is the vector π_t^j = (π_t^{j,k})_{k∈out}. The following lemma summarizes an analysis of the feedforward and feedback sweep of neural nets. Lemma 7 (structure of neural network gradients) The following properties hold: a. Influence. A path is active at time t if all units on the path are firing. The influence of j on k is the sum of products of weights over all active paths from j to k: π_t^{j,k} = Σ_{α: j→α} w^{j,α} Σ_{β: α→β} w^{α,β} ··· Σ_{ω: ω→k} w^{ω,k}, where α, β, ..., ω refer to units along the path from j to k. | 1509.03005#33 | 1509.03005#35 | 1509.03005 | [
"1502.02251"
] |
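Lemma 7a can be verified numerically on a tiny rectifier network (a sketch; layer sizes and names are arbitrary): the path-sum influence of a hidden unit matches a finite-difference derivative of the output with respect to that unit's activation.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(size=(4, 3))   # layer 1 (rectifiers)
W2 = rng.normal(size=(4, 4))   # layer 2 (rectifiers)
W3 = rng.normal(size=(2, 4))   # output layer (linear)

x = rng.normal(size=3)
h1 = np.maximum(0, W1 @ x)
g2 = (W2 @ h1 > 0).astype(float)   # gates of layer 2 at this input

# influence of layer-1 unit j on the outputs: sum over active paths j -> beta -> out
j = 0
pi_j = W3 @ (g2 * W2[:, j])

# finite-difference check: perturb unit j's activation, re-run the upper layers
def upper(h):
    return W3 @ np.maximum(0, W2 @ h)

e = np.zeros(4); e[j] = 1e-6
fd = (upper(h1 + e) - upper(h1 - e)) / 2e-6
```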
1509.03005#35 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | # b. Output decomposition. The output of a neural network decomposes, relative to the output of unit j, as F^W(x_in) = π^j · x^j + π^{¬j} · x_in, where π^{¬j} is the (m × d)-matrix whose (ik)-th entry is the sum over all active paths from input unit i to output unit k that do not intersect unit j. # c. Output gradient. Fix an input x_in ∈ R^m and consider the network as a function from parameters to outputs, F^•(x_in): H → R^d: W ↦ F^W(x_in), whose gradient is an (N × d)-matrix. The (ij)-th entry of the gradient is the input to the unit times its influence: (∇_W F^W(x_in))_{ij} = φ_i^j(x_in) · π^j if unit j is active, and 0 else. # d. Backpropagated error. Fix x_in ∈ R^m | 1509.03005#34 | 1509.03005#36 | 1509.03005 | [
"1502.02251"
] |
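Properties (c) and (d) can be checked on a two-layer sketch (illustrative names, squared-error loss as an assumed example of E): the weight gradient at a hidden unit equals its backpropagated error δ^j = ⟨g, π^j⟩ times its input φ^j.

```python
import numpy as np

rng = np.random.default_rng(6)
W1 = rng.normal(size=(4, 3))   # hidden rectifier layer
W2 = rng.normal(size=(2, 4))   # linear output layer
x = rng.normal(size=3)
y = rng.normal(size=2)

def loss(W1v):
    out = W2 @ np.maximum(0, W1v @ x)
    return 0.5 * np.sum((out - y) ** 2)

h = np.maximum(0, W1 @ x)
gate = (W1 @ x > 0).astype(float)
g = W2 @ h - y                   # g = gradient of E w.r.t. the network output
delta = gate * (W2.T @ g)        # delta^j = <g, pi^j>, zero for inactive units
grad_W1 = np.outer(delta, x)     # (grad_W E)_j = delta^j * phi^j(x_in)

# numerical gradient for comparison
num = np.zeros_like(W1)
for i in range(4):
    for k in range(3):
        P, M = W1.copy(), W1.copy()
        P[i, k] += 1e-6; M[i, k] -= 1e-6
        num[i, k] = (loss(P) - loss(M)) / 2e-6
```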
1509.03005#36 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | and consider the function E(W) = E(F^W(x_in), y): H → R: W ↦ E(F^W(x_in), y). Let g = ∇_{x_out} E(x_out, y). The gradient of the error function is (∇_W E)_j = ⟨g, (∇_W F^W(x_in))_j⟩ = δ^j · φ^j(x_in), where the backpropagated error signal δ^j received by unit j decomposes as δ^j = ⟨g, π^j⟩. Proof Direct computation. The lemma holds generically for networks of rectifier and linear units. We apply it to actor, critic and deviator networks below. # 5.2 A minimal DAC model This subsection proves condition C1 of compatible function approximation for a minimal, linear Deviator-Actor-Critic model. The next subsection shows how the minimal model arises at the level of Actor-units. Definition 4 (minimal model) The minimal model of a Deviator-Actor-Critic consists of an Actor with linear policy μ_θ(s) = ⟨θ, φ(s)⟩ + ε, where θ is an m-vector and ε is a noisy scalar. The Critic and Deviator together output: Q^{v,w}(s, μ_θ(s), ε) = Q^v(s) + G^w(μ_θ(s), ε) = ⟨φ(s), v⟩ (Critic) + μ_θ(s)·⟨ε, w⟩ (Deviator), | 1509.03005#35 | 1509.03005#37 | 1509.03005 | [
"1502.02251"
] |
1509.03005#37 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | where v is an m-vector, w is a scalar, and ⟨ε, w⟩ is simply scalar multiplication. The Critic in the minimal model is standard. However, the Deviator has been reduced to almost nothing: it learns a single scalar parameter, w, that is used to train the actor. The minimal model is thus too simple to be much use as a standalone algorithm. Lemma 8 (compatible function approximation for the minimal model) There exists a reparametrization of the gradient estimate of the minimal model, G^W̃(s, ε) = G^w(μ_θ(s), ε), such that compatibility condition C1 in Theorem 2 is satisfied: ∇_ε G^W̃(s, ε) = ⟨∇_θ μ_θ(s), W̃⟩. Proof Let W̃ := w·θ and construct G^W̃(s, ε) := ⟨W̃, φ(s)⟩·ε. Clearly, G^W̃(s, ε) = ⟨w·θ, φ(s)⟩·ε = μ_θ(s)·⟨w, ε⟩ = G^w(μ_θ(s), ε). Observe that ∇_ε G^W̃(s, ε) = w·μ_θ(s) and that, similarly, ⟨∇_θ μ_θ(s), W̃⟩ = w·μ_θ(s), | 1509.03005#36 | 1509.03005#38 | 1509.03005 | [
"1502.02251"
] |
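The reparametrization in the proof of Lemma 8 can be checked numerically (a sketch; the particular values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 3
theta = rng.normal(size=m)   # actor parameters of the minimal model
w = 0.7                      # deviator's single scalar parameter
phi_s = rng.normal(size=m)   # features phi(s) of some state

mu = theta @ phi_s           # mu_theta(s) = <theta, phi(s)>
W_tilde = w * theta          # reparametrized deviator weights W~ = w * theta

def G(eps):
    return (W_tilde @ phi_s) * eps   # G^{W~}(s, eps) = <W~, phi(s)> * eps

dG_deps = W_tilde @ phi_s            # derivative of G w.r.t. eps
compat = phi_s @ W_tilde             # <grad_theta mu_theta(s), W~>; grad is phi(s)
```

Both quantities reduce to w·μ_θ(s), which is exactly the compatibility condition C1.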
1509.03005#38 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | as required. # 5.3 Proof of Theorem 6 The proof proceeds by showing that the compatibility conditions in Theorem 2 hold for each Actor-unit. The key step is to relate the Actor-units to the minimal model introduced above. Lemma 9 (reduction to minimal model) Actor-units in a DAC neural network are equivalent to minimal-model Actors. Proof Let π^j denote the influence of Actor-unit j at time t. When unit j is active, Lemma 7ab implies we can write μ^{Θ_t}(s_t) = π^j · x^j + π^{¬j} · s_t, where π^{¬j} sums over active paths through the Actor-network that do not intersect unit j. Following Remark 3, the active subnetwork of the Deviator-network at time t is a linear transform which, by abuse of notation, we denote by W_t. Combine the last two points to obtain G^{W_t}(ŝ_t) = W_t · (π^j · ⟨θ^j, φ^j(s_t)⟩ + π^{¬j} · s_t) = (W_t · π^j) · ⟨θ^j, φ^j(s_t)⟩ + terms that can be omitted. Observe that (W_t · π^j) is a d-vector. We have therefore reduced Actor-unit j's interaction with the Deviator-network to d copies of the minimal model. Theorem 6 follows from combining the above Lemmas. | 1509.03005#37 | 1509.03005#39 | 1509.03005 | [
"1502.02251"
] |