Dataset schema (one row per paper chunk): doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
1510.01378 | 7 |

$$\overrightarrow{h}_t = \phi(\overrightarrow{W}_h \overrightarrow{h}_{t-1} + \overrightarrow{W}_x x_t), \qquad (8)$$

$$\overleftarrow{h}_t = \phi(\overleftarrow{W}_h \overleftarrow{h}_{t+1} + \overleftarrow{W}_x x_t), \qquad (9)$$

$$h_t = [\overrightarrow{h}_t : \overleftarrow{h}_t], \qquad (10)$$
where [x : y] denotes the concatenation of x and y. Finally, we can stack RNNs by using h as the input to another RNN, creating deeper architectures [13]
$$h^l_t = \phi(W_h h^l_{t-1} + W_x h^{l-1}_t). \qquad (11)$$
In vanilla RNNs, the activation function φ is usually a sigmoid-shaped squashing function, such as the hyperbolic tangent. Training such networks is known to be particularly difficult because of vanishing and exploding gradients [14].
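As an illustration (not code from the paper), here is a minimal NumPy sketch of a single vanilla RNN step and of the stacked recurrence in equation (11); the function names, shapes and random weights are assumptions made for the example.

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, phi=np.tanh):
    """One step of a vanilla RNN: h_t = phi(W_h h_{t-1} + W_x x_t)."""
    return phi(W_h @ h_prev + W_x @ x_t)

def stacked_rnn_step(h_prev_layers, x_t, params, phi=np.tanh):
    """Equation (11): layer l takes the hidden state of layer l-1 as its input."""
    inp, new_states = x_t, []
    for (W_h, W_x), h_prev in zip(params, h_prev_layers):
        h = rnn_step(h_prev, inp, W_h, W_x, phi)
        new_states.append(h)
        inp = h  # the output of layer l feeds layer l+1
    return new_states

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
d_in, d_h, n_layers = 3, 4, 2
params = [(rng.normal(size=(d_h, d_h)),
           rng.normal(size=(d_h, d_in if l == 0 else d_h))) for l in range(n_layers)]
h0 = [np.zeros(d_h) for _ in range(n_layers)]
print(stacked_rnn_step(h0, rng.normal(size=d_in), params)[-1].shape)  # (4,)
```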
# 3.1 Long Short-Term Memory
A commonly used recurrent structure is the Long Short-Term Memory (LSTM). It addresses the vanishing gradient problem commonly found in vanilla RNNs by incorporating gating functions into its state dynamics [6]. At each time step, an LSTM maintains a hidden vector h and a cell vector c responsible for controlling state updates and outputs. More concretely, we define the computation at time step t as follows [15]:

[Row metadata for the 1510.01378 chunks | id: 1510.01378#7 | title: Batch Normalized Recurrent Neural Networks | summary: Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feedforward neural networks. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we show that applying batch normalization to the hidden-to-hidden transitions of our RNNs doesn't help the training procedure. We also show that when applied to the input-to-hidden transitions, batch normalization can lead to a faster convergence of the training criterion but doesn't seem to improve the generalization performance on both our language modelling and speech recognition tasks. All in all, applying batch normalization to RNNs turns out to be more challenging than applying it to feedforward networks, but certain variants of it can still be beneficial. | source: http://arxiv.org/pdf/1510.01378 | authors: César Laurent, Gabriel Pereyra, Philémon Brakel, Ying Zhang, Yoshua Bengio | categories: stat.ML, cs.LG, cs.NE | primary_category: stat.ML | published: 20151005 | updated: 20151005 | references: 1502.03167, 1502.00512, 1507.00210]
1510.01378 | 8 |

$$i_t = \mathrm{sigmoid}(W_{hi} h_{t-1} + W_{xi} x_t) \qquad (12)$$

$$f_t = \mathrm{sigmoid}(W_{hf} h_{t-1} + W_{xf} x_t) \qquad (13)$$

$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_{hc} h_{t-1} + W_{xc} x_t) \qquad (14)$$

$$o_t = \mathrm{sigmoid}(W_{ho} h_{t-1} + W_{xo} x_t + W_{co} c_t) \qquad (15)$$

$$h_t = o_t \odot \tanh(c_t) \qquad (16)$$
where sigmoid(·) is the logistic sigmoid function, tanh is the hyperbolic tangent function, the Wh· are the recurrent weight matrices and the Wx· are the input-to-hidden weight matrices. it, ft and ot are respectively the input, forget and output gates, and ct is the cell state.
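As an illustration of equations (12)-(16), a minimal NumPy sketch of one LSTM step; biases are omitted for brevity, and the weight-dictionary layout is an assumption of this example, not the paper's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h_prev, c_prev, x_t, W):
    """One LSTM step following equations (12)-(16); W maps names to weight matrices."""
    i = sigmoid(W["hi"] @ h_prev + W["xi"] @ x_t)                   # input gate   (12)
    f = sigmoid(W["hf"] @ h_prev + W["xf"] @ x_t)                   # forget gate  (13)
    c = f * c_prev + i * np.tanh(W["hc"] @ h_prev + W["xc"] @ x_t)  # cell update  (14)
    o = sigmoid(W["ho"] @ h_prev + W["xo"] @ x_t + W["co"] @ c)     # output gate  (15)
    h = o * np.tanh(c)                                              # hidden state (16)
    return h, c

rng = np.random.default_rng(0)
d_x, d_h = 3, 5
W = {k: rng.normal(scale=0.1, size=(d_h, d_h if k[0] in ("h", "c") else d_x))
     for k in ["hi", "xi", "hf", "xf", "hc", "xc", "ho", "xo", "co"]}
h, c = lstm_step(np.zeros(d_h), np.zeros(d_h), rng.normal(size=d_x), W)
print(h.shape, c.shape)  # (5,) (5,)
```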
1510.01378 | 9 | # 4 Batch Normalization for RNNs
From equation 6, an analogous way to apply batch normalization to an RNN would be as follows:
$$h_t = \phi(\mathrm{BN}(W_h h_{t-1} + W_x x_t)). \qquad (17)$$
However, in our experiments, when batch normalization was applied in this fashion, it didn't help the training procedure (see appendix A for more details). Instead we propose to apply batch normalization only to the input-to-hidden transition (Wx xt), i.e. as follows:
$$h_t = \phi(W_h h_{t-1} + \mathrm{BN}(W_x x_t)). \qquad (18)$$
This idea is similar to the way dropout [16] can be applied to RNNs [17]: batch normalization is applied only on the vertical connections (i.e. from one layer to another) and not on the horizontal connections (i.e. within the recurrent layer). We use the same principle for LSTMs: batch normalization is only applied after multiplication with the input-to-hidden weight matrices Wx·.
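A minimal batched NumPy sketch of equation (18) with training-mode statistics only; inference-time running averages and the per-gate LSTM version are omitted, and all names are illustrative assumptions.

```python
import numpy as np

def batch_norm(z, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch axis (axis 0), then scale and shift."""
    mean = z.mean(axis=0, keepdims=True)
    var = z.var(axis=0, keepdims=True)
    return gamma * (z - mean) / np.sqrt(var + eps) + beta

def rnn_step_bn_input(h_prev, x_t, W_h, W_x, gamma, beta, phi=np.tanh):
    """Equation (18): BN is applied to W_x x_t only; the recurrent term is left untouched."""
    return phi(h_prev @ W_h.T + batch_norm(x_t @ W_x.T, gamma, beta))

rng = np.random.default_rng(0)
batch, d_x, d_h = 8, 3, 4
W_h, W_x = rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_x))
gamma, beta = np.ones(d_h), np.zeros(d_h)
h = rnn_step_bn_input(np.zeros((batch, d_h)), rng.normal(size=(batch, d_x)),
                      W_h, W_x, gamma, beta)
print(h.shape)  # (8, 4)
```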
| Model | Train FCE | Train FER | Dev FCE | Dev FER |
|---|---|---|---|---|
| BiRNN | 0.95 | 0.28 | 1.11 | 0.33 |
| BiRNN (BN) | 0.73 | 0.22 | 1.19 | 0.34 |
1510.01378 | 10 | Table 1: Best framewise cross entropy (FCE) and frame error rate (FER) on the training and development sets for both networks.
# 4.1 Frame-wise and Sequence-wise Normalization
In experiments where we don't have access to the future frames, as in language modelling where the goal is to predict the next character, we are forced to compute the normalization statistics at each time step:
$$\hat{x}_{k,t} = \frac{x_{k,t} - \bar{x}_{k,t}}{\sqrt{\sigma^2_{k,t} + \epsilon}} \qquad (19)$$
We'll refer to this as frame-wise normalization.
In applications like speech recognition, we usually have access to the entire sequences. However, those sequences may have variable length. Usually, when using mini-batches, the smaller sequences are padded with zeroes to match the size of the longest sequence of the mini-batch. In such setups we can't use frame-wise normalization, because the number of unpadded frames decreases along the time axis, leading to increasingly poorer statistics estimates. To solve this problem, we apply a sequence-wise normalization, where we compute the mean and variance of each feature along both the time and batch axes using
$$\bar{x}_k = \frac{1}{n} \sum_{i=1}^{m} \sum_{t=1}^{T} x_{k,i,t}, \qquad (20)$$
$$\sigma^2_k = \frac{1}{n} \sum_{i=1}^{m} \sum_{t=1}^{T} (x_{k,i,t} - \bar{x}_k)^2, \qquad (21)$$
1510.01378 | 11 |
where T is the length of each sequence and n is the total number of unpadded frames in the mini-batch. We'll refer to this type of normalization as sequence-wise normalization.
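A minimal NumPy sketch contrasting the two schemes; only the statistics computation is shown, and the mask-based handling of padded frames is an assumption of this example, since the paper does not spell out an implementation.

```python
import numpy as np

def frame_wise_stats(x):
    """Equation (19): one mean/variance per feature at every time step (batch axis only)."""
    return x.mean(axis=0), x.var(axis=0)           # shapes (T, d)

def sequence_wise_stats(x, mask):
    """Equations (20)-(21): pool statistics over batch and time, counting only unpadded frames."""
    n = mask.sum()                                  # total number of unpadded frames
    mask = mask[:, :, None]                         # (m, T, 1) to broadcast over features
    mean = (x * mask).sum(axis=(0, 1)) / n
    var = (((x - mean) ** 2) * mask).sum(axis=(0, 1)) / n
    return mean, var                                # shapes (d,)

rng = np.random.default_rng(0)
m, T, d = 4, 6, 3                                   # sequences, max length, features
x = rng.normal(size=(m, T, d))
lengths = np.array([6, 5, 3, 2])
mask = (np.arange(T)[None, :] < lengths[:, None]).astype(float)
x *= mask[:, :, None]                               # zero-pad the short sequences
print(sequence_wise_stats(x, mask)[0].shape)        # (3,)
```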
# 5 Experiments
We ran experiments on a speech recognition task and a language modelling task. The models were implemented using Theano [18] and Blocks [19].
# 5.1 Speech Alignment Prediction
For the speech task, we used the Wall Street Journal (WSJ) [20] speech corpus. We used the si284 split as training set and evaluated our models on the dev93 development set. The raw audio was transformed into 40-dimensional log mel filter-banks (plus energy), with deltas and delta-deltas. As in [21], the forced alignments were generated from the Kaldi recipe tri4b, leading to 3546 clustered triphone states. Because of memory issues, we removed from the training set the sequences that were longer than 1300 frames (4.6% of the set), leading to a training set of 35746 sequences.
1510.01378 | 12 | The baseline model (BL) is a stack of 5 bidirectional LSTM layers with 250 hidden units each, followed by a size 3546 softmax output layer. All the weights were initialized using the Glorot [22] scheme and all the biases were set to zero. For the batch normalized model (BN) we applied sequence-wise normalization to each LSTM of the baseline model. Both networks were trained using standard SGD with momentum, with a fixed learning rate of 1e-4 and a fixed momentum factor of 0.9. The mini-batch size is 24.
[Figure 1 plot: frame-wise cross entropy (y-axis) against training progress in units of 250 batches (x-axis); curves for BL train, BL dev, BN train and BN dev.]
Figure 1: Frame-wise cross entropy on WSJ for the baseline (blue) and batch normalized (red) networks. The dotted lines are the training curves and the solid lines are the validation curves.
# 5.2 Language Modeling
We used the Penn Treebank (PTB) [23] corpus for our language modeling experiments. We use the standard split (929k training words, 73k validation words, and 82k test words) and a vocabulary of 10k words. We train a small, a medium and a large LSTM as described in [17].
1510.01378 | 13 | All models consist of two stacked LSTM layers and are trained with stochastic gradient descent (SGD) with a learning rate of 1 and a mini-batch size of 32.
The small LSTM has two layers of 200 memory cells, with parameters being initialized from a uniform distribution with range [-0.1, 0.1]. We back propagate across 20 time steps and the gradients are scaled according to the maximum norm of the gradients whenever the norm is greater than 10. We train for 15 epochs and halve the learning rate every epoch after the 6th.
The medium LSTM has a hidden size of 650 for both layers, with parameters being initialized from a uniform distribution with range [-0.05, 0.05]. We apply dropout with probability of 50% between all layers. We back propagate across 35 time steps and gradients are scaled according to the maximum norm of the gradients whenever the norm is greater than 5. We train for 40 epochs and divide the learning rate by 1.2 every epoch after the 6th.
1510.01378 | 14 | The Large LSTM has two layers of 1500 memory cells, with parameters being initialized from a uniform distribution with range [-0.04, 0.04]. We apply dropout between all layers. We back propagate across 35 time steps and gradients are scaled according to the maximum norm of the gradients whenever the norm is greater than 5. We train for 55 epochs and divide the learning rate by 1.15 every epoch after the 15th.
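The gradient scaling rule used for all three models ("scaled according to the maximum norm of the gradients whenever the norm is greater than N") is, as we read it, the usual rescaling by the global gradient norm; a minimal NumPy sketch under that assumption:

```python
import numpy as np

def rescale_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their global L2 norm never exceeds max_norm."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        grads = [g * (max_norm / total_norm) for g in grads]
    return grads

grads = [np.full((2, 2), 10.0), np.full(3, 10.0)]
clipped = rescale_by_global_norm(grads, max_norm=5.0)
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))  # ~5.0
```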
# 6 Results and Discussion
Figure 1 shows the training and development framewise cross entropy curves for both networks of the speech experiments. As we can see, the batch normalized network trains faster (at some points about twice as fast as the baseline), but overfits more. The best results, reported in table 1, are comparable to the ones obtained in [21].
Figure 2 shows the training and validation perplexity for the large LSTM network of the language experiment. We can also observe that the training is faster when we apply batch normalization to
[Figure 2 plot: perplexity (y-axis) against training epochs (x-axis); curves for Large BL train, Large BL valid, Large BN train and Large BN valid.]
Figure 2: Large LSTM on Penn Treebank for the baseline (blue) and the batch normalized (red) networks. The dotted lines are the training curves and the solid lines are the validation curves.
1510.01378 | 15 |

| Model | Train | Valid |
|---|---|---|
| Small LSTM | 78.5 | 119.2 |
| Small LSTM (BN) | 62.5 | 120.9 |
| Medium LSTM | 49.1 | 89.0 |
| Medium LSTM (BN) | 41.0 | 90.6 |
| Large LSTM | 49.3 | 81.8 |
| Large LSTM (BN) | 35.0 | 97.4 |
Table 2: Best perplexity on training and development sets for LSTMs on Penn Treebank.
the network. However, it also overfits more than the baseline version. The best results are reported in table 2.
For both experiments we observed a faster training and a greater overfitting when using our version of batch normalization. This last effect is less prevalent in the speech experiment, perhaps because the training set is much bigger, or perhaps because the frame-wise normalization is less effective than the sequence-wise one: in the language modelling task we predict one character at a time, whereas we predict the whole sequence in the speech experiment.
Batch normalization also allows for higher learning rates in feedforward networks; however, since we only applied batch normalization to parts of the network, higher learning rates didn't work well, because they affected the un-normalized parts as well.
1510.01378 | 16 | Our experiments suggest that applying batch normalization to the input-to-hidden connections in RNNs can improve the conditioning of the optimization problem. Future directions include whitening input-to-hidden connections [10] and normalizing the hidden state instead of just a portion of the network.
# Acknowledgments
Part of this work was funded by Samsung. We also want to thank Nervana Systems for providing GPUs.
# References
[1] Sergey Ioffe and Christian Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.
[2] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton, "Speech recognition with deep recurrent neural networks," in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013, pp. 6645–6649.
[3] Ilya Sutskever, Oriol Vinyals, and Quoc Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112.
1510.01378 | 17 | [4] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[5] Tomáš Mikolov, "Statistical language models based on neural networks," Presentation at Google, Mountain View, 2nd April, 2012.
[6] Sepp Hochreiter and Jürgen Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[7] Will Williams, Niranjani Prasad, David Mrva, Tom Ash, and Tony Robinson, "Scaling recurrent neural network language models," arXiv preprint arXiv:1502.00512, 2015.
[8] Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al., "Deepspeech: Scaling up end-to-end speech recognition," arXiv preprint arXiv:1412.5567, 2014.
1510.01378 | 18 | [9] Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller, "Efficient backprop," in Neural Networks: Tricks of the Trade, pp. 9–48. Springer, 2012.
[10] Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and Koray Kavukcuoglu, "Natural neural networks," arXiv preprint arXiv:1507.00210, 2015.
[11] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision (IJCV), pp. 1–42, April 2015.
[12] Mike Schuster and Kuldip K Paliwal, "Bidirectional recurrent neural networks," Signal Processing, IEEE Transactions on, vol. 45, no. 11, pp. 2673–2681, 1997.
1510.01378 | 19 | [13] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio, "How to construct deep recurrent neural networks," arXiv preprint arXiv:1312.6026, 2013.
[14] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio, "On the difficulty of training recurrent neural networks," arXiv preprint arXiv:1211.5063, 2012.
[15] Felix A Gers, Nicol N Schraudolph, and Jürgen Schmidhuber, "Learning precise timing with LSTM recurrent networks," The Journal of Machine Learning Research, vol. 3, pp. 115–143, 2003.
[16] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
1510.01378 | 20 | [17] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals, "Recurrent neural network regularization," arXiv preprint arXiv:1409.2329, 2014.
[18] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio, "Theano: new features and speed improvements," Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[19] B. van Merriënboer, D. Bahdanau, V. Dumoulin, D. Serdyuk, D. Warde-Farley, J. Chorowski, and Y. Bengio, "Blocks and Fuel: Frameworks for deep learning," ArXiv e-prints, June 2015.
| Model | Train | Valid |
|---|---|---|
| Best Baseline | 1.05 | 1.10 |
| Best Batch Norm | 1.07 | 1.11 |
Table 3: Best frame-wise cross entropy for the best baseline network and for the best batch normalized one.
1510.01378 | 21 |
[20] Douglas B Paul and Janet M Baker, "The design for the Wall Street Journal-based CSR corpus," in Proceedings of the Workshop on Speech and Natural Language. Association for Computational Linguistics, 1992, pp. 357–362.
[21] Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed, "Hybrid speech recognition with deep bidirectional LSTM," in Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, 2013, pp. 273–278.
[22] Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks," in International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256.
[23] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini, "Building a large annotated corpus of English: The Penn Treebank," Computational Linguistics, vol. 19, no. 2, pp. 313–330, 1993.
1510.01378 | 22 | # A Experiments with Normalization Inside the Recurrence
In our first experiments we investigated whether batch normalization can be applied in the same way as in a feedforward network (equation 17). We tried it on a language modelling task on the Penn Treebank dataset, where the goal was to predict the next characters of a fixed-length sequence of 100 symbols.
The network is composed of a lookup table of dimension 250 followed by 3 layers of simple recurrent networks with 250 hidden units each. A dimension 50 softmax layer is added on the top. In the batch normalized networks, we apply batch normalization to the hidden-to-hidden transition, as in equation 17, meaning that we compute one mean and one variance for each of the 250 features at each time step. For inference, we also keep track of the statistics for each time step. However, we used the same γ and β for each time step.
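A minimal NumPy sketch of this setup, i.e. equation (17) with separate statistics per time step but a single gamma and beta shared across time; training-mode statistics only, and all names and shapes are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def bn_recurrence_forward(X, W_h, W_x, gamma, beta, eps=1e-5, phi=np.tanh):
    """Equation (17): BN over the whole pre-activation, with per-time-step statistics
    but a single (gamma, beta) shared across time, as described in appendix A."""
    batch, T, _ = X.shape
    d_h = W_h.shape[0]
    h = np.zeros((batch, d_h))
    per_step_stats, outputs = [], []
    for t in range(T):
        z = h @ W_h.T + X[:, t] @ W_x.T              # pre-activation W_h h_{t-1} + W_x x_t
        mean, var = z.mean(axis=0), z.var(axis=0)    # one mean/variance per feature, per step
        per_step_stats.append((mean, var))           # kept for inference
        h = phi(gamma * (z - mean) / np.sqrt(var + eps) + beta)
        outputs.append(h)
    return np.stack(outputs, axis=1), per_step_stats

rng = np.random.default_rng(0)
batch, T, d_x, d_h = 8, 5, 3, 4
out, stats = bn_recurrence_forward(rng.normal(size=(batch, T, d_x)),
                                   rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_x)),
                                   np.ones(d_h), np.zeros(d_h))
print(out.shape, len(stats))  # (8, 5, 4) 5
```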
1510.01378 | 23 | The lookup table is randomly initialized using an isotropic Gaussian with zero mean and unit variance. All the other matrices of the network are initialized using the Glorot scheme [22] and all the biases are set to zero. We used SGD with momentum. We performed a random search over the learning rate (distributed in the range [0.0001, 1]), the momentum (with possible values of 0.5, 0.8, 0.9, 0.95, 0.995), and the batch size (32, 64 or 128). We let the experiment run for 20 epochs. A total of 52 experiments were performed.
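A small sketch of that random search; the paper only states the range [0.0001, 1] for the learning rate, so the log-uniform draw below is an assumption, and the helper name is illustrative.

```python
import random

def sample_config(rng):
    """One draw from the random-search space described above."""
    return {
        "learning_rate": 10 ** rng.uniform(-4, 0),           # spans [0.0001, 1] (log-uniform assumed)
        "momentum": rng.choice([0.5, 0.8, 0.9, 0.95, 0.995]),
        "batch_size": rng.choice([32, 64, 128]),
    }

rng = random.Random(0)
configs = [sample_config(rng) for _ in range(52)]             # 52 experiments, as in the appendix
print(configs[0])
```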
1510.01378 | 24 | In every experiment that we ran, the performance of the batch normalized networks was always slightly worse than (or at best equivalent to) that of the baseline networks, except for the ones where the learning rate is too high and the baseline diverges while the batch normalized one is still able to train. Figure 3 shows an example of a working experiment. We observed that in practically all the experiments that converged, the normalization was actually harming the performance. Table 3 shows the results of the best baseline and batch normalized networks. We can observe that both best networks have similar performances. The settings for the best baseline are: learning rate 0.42, momentum 0.95, batch size 32. The settings for the best batch normalized network are: learning rate 3.71e-4, momentum 0.995, batch size 128.
Those results suggest that this way of applying batch normalization in recurrent networks is not optimal: it seems that batch normalization hurts the training procedure. This may be due to the fact that we estimate new statistics at each time step, or to the repeated application of γ and β during the recurrence, which could lead to exploding or vanishing gradients. We will investigate more in depth what happens in the batch normalized networks, especially during back-propagation.
[Figure 3 plot: cross entropy (y-axis) against training epochs (x-axis); curves for BL train and BN train.]
1510.00149 | 0 | arXiv:1510.00149v5 [cs.CV] 15 Feb 2016
Published as a conference paper at ICLR 2016
DEEP COMPRESSION: COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION AND HUFFMAN CODING
Song Han, Stanford University, Stanford, CA 94305, USA ([email protected])
Huizi Mao, Tsinghua University, Beijing, 100084, China ([email protected])
William J. Dally, Stanford University, Stanford, CA 94305, USA and NVIDIA, Santa Clara, CA 95050, USA ([email protected])
# ABSTRACT

[Row metadata for the 1510.00149 chunks | id: 1510.00149#0 | title: Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | source: http://arxiv.org/pdf/1510.00149 | authors: Song Han, Huizi Mao, William J. Dally | categories: cs.CV, cs.NE | comment: Published as a conference paper at ICLR 2016 (oral) | primary_category: cs.CV | published: 20151001 | updated: 20160215 | references: 1504.08083, 1504.04788, 1602.01528]
1510.00149 | 1 | Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35× to 49× without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9× to 13×; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35×, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49× from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip
1510.00149 | 2 | SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has 3× to 4× layerwise speedup and 3× to 7× better energy efficiency.
1510.00149 | 3 | # INTRODUCTION
Deep neural networks have evolved into the state-of-the-art technique for computer vision tasks (Krizhevsky et al., 2012) (Simonyan & Zisserman, 2014). Though these neural networks are very powerful, the large number of weights consumes considerable storage and memory bandwidth. For example, the AlexNet Caffemodel is over 200MB, and the VGG-16 Caffemodel is over 500MB (BVLC). This makes it difficult to deploy deep neural networks on mobile systems.
First, for many mobile-first companies such as Baidu and Facebook, various apps are updated via different app stores, and they are very sensitive to the size of the binary files. For example, App Store has the restriction "apps above 100 MB will not download until you connect to Wi-Fi". As a result, a feature that increases the binary size by 100MB will receive much more scrutiny than one that increases it by 10MB. Although having deep neural networks running on mobile has many great
1510.00149 | 4 |
[Figure 1 diagram: the three stages, pruning (fewer weights, 9x-13x size reduction), quantization (fewer bits per weight via a codebook of shared weights, 27x-31x reduction), and Huffman encoding (35x-49x reduction), each preserving the original network accuracy.]
Figure 1: The three stage compression pipeline: pruning, quantization and Huffman coding. Pruning reduces the number of weights by 10×, while quantization further improves the compression rate: between 27× and 31×. Huffman coding gives more compression: between 35× and 49×. The compression rate already includes the meta-data for sparse representation. The compression scheme doesn't incur any accuracy loss.
features such as better privacy, less network bandwidth and real time processing, the large storage overhead prevents deep neural networks from being incorporated into mobile apps.
The second issue is energy consumption. Running large neural networks requires a lot of memory bandwidth to fetch the weights and a lot of computation to do dot products, which in turn consumes considerable energy. Mobile devices are battery constrained, making power hungry applications such as deep neural networks hard to deploy.
Energy consumption is dominated by memory access. Under 45nm CMOS technology, a 32-bit floating point add consumes 0.9pJ, a 32-bit SRAM cache access takes 5pJ, while a 32-bit DRAM memory access takes 640pJ, which is 3 orders of magnitude more than an add operation. Large networks do not fit in on-chip storage and hence require the more costly DRAM accesses. Running a 1 billion connection neural network, for example, at 20fps would require (20Hz)(1G)(640pJ) = 12.8W just for DRAM access - well beyond the power envelope of a typical mobile device.
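As a rough sanity check on this arithmetic, here is a minimal sketch; the per-access energies are the 45nm figures quoted above, and the 1-billion-connection model and 20 fps rate are the same illustrative assumptions:

```python
# Per-access energies quoted above for 45nm CMOS (picojoules per 32-bit access).
DRAM_ACCESS_PJ = 640.0
SRAM_ACCESS_PJ = 5.0

def weight_fetch_power_watts(num_connections, fps, energy_pj):
    """Power spent just fetching every weight once per frame."""
    return num_connections * fps * energy_pj * 1e-12   # pJ per second -> watts

print(weight_fetch_power_watts(1e9, 20, DRAM_ACCESS_PJ))   # 12.8 W from DRAM
print(weight_fetch_power_watts(1e9, 20, SRAM_ACCESS_PJ))   # 0.1 W if the weights fit in SRAM
```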
Our goal is to reduce the storage and energy required to run inference on such large networks so they can be deployed on mobile devices. To achieve this goal, we present "deep compression": a three-stage pipeline (Figure 1) to reduce the storage required by neural networks in a manner that preserves the original accuracy. First, we prune the network by removing the redundant connections, keeping only the most informative connections. Next, the weights are quantized so that multiple connections share the same weight; thus only the codebook (effective weights) and the indices need to be stored. Finally, we apply Huffman coding to take advantage of the biased distribution of effective weights.
Our main insight is that pruning and trained quantization are able to compress the network without interfering with each other, thus leading to a surprisingly high compression rate. This makes the required storage so small (a few megabytes) that all weights can be cached on chip instead of going to off-chip DRAM, which is energy consuming. Based on "deep compression", the EIE hardware accelerator (Han et al., 2016) was later proposed that works on the compressed model, achieving significant speedup and energy efficiency improvement.
# 2 NETWORK PRUNING
Network pruning has been widely studied to compress CNN models. In early work, network pruning proved to be a valid way to reduce the network complexity and over-fitting (LeCun et al., 1989; Hanson & Pratt, 1989; Hassibi et al., 1993; Ström, 1997). Recently Han et al. (2015) pruned state-of-the-art CNN models with no loss of accuracy. We build on top of that approach. As shown on the left side of Figure 1, we start by learning the connectivity via normal network training. Next, we prune the small-weight connections: all connections with weights below a threshold are removed from the network. Finally, we retrain the network to learn the final weights for the remaining sparse connections. Pruning reduced the number of parameters by 9× for AlexNet and 13× for the VGG-16 model.
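A minimal sketch of this magnitude-based pruning step, assuming a NumPy weight matrix; the layer size and the threshold below are illustrative, not values from the paper:

```python
import numpy as np

def prune_by_magnitude(W, threshold):
    """Zero out all connections whose |weight| is below the threshold and
    return the pruned weights plus the binary mask used during retraining."""
    mask = (np.abs(W) >= threshold).astype(W.dtype)
    return W * mask, mask

W = np.random.randn(256, 256).astype(np.float32)      # toy dense layer
W_pruned, mask = prune_by_magnitude(W, threshold=1.0)
print(mask.mean())   # fraction of connections kept (~0.32 for unit-Gaussian weights)
```

During retraining, the gradient of every pruned connection is multiplied by the mask so that removed connections stay at zero.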
Figure 2: Representing the matrix sparsity with relative index. Padding filler zero to prevent overflow.
Figure 3: Weight sharing by scalar quantization (top) and centroids fine-tuning (bottom).
We store the sparse structure that results from pruning using compressed sparse row (CSR) or compressed sparse column (CSC) format, which requires 2a + n + 1 numbers, where a is the number of non-zero elements and n is the number of rows or columns.
To compress further, we store the index difference instead of the absolute position, and encode this difference in 8 bits for conv layers and 5 bits for fc layers. When we need an index difference larger than the bound, we use the zero-padding solution shown in Figure 2: when the difference exceeds 8, the largest 3-bit (as an example) unsigned number, we add a filler zero.
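A minimal sketch of this relative-index encoding under the 3-bit example; the positions and values are illustrative, and the exact cursor convention at the start of a row may differ from the actual implementation:

```python
def to_relative(absolute_positions, values, bound=8):
    """Encode sparse entries as (relative_offset, value) pairs. Whenever the
    gap from the previous entry exceeds `bound` (8 in the 3-bit example of
    Figure 2), a filler entry with value 0 is emitted to advance the cursor."""
    encoded, prev = [], 0
    for pos, val in zip(absolute_positions, values):
        diff = pos - prev
        while diff > bound:
            encoded.append((bound, 0.0))     # filler zero
            prev += bound
            diff -= bound
        encoded.append((diff, val))
        prev = pos
    return encoded

# Entries at positions 1, 4 and 15 (values are illustrative).
print(to_relative([1, 4, 15], [3.4, 0.9, 1.7]))
# -> [(1, 3.4), (3, 0.9), (8, 0.0), (3, 1.7)]
```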
# 3 TRAINED QUANTIZATION AND WEIGHT SHARING
Network quantization and weight sharing further compress the pruned network by reducing the number of bits required to represent each weight. We limit the number of effective weights we need to store by having multiple connections share the same weight, and then fine-tune those shared weights.
Weight sharing is illustrated in Figure 3. Suppose we have a layer with 4 input neurons and 4 output neurons; the weight matrix is 4 × 4. On the top left is the 4 × 4 weight matrix, and on the bottom left is the 4 × 4 gradient matrix. The weights are quantized to 4 bins (denoted with 4 colors); all the weights in the same bin share the same value, so for each weight we then need to store only a small index into a table of shared weights. During the update, all the gradients are grouped by color and summed together, multiplied by the learning rate and subtracted from the shared centroids from the last iteration. For pruned AlexNet, we are able to quantize to 8 bits (256 shared weights) for each CONV layer, and 5 bits (32 shared weights) for each FC layer without any loss of accuracy.
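A minimal sketch of what ends up being stored for one such layer, assuming a per-layer codebook plus per-connection indices; the class name and field layout are illustrative, and relative indexing and Huffman coding are omitted:

```python
import numpy as np

class SharedWeightLayer:
    """Per-layer storage after pruning + weight sharing: a small codebook of
    effective weights and, for every surviving connection, its flat position
    and a log2(k)-bit index into the codebook."""

    def __init__(self, codebook, indices, positions, shape):
        self.codebook = np.asarray(codebook, dtype=np.float32)   # k shared weights
        self.indices = np.asarray(indices, dtype=np.uint8)       # one per nonzero weight
        self.positions = np.asarray(positions, dtype=np.int64)   # flat positions of nonzeros
        self.shape = shape

    def to_dense(self):
        """Rebuild the dense weight matrix for a feed-forward pass."""
        W = np.zeros(int(np.prod(self.shape)), dtype=np.float32)
        W[self.positions] = self.codebook[self.indices]
        return W.reshape(self.shape)

# The 4x4 example: 4 shared weights (2-bit indices), all 16 positions kept.
layer = SharedWeightLayer(codebook=[-1.0, -0.3, 0.4, 1.2],
                          indices=np.random.randint(0, 4, 16),
                          positions=np.arange(16),
                          shape=(4, 4))
W = layer.to_dense()
```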
To calculate the compression rate, given k clusters, we only need log2(k) bits to encode the index. In general, for a network with n connections, each connection represented with b bits, constraining the connections to have only k shared weights will result in a compression rate of:
r = nb / (n log2(k) + kb)    (1)
For example, Figure 3 shows the weights of a single-layer neural network with four input units and four output units. There are 4 × 4 = 16 weights originally, but there are only 4 shared weights: similar weights are grouped together to share the same value. Originally we need to store 16 weights, each
Figure 4: Left: Three different methods for centroids initialization. Right: Distribution of weights (blue) and distribution of codebook before (green cross) and after fine-tuning (red dot).
with 32 bits; now we need to store only 4 effective weights (blue, green, red and orange), each with 32 bits, together with 16 2-bit indices, giving a compression rate of 16 × 32/(4 × 32 + 2 × 16) = 3.2.
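A minimal sketch that plugs these numbers into Eq. (1); the 9-million-weight layer in the second call is an illustrative size, not a figure from the paper:

```python
from math import log2

def compression_rate(n, b, k):
    """Eq. (1): n connections, b bits per original weight, k shared weights."""
    return (n * b) / (n * log2(k) + k * b)

print(compression_rate(n=16, b=32, k=4))          # 3.2, the 4x4 example above
print(compression_rate(n=9_000_000, b=32, k=32))  # ~6.4: codebook overhead vanishes for large n
```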
3.1 WEIGHT SHARING
We use k-means clustering to identify the shared weights for each layer of a trained network, so that all the weights that fall into the same cluster will share the same weight. Weights are not shared across layers. We partition the n original weights W = {w1, w2, ..., wn} into k clusters C = {c1, c2, ..., ck}, n >> k, so as to minimize the within-cluster sum of squares (WCSS):
arg min_C  Σ_{i=1}^{k} Σ_{w ∈ c_i} |w − c_i|²    (2)
Different from HashNet (Chen et al., 2015), where weight sharing is determined by a hash function before the network sees any training data, our method determines weight sharing after a network is fully trained, so that the shared weights approximate the original network.
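A minimal sketch of this per-layer clustering using scikit-learn; for brevity it clusters all weights of a small random layer, whereas in a pruned network only the surviving nonzero weights would be clustered:

```python
import numpy as np
from sklearn.cluster import KMeans

def share_weights(weights, k=32):
    """Cluster a layer's weights into k centroids (Eq. 2) and return the
    codebook, the per-weight cluster indices and the quantized layer."""
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10).fit(flat)
    codebook = km.cluster_centers_.ravel()             # k effective weights
    indices = km.labels_.astype(np.uint8)              # log2(k)-bit index per weight
    quantized = codebook[indices].reshape(weights.shape)
    return codebook, indices, quantized

# Toy 4x4 layer quantized to 4 shared weights (2-bit indices).
codebook, idx, W_q = share_weights(np.random.randn(4, 4), k=4)
```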
3.2 INITIALIZATION OF SHARED WEIGHTS
Centroid initialization impacts the quality of clustering and thus affects the network's prediction accuracy. We examine three initialization methods: Forgy (random), density-based, and linear initialization. In Figure 4 we plot the original weights' distribution of the conv3 layer in AlexNet (CDF in blue, PDF in red). The weights form a bimodal distribution after network pruning. On the bottom it plots the effective weights (centroids) with 3 different initialization methods (shown in blue, red and yellow). In this example, there are 13 clusters.
Forgy (random) initialization randomly chooses k observations from the data set and uses these as the initial centroids. The initialized centroids are shown in yellow. Since there are two peaks in the bimodal distribution, the Forgy method tends to concentrate around those two peaks.
Density-based initialization linearly spaces the CDF of the weights in the y-axis, then finds the horizontal intersection with the CDF, and finally finds the vertical intersection on the x-axis, which becomes a centroid, as shown by the blue dots. This method makes the centroids denser around the two peaks, but more scattered than the Forgy method.
Linear initialization linearly spaces the centroids between the [min, max] of the original weights. This initialization method is invariant to the distribution of the weights and is the most scattered compared with the former two methods.
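A minimal sketch of the three initializations on a synthetic weight sample (the sample and k = 8 are illustrative):

```python
import numpy as np

def init_centroids(weights, k, method="linear"):
    """The three initializations discussed above; `weights` stands in for the
    surviving (nonzero) weights of a pruned layer."""
    w = weights.ravel()
    if method == "forgy":            # k random observations from the data
        return np.random.choice(w, size=k, replace=False)
    if method == "density":          # equally spaced points on the CDF
        qs = (np.arange(k) + 0.5) / k
        return np.quantile(w, qs)
    if method == "linear":           # equally spaced over [min, max]
        return np.linspace(w.min(), w.max(), k)
    raise ValueError(method)

w = 0.05 * np.random.randn(10_000)   # synthetic weight sample
for m in ("forgy", "density", "linear"):
    print(m, np.round(init_centroids(w, k=8, method=m), 3))
```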
Larger weights play a more important role than smaller weights (Han et al., 2015), but there are fewer of these large weights. Thus for both Forgy initialization and density-based initialization, very few centroids have a large absolute value, which results in poor representation of these few large weights. Linear initialization does not suffer from this problem. The experiment section compares the accuracy
[Figure 5 histograms: left x-axis "Weight Index (32 Effective Weights)", right x-axis "Sparse Matrix Location Index (Max Diff is 32)"; y-axis: count.]
Figure 5: Distribution of weights (left) and indices (right). The distributions are biased.
of different initialization methods after clustering and fine-tuning, showing that linear initialization works best.
3.3 FEED-FORWARD AND BACK-PROPAGATION
The centroids of the one-dimensional k-means clustering are the shared weights. There is one level of indirection during the feed-forward and back-propagation phases when looking up the weight table. An index into the shared weight table is stored for each connection. During back-propagation, the gradient for each shared weight is calculated and used to update the shared weight. This procedure is shown in Figure 3.
We denote the loss by L, the weight in the ith column and jth row by W_ij, the centroid index of element W_ij by I_ij, and the kth centroid of the layer by C_k. Using the indicator function 1(.), the gradient of the centroids is calculated as:
∂L/∂C_k = Σ_{i,j} (∂L/∂W_ij)(∂W_ij/∂C_k) = Σ_{i,j} (∂L/∂W_ij) 1(I_ij = k)    (3)
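A minimal sketch of Eq. (3) in NumPy, with illustrative 4 × 4 shapes matching the Figure 3 example:

```python
import numpy as np

def grad_centroids(grad_W, I, k):
    """Eq. (3): dL/dC_k sums dL/dW_ij over every position whose cluster
    index I_ij equals k."""
    dC = np.zeros(k)
    for kk in range(k):
        dC[kk] = grad_W[I == kk].sum()
    return dC

grad_W = np.random.randn(4, 4)        # dL/dW for the 4x4 example of Figure 3
I = np.random.randint(0, 4, (4, 4))   # cluster index of every weight
print(grad_centroids(grad_W, I, k=4))
```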
# 4 HUFFMAN CODING
A Huffman code is an optimal prefix code commonly used for lossless data compression (Van Leeuwen, 1976). It uses variable-length codewords to encode source symbols. The table is derived from the occurrence probability of each symbol. More common symbols are represented with fewer bits.
Figure 5 shows the probability distribution of quantized weights and the sparse matrix index of the last fully connected layer in AlexNet. Both distributions are biased: most of the quantized weights are distributed around the two peaks; the sparse matrix index differences are rarely above 20. Experiments show that Huffman coding these non-uniformly distributed values saves 20%–30% of network storage.
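A minimal sketch of building such a code over a stream of quantized-weight indices and measuring the resulting bits per symbol; the skewed index distribution below is synthetic, and the paper builds its codebooks offline per layer:

```python
import heapq
from collections import Counter
import numpy as np

def huffman_avg_bits(symbols):
    """Build a Huffman code over the observed symbols and return the code
    table plus the average number of bits per symbol."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol stream
        return {next(iter(freq)): "0"}, 1.0
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, uid, merged))
        uid += 1
    codes = heap[0][2]
    total = sum(freq.values())
    avg = sum(freq[s] * len(codes[s]) for s in freq) / total
    return codes, avg

# Synthetic, heavily skewed 5-bit index stream (illustrative only).
p = np.array([2.0 ** -min(i, 10) for i in range(32)]); p /= p.sum()
idx = np.random.choice(32, size=10_000, p=p)
codes, avg = huffman_avg_bits(idx.tolist())
print(avg)   # well below the fixed 5 bits/symbol for this skewed distribution
```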
# 5 EXPERIMENTS
We pruned, quantized, and Huffman-encoded four networks: two on the MNIST and two on the ImageNet data-sets. The network parameters and accuracy1 before and after pruning are shown in Table 1. The compression pipeline saves network storage by 35× to 49× across different networks without loss of accuracy. The total size of AlexNet decreased from 240MB to 6.9MB, which is small enough to be put into on-chip SRAM, eliminating the need to store the model in energy-consuming DRAM memory.
Training is performed with the Caffe framework (Jia et al., 2014). Pruning is implemented by adding a mask to the blobs to mask out the updates of the pruned connections. Quantization and weight sharing are implemented by maintaining a codebook structure that stores the shared weights, and grouping the gradients by index after calculating the gradient of each layer. Each shared weight is updated with all the gradients that fall into that bucket. Huffman coding doesn't require training and is implemented offline after all the fine-tuning is finished.
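A framework-agnostic sketch of one such fine-tuning step, combining the pruning mask with the grouped centroid update; shapes, data and the learning rate are illustrative, and the actual implementation lives inside Caffe rather than NumPy:

```python
import numpy as np

def finetune_step(codebook, I, mask, grad_W, lr=0.01):
    """One illustrative fine-tuning step: the pruning mask zeroes gradients of
    removed connections, the surviving gradients are grouped by codebook index
    (np.add.at) to update the shared centroids, and the layer's weights are
    regenerated from the updated codebook."""
    g = grad_W * mask                      # masked gradient
    dC = np.zeros_like(codebook)
    np.add.at(dC, I.ravel(), g.ravel())    # group-by-index accumulation
    codebook = codebook - lr * dC
    W = codebook[I] * mask                 # pruned weights stay at zero
    return W, codebook

# Toy 4x4 layer with a 4-entry codebook.
mask = (np.random.rand(4, 4) > 0.5).astype(np.float32)
I = np.random.randint(0, 4, size=(4, 4))
codebook = np.random.randn(4).astype(np.float32)
W, codebook = finetune_step(codebook, I, mask, np.random.randn(4, 4))
```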
5.1 LENET-300-100 AND LENET-5 ON MNIST
We first experimented on the MNIST dataset with the LeNet-300-100 and LeNet-5 networks (LeCun et al., 1998). LeNet-300-100 is a fully connected network with two hidden layers, with 300 and 100
1 Reference model is from the Caffe model zoo; accuracy is measured without data augmentation.
Table 1: The compression pipeline can save 35× to 49× parameter storage with no loss of accuracy.
| Network | Top-1 Error | Top-5 Error | Parameters | Compress Rate |
|---|---|---|---|---|
| LeNet-300-100 Ref | 1.64% | - | 1070 KB | - |
| LeNet-300-100 Compressed | 1.58% | - | 27 KB | 40× |
| LeNet-5 Ref | 0.80% | - | 1720 KB | - |
| LeNet-5 Compressed | 0.74% | - | 44 KB | 39× |
| AlexNet Ref | 42.78% | 19.73% | 240 MB | - |
| AlexNet Compressed | 42.78% | 19.70% | 6.9 MB | 35× |
| VGG-16 Ref | 31.50% | 11.32% | 552 MB | - |
| VGG-16 Compressed | 31.17% | 10.91% | 11.3 MB | 49× |
Table 2: Compression statistics for LeNet-300-100. P: pruning, Q: quantization, H: Huffman coding.
| Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q) | Compress rate (P+Q+H) |
|---|---|---|---|---|---|---|---|---|
| ip1 | 235K | 8% | 6 | 4.4 | 5 | 3.7 | 3.1% | 2.32% |
| ip2 | 30K | 9% | 6 | 4.4 | 5 | 4.3 | 3.8% | 3.04% |
| ip3 | 1K | 26% | 6 | 4.3 | 5 | 3.2 | 15.7% | 12.70% |
| Total | 266K | 8% (12×) | 6 | 5.1 | 5 | 3.7 | 3.1% (32×) | 2.49% (40×) |
Table 3: Compression statistics for LeNet-5. P: pruning, Q: quantization, H: Huffman coding.
| Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q) | Compress rate (P+Q+H) |
|---|---|---|---|---|---|---|---|---|
| conv1 | 0.5K | 66% | 8 | 7.2 | 5 | 1.5 | 78.5% | 67.45% |
| conv2 | 25K | 12% | 8 | 7.2 | 5 | 3.9 | 6.0% | 5.28% |
| ip1 | 400K | 8% | 5 | 4.5 | 5 | 4.5 | 2.7% | 2.45% |
| ip2 | 5K | 19% | 5 | 5.2 | 5 | 3.7 | 6.9% | 6.13% |
| Total | 431K | 8% (12×) | 5.3 | 4.1 | 5 | 4.4 | 3.05% (33×) | 2.55% (39×) |
neurons each, which achieves a 1.6% error rate on MNIST. LeNet-5 is a convolutional network that has two convolutional layers and two fully connected layers, which achieves a 0.8% error rate on MNIST. Tables 2 and 3 show the statistics of the compression pipeline. The compression rate includes the overhead of the codebook and sparse indexes. Most of the saving comes from pruning and quantization (compressed 32×), while Huffman coding gives a marginal gain (compressed 40×).
5.2 ALEXNET ON IMAGENET
We further examine the performance of Deep Compression on the ImageNet ILSVRC-2012 dataset, which has 1.2M training examples and 50k validation examples. We use the AlexNet Caffe model as the reference model, which has 61 million parameters and achieved a top-1 accuracy of 57.2% and a top-5 accuracy of 80.3%. Table 4 shows that AlexNet can be compressed to 2.88% of its original size without impacting accuracy. There are 256 shared weights in each CONV layer, which are encoded with 8 bits, and 32 shared weights in each FC layer, which are encoded with only 5 bits. The relative sparse index is encoded with 4 bits. Huffman coding compresses an additional 22%, resulting in 35× compression in total.
5.3 VGG-16 ON IMAGENET
With promising results on AlexNet, we also looked at a larger, more recent network, VGG-16 (Simonyan & Zisserman, 2014), on the same ILSVRC-2012 dataset. VGG-16 has far more convolutional layers but still only three fully-connected layers. Following a similar methodology, we aggressively compressed both convolutional and fully-connected layers to realize a significant reduction in the number of effective weights, shown in Table 5.
The VGG-16 network as a whole has been compressed by 49×. Weights in the CONV layers are represented with 8 bits, and FC layers use 5 bits, which does not impact the accuracy. The two largest fully-connected layers can each be pruned to less than 1.6% of their original size. This reduction
Table 4: Compression statistics for AlexNet. P: pruning, Q: quantization, H: Huffman coding.
| Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q) |
|---|---|---|---|---|---|---|---|
| conv1 | 35K | 84% | 8 | 6.3 | 4 | 1.2 | 32.6% |
| conv2 | 307K | 38% | 8 | 5.5 | 4 | 2.3 | 14.5% |
| conv3 | 885K | 35% | 8 | 5.1 | 4 | 2.6 | 13.1% |
| conv4 | 663K | 37% | 8 | 5.2 | 4 | 2.5 | 14.1% |
| conv5 | 442K | 37% | 8 | 5.6 | 4 | 2.5 | 14.0% |
| fc6 | 38M | 9% | 5 | 3.9 | 4 | 3.2 | 3.0% |
| fc7 | 17M | 9% | 5 | 3.6 | 4 | 3.7 | 3.0% |
| fc8 | 4M | 25% | 5 | 4 | 4 | 3.2 | 7.3% |
| Total | 61M | 11% (9×) | 5.4 | 4 | 4 | 3.2 | 3.7% (27×) |
Table 5: Compression statistics for VGG-16. P: pruning, Q: quantization, H: Huffman coding.

| Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q+H) |
|---|---|---|---|---|---|---|---|
| conv1_1 | 2K | 58% | 8 | 6.8 | 5 | 1.7 | 29.97% |
| conv1_2 | 37K | 22% | 8 | 6.5 | 5 | 2.6 | 6.99% |
| conv2_1 | 74K | 34% | 8 | 5.6 | 5 | 2.4 | 8.91% |
| conv2_2 | 148K | 36% | 8 | 5.9 | 5 | 2.3 | 9.31% |
| conv3_1 | 295K | 53% | 8 | 4.8 | 5 | 1.8 | 11.15% |
| conv3_2 | 590K | 24% | 8 | 4.6 | 5 | 2.9 | 5.67% |
| conv3_3 | 590K | 42% | 8 | 4.6 | 5 | 2.2 | 8.96% |
| conv4_1 | 1M | 32% | 8 | 4.6 | 5 | 2.6 | 7.29% |
| conv4_2 | 2M | 27% | 8 | 4.2 | 5 | 2.9 | 5.93% |
| conv4_3 | 2M | 34% | 8 | 4.4 | 5 | 2.5 | 7.47% |
| conv5_1 | 2M | 35% | 8 | 4.7 | 5 | 2.5 | 8.00% |
| conv5_2 | 2M | 29% | 8 | 4.6 | 5 | 2.7 | 6.52% |
| conv5_3 | 2M | 36% | 8 | 4.6 | 5 | 2.3 | 7.79% |
| fc6 | 103M | 4% | 5 | 3.6 | 5 | 3.5 | 1.10% |
| fc7 | 17M | 4% | 5 | 4 | 5 | 4.3 | 1.25% |
| fc8 | 4M | 23% | 5 | 4 | 5 | 3.4 | 5.24% |
| Total | 138M | 7.5% (13×) | 6.4 | 4.1 | 5 | 3.1 | 2.05% (49×) |
is critical for real-time image processing, where there is little reuse of these layers across images (unlike batch processing). This is also critical for fast object detection algorithms where one CONV pass is used by many FC passes. The reduced layers will fit in an on-chip SRAM and have modest bandwidth requirements. Without the reduction, the bandwidth requirements are prohibitive.
# 6 DISCUSSIONS
6.1 PRUNING AND QUANTIZATION WORKING TOGETHER
Figure 6 shows the accuracy at different compression rates for pruning and quantization together or individually. When working individually, as shown in the purple and yellow lines, the accuracy of the pruned network begins to drop significantly when compressed below 8% of its original size; the accuracy of the quantized network also begins to drop significantly when compressed below 8% of its original size. But when combined, as shown in the red line, the network can be compressed to 3% of its original size with no loss of accuracy. The far right side compares the result of SVD, which is inexpensive but has a poor compression rate.
The three plots in Figure 7 show how accuracy drops with fewer bits per connection for CONV layers (left), FC layers (middle) and all layers (right). Each plot reports both top-1 and top-5 accuracy. Dashed lines applied only quantization without pruning; solid lines applied both quantization and pruning. There is very little difference between the two. This shows that pruning works well with quantization.
Quantization works well on the pruned network because unpruned AlexNet has 60 million weights to quantize, while pruned AlexNet has only 6.7 million weights to quantize. Given the same number of centroids, the latter has less error.
Figure 6: Accuracy vs. compression rate under different compression methods. Pruning and quantization work best when combined.
[Figure 7 consists of three panels plotting top-1 and top-5 accuracy against the number of bits per effective weight in FC layers, CONV layers, and all layers, for quantized-only and pruned + quantized networks.]
Figure 7: Pruning doesn't hurt quantization. Dashed: quantization on the unpruned network. Solid: quantization on the pruned network. Accuracy begins to drop at the same number of quantization bits whether or not the network has been pruned. Although pruning reduces the number of parameters, quantization still works as well, or even better (the 3-bit case in the left figure), as on the unpruned network.
[Figure 8 plots top-1 (left) and top-5 (right) accuracy against the number of bits per effective weight (2 to 8 bits) for uniform (linear), density-based and random centroid initialization.]
Figure 8: Accuracy of different initialization methods. Left: top-1 accuracy. Right: top-5 accuracy. Linear initialization gives the best result.
The first two plots in Figure 7 show that CONV layers require more bits of precision than FC layers. For CONV layers, accuracy drops significantly below 4 bits, while FC layers are more robust: the accuracy does not drop significantly until 2 bits.
6.2 CENTROID INITIALIZATION
Figure 8 compares the accuracy of the three different initialization methods with respect to top-1 accuracy (left) and top-5 accuracy (right). The network is quantized to 2 to 8 bits as shown on the x-axis. Linear initialization outperforms density initialization and random initialization in all cases except at 3 bits.
The initial centroids of linear initialization spread equally across the x-axis, from the minimum value to the maximum value. This helps maintain the large weights, which play a more important role than smaller ones, as is also shown in network pruning (Han et al., 2015). Neither random nor density-based initialization retains large centroids: with these methods, large weights are clustered to small centroids because there are few large weights. In contrast, linear initialization gives large weights a better chance to form a large centroid.
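The three initialization schemes can be sketched in a few lines; the following is an illustrative reconstruction with synthetic weights, not the code used for the experiments.

```python
# Sketch of random, density-based and linear centroid initialization for 1-D weight clustering.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=100_000)        # stand-in for one layer's weights
k = 16                                               # 16 centroids -> 4-bit indices

random_init  = rng.choice(weights, size=k, replace=False)       # samples follow the density
density_init = np.quantile(weights, (np.arange(k) + 0.5) / k)   # equally spaced on the CDF
linear_init  = np.linspace(weights.min(), weights.max(), k)     # equally spaced on the x-axis

for name, c in [("random", random_init), ("density", density_init), ("linear", linear_init)]:
    # Linear initialization always places a centroid at the extreme weight values,
    # so the rare large weights keep a nearby centroid.
    print(f"{name:8s} init: largest |centroid| = {np.max(np.abs(c)):.4f}")
```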
[Figure 9 plots the speedup (normalized to CPU) of dense vs. pruned layers on CPU, GPU and TK1 for AlexNet FC6-FC8, VGGNet FC6-FC8, and their geometric mean.]
Figure 9: Compared with the original network, pruned network layers achieved a 3× speedup on CPU, 3.5× on GPU and 4.2× on mobile GPU on average. Batch size = 1, targeting real-time processing. Performance numbers are normalized to CPU.
[Figure 10 plots the energy efficiency (normalized to CPU) for the same layers and platforms.]
Figure 10: Compared with the original network, pruned network layers take 7× less energy on CPU, 3.3× less on GPU and 4.2× less on mobile GPU on average. Batch size = 1, targeting real-time processing. Energy numbers are normalized to CPU.
6.3 SPEEDUP AND ENERGY EFFICIENCY
Deep Compression targets extremely latency-focused applications running on mobile devices, which require real-time inference, such as pedestrian detection on an embedded processor inside an autonomous vehicle. Waiting for a batch to assemble significantly adds latency. So when benchmarking the performance and energy efficiency, we consider the case when batch size = 1. The cases of batching are given in Appendix A.
Fully connected layers dominate the model size (more than 90%) and are compressed the most by Deep Compression (96% of weights pruned in VGG-16). In state-of-the-art object detection algorithms such as Fast R-CNN (Girshick, 2015), up to 38% of computation time is consumed on FC layers on the uncompressed model. So it is interesting to benchmark FC layers to see the effect of Deep Compression on performance and energy. Thus we set up our benchmark on the FC6, FC7 and FC8 layers of AlexNet and VGG-16. In the non-batched case, the activation matrix is a vector with just one column, so the computation boils down to dense / sparse matrix-vector multiplication for the original / pruned model, respectively. Since current BLAS libraries on CPU and GPU do not support indirect look-up and relative indexing, we did not benchmark the quantized model.
We compare three different off-the-shelf hardware platforms: the NVIDIA GeForce GTX Titan X and the Intel Core i7 5930K as desktop processors (same package as the NVIDIA Digits Dev Box) and the NVIDIA Tegra K1 as a mobile processor. To run the benchmark on GPU, we used cuBLAS GEMV for the original dense layer. For the pruned sparse layer, we stored the sparse matrix in CSR format and used the cuSPARSE CSRMV kernel, which is optimized for sparse matrix-vector multiplication on GPU. To run the benchmark on CPU, we used MKL CBLAS GEMV for the original dense model and MKL SPBLAS CSRMV for the pruned sparse model.
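For readers who want to reproduce the flavor of this benchmark without MKL or cuSPARSE, a rough CPU-only stand-in using scipy is sketched below; the density value is illustrative and the absolute timings will of course differ from the numbers reported here.

```python
# Rough stand-in for the dense vs. sparse GEMV comparison at batch size = 1,
# using scipy instead of MKL/cuBLAS/cuSPARSE. Layer shape matches AlexNet FC6.
import time
import numpy as np
import scipy.sparse as sp

rows, cols, density = 4096, 9216, 0.09           # AlexNet FC6 shape, ~9% weights kept (assumed)
dense_w = np.random.rand(rows, cols).astype(np.float32)
sparse_w = sp.random(rows, cols, density=density, format="csr", dtype=np.float32)
x = np.random.rand(cols).astype(np.float32)      # batch size = 1 -> a single activation vector

def bench(fn, reps=50):
    fn()                                         # warm up
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

print(f"dense GEMV  : {bench(lambda: dense_w @ x) * 1e3:.3f} ms")
print(f"sparse CSRMV: {bench(lambda: sparse_w @ x) * 1e3:.3f} ms")
```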
To compare power consumption between different systems, it is important to measure power in a consistent manner (NVIDIA, b). For our analysis, we compare the pre-regulation power of the entire application processor (AP) / SoC and DRAM combined. On CPU, the benchmark runs on a single socket with a single Haswell-E class Core i7-5930K processor. CPU socket and DRAM power are as reported by the pcm-power utility provided by Intel. For GPU, we used the nvidia-smi utility to report the power of the Titan X. For mobile GPU, we use a Jetson TK1 development board and measured the total power consumption with a power meter. We assume 15% AC to DC conversion loss, 85% regulator efficiency and 15% power consumed by peripheral components (NVIDIA, a) to report the AP+DRAM power for the Tegra K1.
Table 6: Accuracy of AlexNet with different aggressiveness of weight sharing and quantization. 8/5-bit quantization has no loss of accuracy; 8/4-bit quantization, which is more hardware friendly, has a negligible accuracy loss of 0.01%; the really aggressive 4/2-bit quantization results in 1.99% and 2.60% loss of accuracy.
| #CONV bits / #FC bits | Top-1 Error | Top-5 Error | Top-1 Error Increase | Top-5 Error Increase |
|---|---|---|---|---|
| 32 bits / 32 bits | 42.78% | 19.73% | - | - |
| 8 bits / 5 bits | 42.78% | 19.70% | 0.00% | -0.03% |
| 8 bits / 4 bits | 42.79% | 19.73% | 0.01% | 0.00% |
| 4 bits / 2 bits | 44.77% | 22.33% | 1.99% | 2.60% |
The ratio of memory access to computation is different with and without batching. When the input activations are batched into a matrix, the computation becomes matrix-matrix multiplication, where locality can be improved by blocking: the matrices can be blocked to fit in caches and reused efficiently. In this case, the amount of memory access is O(n^2) while the amount of computation is O(n^3), so the ratio between memory access and computation is on the order of 1/n.
In real-time processing, when batching is not allowed, the input activation is a single vector and the computation is matrix-vector multiplication. In this case, both the amount of memory access and the amount of computation are O(n^2), so they are of the same magnitude (as opposed to 1/n). That indicates MV is more memory-bound than MM, so reducing the memory footprint is critical for the non-batching case.
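The two ratios can be checked with a quick back-of-the-envelope calculation; the sketch below assumes a square n × n weight matrix and counts elements rather than bytes.

```python
# Memory-access vs. compute ratio for an n x n layer: batched matrix-matrix (MM)
# against single-vector matrix-vector (MV).
n = 4096
mm_memory  = 3 * n * n          # weights + input matrix + output matrix (elements)
mm_compute = n ** 3             # multiply-accumulates for an n x n x n product
mv_memory  = n * n + 2 * n      # weights dominate; one input and one output vector
mv_compute = n * n

print(f"MM: memory/compute = {mm_memory / mm_compute:.5f}  (~1/n = {1 / n:.5f})")
print(f"MV: memory/compute = {mv_memory / mv_compute:.5f}  (~1)")
```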
Figure 9 illustrates the speedup of pruning on different hardware. There are 6 columns for each benchmark, showing the computation time of CPU / GPU / TK1 on the dense / pruned network. Time is normalized to CPU. When batch size = 1, the pruned network layer obtained a 3× to 4× speedup over the dense network on average because it has a smaller memory footprint and alleviates the data transfer overhead, especially for large matrices that are unable to fit into the caches. For example, VGG-16's FC6 layer, the largest layer in our experiment, contains 25088 × 4096 × 4 bytes ≈ 400 MB of data, which is far beyond the capacity of the L3 cache.
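As a worked example of that footprint argument, the sketch below estimates the dense versus pruned working set of VGG-16 FC6, assuming a CSR layer with 32-bit values and indices at an illustrative 4% density.

```python
# Rough working-set estimate for VGG-16 FC6 (25088 x 4096): dense 32-bit weights vs. a
# pruned CSR layer. The 4% density and the 32-bit index format are assumptions.
rows, cols, density = 4096, 25088, 0.04
dense_bytes = rows * cols * 4
nnz = int(rows * cols * density)
csr_bytes = nnz * 4 + nnz * 4 + (rows + 1) * 4     # values + column indices + row pointers

print(f"dense : {dense_bytes / 2**20:.1f} MiB")    # ~392 MiB, far beyond a CPU L3 cache
print(f"pruned: {csr_bytes / 2**20:.1f} MiB")
```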
In latency-tolerant applications, batching improves memory locality, where weights can be blocked and reused in matrix-matrix multiplication. In this scenario, the pruned network no longer shows its advantage. We give detailed timing results in Appendix A.
Figure 10 illustrates the energy efficiency of pruning on different hardware. We multiply power consumption by computation time to get energy consumption, then normalize to CPU to get energy efficiency. When batch size = 1, the pruned network layer consumes 3× to 7× less energy than the dense network on average. As reported by nvidia-smi, GPU utilization is 99% for both the dense and sparse cases.
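A minimal sketch of this normalization, with placeholder power and timing numbers rather than the measured ones:

```python
# Energy = power x time; efficiency is reported relative to the CPU running the dense layer.
# All numbers below are placeholders, not the measurements behind Figure 10.
measurements = {                     # (average power in W, per-inference time in s)
    "CPU dense":  (100.0, 4.0e-3),
    "GPU pruned": (150.0, 2.0e-4),
}
energy = {k: p * t for k, (p, t) in measurements.items()}
baseline = energy["CPU dense"]
for k, e in energy.items():
    print(f"{k:10s}: {e * 1e3:.2f} mJ, efficiency = {baseline / e:.1f}x vs CPU dense")
```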
6.4 RATIO OF WEIGHTS, INDEX AND CODEBOOK
Pruning makes the weight matrix sparse, so extra space is needed to store the indexes of the non-zero elements. Quantization adds storage for a codebook. The experiment section has already included these two factors. Figure 11 shows the breakdown of the three different components when quantizing four networks. Since, on average, both the weights and the sparse indexes are encoded with 5 bits, their storage is roughly half and half. The overhead of the codebook is very small and often negligible.
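As a rough worked example of this breakdown, using the roughly 6.7 million surviving AlexNet connections mentioned earlier and assumed 5-bit encodings:

```python
# Illustrative storage split for one pruned + quantized network: 5-bit codebook indices,
# 5-bit relative position indexes, and the codebook itself (toy sizes, not measured values).
nnz = 6_700_000                 # surviving connections (AlexNet-scale, illustrative)
weight_bits   = nnz * 5         # codebook index per non-zero weight
index_bits    = nnz * 5         # relative index per non-zero weight
codebook_bits = 32 * 32         # 2^5 shared float32 values

total = weight_bits + index_bits + codebook_bits
for name, bits in [("weights", weight_bits), ("indexes", index_bits), ("codebook", codebook_bits)]:
    print(f"{name:8s}: {bits / total:7.3%} of {total / 8 / 2**20:.1f} MiB total")
```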
[Figure 11 shows the weight / index / codebook storage breakdown for AlexNet, VGGNet, LeNet-300-100 and LeNet-5.]
Figure 11: Storage ratio of weight, index and codebook.
Table 7: Comparison with other compression methods on AlexNet. Collins & Kohli (2014) reduced the parameters by 4× with inferior accuracy. Deep Fried Convnets (Yang et al., 2014) worked on the fully connected layers and reduced the parameters by less than 4×. SVD saves parameters but suffers from an accuracy loss as large as 2%. Network pruning (Han et al., 2015) reduced the parameters by 9×, not including index overhead. On other networks similar to AlexNet, Denton et al. (2014) exploited the linear structure of convnets and compressed the network by 2.4× to 13.4× layer-wise, with 0.9% accuracy loss on compressing a single layer. Gong et al. (2014) experimented with vector quantization and compressed the network by 16× to 24×, incurring 1% accuracy loss.
| Method | Top-1 Error | Top-5 Error | Parameters | Compress Rate |
|---|---|---|---|---|
| Baseline Caffemodel (BVLC) | 42.78% | 19.73% | 240MB | 1× |
| Fastfood-32-AD (Yang et al., 2014) | 41.93% | - | 131MB | 2× |
| Fastfood-16-AD (Yang et al., 2014) | 42.90% | - | 64MB | 3.7× |
| Collins & Kohli (2014) | 44.40% | - | 61MB | 4× |
| SVD (Denton et al., 2014) | 44.02% | 20.56% | 47.6MB | 5× |
| Pruning (Han et al., 2015) | 42.77% | 19.67% | 27MB | 9× |
| Pruning + Quantization | 42.78% | 19.70% | 8.9MB | 27× |
| Pruning + Quantization + Huffman coding | 42.78% | 19.70% | 6.9MB | 35× |
# 7 RELATED WORK
Neural networks are typically over-parameterized, and there is significant redundancy in deep learning models (Denil et al., 2013). This results in a waste of both computation and memory. There have been various proposals to remove the redundancy: Vanhoucke et al. (2011) explored a fixed-point implementation with 8-bit integer (vs 32-bit floating point) activations. Hwang & Sung (2014) proposed an optimization method for fixed-point networks with ternary weights and 3-bit activations. Anwar et al. (2015) quantized the neural network using L2 error minimization and achieved better accuracy on the MNIST and CIFAR-10 datasets. Denton et al. (2014) exploited the linear structure of neural networks by finding an appropriate low-rank approximation of the parameters and keeping the accuracy within 1% of the original model.
The empirical success in this paper is consistent with the theoretical study of random-like sparse networks with +1/0/-1 weights (Arora et al., 2014), which have been proven to enjoy nice properties (e.g., reversibility) and to allow a provably polynomial-time algorithm for training.
Much work has focused on binning the network parameters into buckets so that only the values in the buckets need to be stored. HashedNets (Chen et al., 2015) reduce model size by using a hash function to randomly group connection weights, so that all connections within the same hash bucket share a single parameter value. In their method, the weight binning is pre-determined by the hash function instead of being learned through training, which does not capture the nature of images. Gong et al. (2014) compressed deep convnets using vector quantization, which resulted in 1% accuracy loss. Both methods studied only the fully connected layers, ignoring the convolutional layers.
There have been other attempts to reduce the number of parameters of neural networks by replacing the fully connected layer with global average pooling. The Network in Network architecture (Lin et al., 2013) and GoogLeNet (Szegedy et al., 2014) achieve state-of-the-art results on several benchmarks by adopting this idea. However, transfer learning, i.e. reusing features learned on the ImageNet dataset and applying them to new tasks by only fine-tuning the fully connected layers, is more difficult with this approach. This problem is noted by Szegedy et al. (2014) and motivates them to add a linear layer on top of their networks to enable transfer learning.
Network pruning has been used both to reduce network complexity and to reduce over-fitting. An early approach to pruning was biased weight decay (Hanson & Pratt, 1989). Optimal Brain Damage (LeCun et al., 1989) and Optimal Brain Surgeon (Hassibi et al., 1993) prune networks to reduce the number of connections based on the Hessian of the loss function and suggest that such pruning is more accurate than magnitude-based pruning such as weight decay. A recent work (Han et al., 2015) successfully pruned several state-of-the-art large-scale networks and showed that the number of parameters could be reduced by an order of magnitude. There are also attempts to reduce the number of activations for both compression and acceleration (Van Nguyen et al., 2015).
# 8 FUTURE WORK
While the pruned network has been benchmarked on various hardware, the quantized network with weight sharing has not, because off-the-shelf cuSPARSE and MKL SPBLAS libraries do not support indirect matrix entry lookup, nor is the relative index in CSC or CSR format supported. So the full advantage of Deep Compression, fitting the model in cache, is not fully exploited. A software solution is to write customized GPU kernels that support this. A hardware solution is to build a custom ASIC architecture specialized to traverse the sparse and quantized network structure, which also supports customized quantization bit widths. We expect this architecture to have energy dominated by on-chip SRAM access instead of off-chip DRAM access.
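As a sketch of the lookup such a kernel would have to perform, the following toy Python decodes one row stored as codebook indices plus relative column offsets; the encoding convention here (offset + 1, no filler zeros) is a simplification, not the paper's exact format.

```python
# Hypothetical decode of one quantized sparse row: each non-zero weight is a small
# codebook index plus a relative column offset, so the matvec needs indirect lookups.
import numpy as np

codebook = np.linspace(-0.1, 0.1, 16).astype(np.float32)   # 2^4 shared weight values

def encoded_row_dot(rel_offsets, code_indices, x):
    """Dot product of one encoded sparse row with an activation vector x."""
    acc, col = np.float32(0.0), -1
    for rel, code in zip(rel_offsets, code_indices):
        col += rel + 1                     # reconstruct the absolute column index
        acc += codebook[code] * x[col]     # indirect lookup into the shared codebook
    return acc

x = np.arange(8, dtype=np.float32)
rel_offsets  = [0, 2, 3]                   # decodes to columns 0, 3, 7
code_indices = [15, 0, 8]
print(encoded_row_dot(rel_offsets, code_indices, x))
```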
# 9 CONCLUSION
We have presented "Deep Compression", which compresses neural networks without affecting accuracy. Our method operates by pruning the unimportant connections, quantizing the network using weight sharing, and then applying Huffman coding. We highlight our experiments on AlexNet, which reduced the weight storage by 35× without loss of accuracy. We show similar results for VGG-16 and LeNet networks, compressed by 49× and 39× without loss of accuracy. This leads to a smaller storage requirement for putting convnets into mobile apps. After Deep Compression, the size of these networks fits into on-chip SRAM cache (5pJ/access) rather than requiring off-chip DRAM memory (640pJ/access). This potentially makes deep neural networks more energy efficient to run on mobile. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained.
# REFERENCES
Anwar, Sajid, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point optimization of deep convolutional neural networks for object recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1131-1135. IEEE, 2015.
Arora, Sanjeev, Bhaskara, Aditya, Ge, Rong, and Ma, Tengyu. Provable bounds for learning some deep representations. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, pp. 584-592, 2014.
BVLC. Caffe model zoo. URL http://caffe.berkeleyvision.org/model_zoo.
Chen, Wenlin, Wilson, James T., Tyree, Stephen, Weinberger, Kilian Q., and Chen, Yixin. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.
Collins, Maxwell D and Kohli, Pushmeet. Memory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442, 2014.
Denil, Misha, Shakibi, Babak, Dinh, Laurent, de Freitas, Nando, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pp. 2148-2156, 2013.
Denton, Emily L, Zaremba, Wojciech, Bruna, Joan, LeCun, Yann, and Fergus, Rob. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pp. 1269-1277, 2014.
Girshick, Ross. Fast r-cnn. arXiv preprint arXiv:1504.08083, 2015.
Gong, Yunchao, Liu, Liu, Yang, Ming, and Bourdev, Lubomir. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Han, Song, Pool, Jeff, Tran, John, and Dally, William J. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems, 2015.
Han, Song, Liu, Xingyu, Mao, Huizi, Pu, Jing, Pedram, Ardavan, Horowitz, Mark A, and Dally, William J. EIE: Efficient inference engine on compressed deep neural network. arXiv preprint arXiv:1602.01528, 2016.
Hanson, Stephen José and Pratt, Lorien Y. Comparing biases for minimal network construction with back-propagation. In Advances in Neural Information Processing Systems, pp. 177–185, 1989.

Hassibi, Babak, Stork, David G, et al. Second order derivatives for network pruning: Optimal brain surgeon. Advances in Neural Information Processing Systems, pp. 164–164, 1993.

Hwang, Kyuyeon and Sung, Wonyong. Fixed-point feedforward deep neural network design using weights +1, 0, and −1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pp. 1–6. IEEE, 2014.

Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross, Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012.
LeCun, Yann, Denker, John S, Solla, Sara A, Howard, Richard E, and Jackel, Lawrence D. Optimal brain damage. In NIPS, volume 89, 1989.
LeCun, Yann, Bottou, Leon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv:1312.4400, 2013.

NVIDIA. Technical brief: NVIDIA Jetson TK1 development kit bringing GPU-accelerated computing to embedded systems, a. URL http://www.nvidia.com.

NVIDIA. Whitepaper: GPU-based deep learning inference: A performance and power analysis, b. URL http://www.nvidia.com/object/white-papers.html.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Ström, Nikko. Phoneme probability estimation with dynamic sparsely connected artificial neural networks. The Free Speech Journal, 1(5):1–41, 1997.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
Van Leeuwen, Jan. On the construction of Huffman trees. In ICALP, pp. 382–410, 1976.

Van Nguyen, Hien, Zhou, Kevin, and Vemulapalli, Raviteja. Cross-domain synthesis of medical images using efficient location-sensitive deep network. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, pp. 677–684. Springer, 2015.
Vanhoucke, Vincent, Senior, Andrew, and Mao, Mark Z. Improving the speed of neural networks on cpus. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
Yang, Zichao, Moczulski, Marcin, Denil, Misha, de Freitas, Nando, Smola, Alex, Song, Le, and Wang, Ziyu. Deep fried convnets. arXiv preprint arXiv:1412.7149, 2014.
A APPENDIX: DETAILED TIMING / POWER REPORTS OF DENSE & SPARSE NETWORK LAYERS
Table 8: Average time on different layers. To avoid variance, we measured the time spent on each layer for 4096 input samples, and averaged the time regarding each input sample. For GPU, the time consumed by cudaMalloc and cudaMemcpy is not counted. For batch size = 1, gemv is used; for batch size = 64, gemm is used. For the sparse case, csrmv and csrmm are used, respectively.
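The kind of comparison reported below can be reproduced at a small scale with SciPy's dense and CSR kernels. The sketch below is illustrative only: the layer shape (roughly AlexNet FC6) and the 9% density are assumptions, and absolute timings depend entirely on the machine.

```python
import time
import numpy as np
import scipy.sparse as sp

def time_op(fn, repeats=50):
    fn()  # warm-up
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats * 1e6  # microseconds per call

rng = np.random.default_rng(0)
in_dim, out_dim, density = 9216, 4096, 0.09      # assumed FC6-like layer, ~9% weights kept
W_dense = rng.standard_normal((out_dim, in_dim)).astype(np.float32)
W_sparse = sp.random(out_dim, in_dim, density=density, format="csr",
                     dtype=np.float32, random_state=0)
x = rng.standard_normal(in_dim).astype(np.float32)        # batch = 1  -> gemv / csrmv
X = rng.standard_normal((in_dim, 64)).astype(np.float32)  # batch = 64 -> gemm / csrmm

print("dense  gemv  (us):", time_op(lambda: W_dense @ x))
print("sparse csrmv (us):", time_op(lambda: W_sparse @ x))
print("dense  gemm  (us):", time_op(lambda: W_dense @ X))
print("sparse csrmm (us):", time_op(lambda: W_sparse @ X))
```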
| Time (µs) | | AlexNet FC6 | AlexNet FC7 | AlexNet FC8 | VGG16 FC6 | VGG16 FC7 | VGG16 FC8 |
|---|---|---|---|---|---|---|---|
| Titan X | dense (batch=1) | 541.5 | 243.0 | 80.5 | 1467.8 | 243.0 | 80.5 |
| | sparse (batch=1) | 134.8 | 65.8 | 54.6 | 167.0 | — | 48.0 |
| | dense (batch=64) | 19.8 | 8.9 | 5.9 | 53.6 | — | 5.9 |
| | sparse (batch=64) | 94.6 | 51.5 | 23.2 | 121.5 | — | 22.0 |
| Core i7-5930k | dense (batch=1) | 7516.2 | 6187.1 | 1134.9 | 35022.8 | — | 774.2 |
| | sparse (batch=1) | 3066.5 | 1282.1 | 890.5 | 3774.3 | — | 777.3 |
| | dense (batch=64) | 318.4 | 188.9 | 45.8 | 1056.0 | — | 45.7 |
| | sparse (batch=64) | 1417.6 | 682.1 | 407.7 | 1780.3 | — | 363.1 |
| Tegra K1 | dense (batch=1) | 12437.2 | 5765.0 | 2252.1 | 35427.0 | — | 2243.1 |
| | sparse (batch=1) | 2879.3 | 1256.5 | 837.0 | 4377.2 | — | 745.1 |
| | dense (batch=64) | 1663.6 | 2056.8 | 298.0 | 2001.4 | — | 483.9 |
| | sparse (batch=64) | 4003.9 | 1372.8 | 576.7 | 8024.8 | — | 544.1 |

(—: not available)
Table 9: Power consumption of different layers. We measured the Titan X GPU power with nvidia-smi, Core i7-5930k CPU power with pcm-power and Tegra K1 mobile GPU power with an external power meter (scaled to AP+DRAM, see paper discussion). During power measurement, we repeated each computation multiple times in order to get stable numbers. On CPU, dense matrix multiplications consume about 2x the energy of sparse ones because they are accelerated with multi-threading.
| Power (W) | | AlexNet FC6 | AlexNet FC7 | AlexNet FC8 | VGG16 FC6 | VGG16 FC7 |
|---|---|---|---|---|---|---|
| Titan X | dense (batch=1) | 157 | 159 | 159 | 166 | 163 |
| | sparse (batch=1) | 181 | 183 | 162 | 189 | 166 |
| | dense (batch=64) | 168 | 173 | 166 | 173 | 173 |
| | sparse (batch=64) | 156 | 158 | 163 | 160 | 158 |
| Core i7-5930k | dense (batch=1) | 83.5 | 72.8 | 77.6 | 70.6 | 74.6 |
| | sparse (batch=1) | 42.3 | 37.4 | 36.5 | 38.0 | 37.4 |
| | dense (batch=64) | 85.4 | 84.7 | 101.6 | 83.1 | 97.1 |
| | sparse (batch=64) | 37.2 | 37.1 | 38 | 39.5 | 36.6 |
| Tegra K1 | dense (batch=1) | 5.1 | 5.1 | 5.4 | 5.3 | 5.3 |
| | sparse (batch=1) | 5.9 | 6.1 | 5.8 | 5.6 | 6.3 |
| | dense (batch=64) | 5.6 | 5.6 | 6.3 | 5.4 | 5.6 |
| | sparse (batch=64) | 5.0 | 4.6 | 5.1 | 4.8 | 4.7 |
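Energy per query is just power multiplied by latency, so the two tables can be combined directly. A small sketch for one column (AlexNet FC6 at batch = 1, values copied from the tables above):

```python
# Energy (uJ) = power (W) x time (us), AlexNet FC6, batch = 1.
time_us = {"TitanX dense": 541.5, "TitanX sparse": 134.8,
           "CPU dense": 7516.2, "CPU sparse": 3066.5,
           "TegraK1 dense": 12437.2, "TegraK1 sparse": 2879.3}
power_w = {"TitanX dense": 157, "TitanX sparse": 181,
           "CPU dense": 83.5, "CPU sparse": 42.3,
           "TegraK1 dense": 5.1, "TegraK1 sparse": 5.9}

for platform in ["TitanX", "CPU", "TegraK1"]:
    e_dense = time_us[f"{platform} dense"] * power_w[f"{platform} dense"]
    e_sparse = time_us[f"{platform} sparse"] * power_w[f"{platform} sparse"]
    print(f"{platform}: dense {e_dense:.0f} uJ, sparse {e_sparse:.0f} uJ, "
          f"ratio {e_dense / e_sparse:.1f}x")
```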
# Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
David Balduzzi School of Mathematics and Statistics Victoria University of Wellington Wellington, New Zealand
[email protected]
Muhammad Ghifary School of Engineering and Computer Science Victoria University of Wellington Wellington, New Zealand
[email protected]
# Abstract
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively.

We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
Keywords: policy gradient, reinforcement learning, deep learning, gradient estimation, temporal difference learning
# 1. Introduction
In reinforcement learning, an agent learns to maximize its discounted future rewards (Sutton and Barto, 1998). The structure of the environment is initially unknown, so the agent must both learn the rewards associated with various action-sequence pairs and optimize its policy. A natural approach is to tackle the subproblems separately via a critic and an actor (Barto et al., 1983; Konda and Tsitsiklis, 2000), where the critic estimates the value of different actions and the actor maximizes rewards by following the policy gradient (Sutton et al., 1999; Peters and Schaal, 2006; Silver et al., 2014). Policy gradient methods have proven useful in settings with high-dimensional continuous action spaces, especially when task-relevant policy representations are at hand (Deisenroth et al., 2011; Levine et al., 2015; Wahlström et al., 2015).
In the supervised setting, representation or deep learning algorithms have recently demonstrated remarkable performance on a range of benchmark problems. However, the problem of
learning features for reinforcement learning remains comparatively underdeveloped. The most dramatic recent success uses Q-learning over finite action spaces, and essentially builds a neural network critic (Mnih et al., 2015). Here, we consider continuous action spaces, and develop an algorithm that simultaneously learns the value function and its gradient, which it then uses to find the optimal policy.
# 1.1 Outline
This paper presents Value-Gradient Backpropagation (GProp), a deep actor-critic algorithm for continuous action spaces with compatible function approximation. Our starting point is the deterministic policy gradient and associated compatibility conditions derived in (Silver et al., 2014). Roughly speaking, the compatibility conditions are that
C1. the critic approximate the gradient of the value-function and
C2. the approximation is closely related to the gradient of the policy.
See Theorem 2 for details. We identify and solve two problems with prior work on policy gradients – relating to the two compatibility conditions:
P1. Temporal difference methods do not directly estimate the gradient of the value function. Instead, temporal difference methods are applied to learn an approximation of the form Qv(s) + Qw(s, a), where Qv(s) estimates the value of a state, given the current policy, and Qw(s, a) estimates the advantage from deviating from the current policy (Sutton et al., 1999; Peters and Schaal, 2006; Deisenroth et al., 2011; Silver et al., 2014). Although the advantage is related to the gradient of the value function, it is not the same thing (a toy contrast is sketched below).
P2. The representations used for compatible approximation scale badly on neural networks. The second problem is that prior work has restricted to advantage functions constructed from a particular state-action representation, φ(s, a) = ∇θ µθ(s)(a − µθ(s)), that depends on the gradient of the policy. The representation is easy to handle for linear policies. However, if the policy is a neural network, then the standard state-action representation ties the critic too closely to the actor and depends on the internal structure of the actor, Example 2. As a result, weight updates cannot be performed by backpropagation, see section 5.5.
The paper makes three novel contributions. The first two contributions relate directly to problems P1 and P2. The third is a new task designed to test the accuracy of gradient estimates.

Method to directly learn the gradient of the value function. The first contribution is to modify temporal difference learning so that it directly estimates the gradient of the value-function. The gradient perturbation trick, Lemma 3, provides a way to simultaneously estimate both the value of a function at a point and its gradient, by perturbing the function's input with uncorrelated Gaussian noise.
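The distinction drawn in P1 can be made concrete with a toy example. The quadratic Q and the policy below are arbitrary illustrative choices, not taken from the paper: the advantage is a scalar comparison against the current policy's action, while the value gradient is a vector.

```python
import numpy as np

def Q(s, a):
    return -np.sum((a - s) ** 2)     # made-up value function, peaked at a == s

def mu(s):
    return 0.5 * s                   # some deterministic policy

s = np.array([1.0, -2.0, 0.5])
a = np.array([0.8, -1.0, 0.0])

advantage = Q(s, a) - Q(s, mu(s))    # scalar A(s, a) = Q(s, a) - Q(s, mu(s))
grad_at_mu = 2.0 * (s - mu(s))       # vector grad_a Q(s, a) evaluated at a = mu(s)
print("advantage:", advantage)
print("value gradient at mu(s):", grad_at_mu)
```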
Plugging in a neural network instead of a linear estimator extends the trick to the problem of learning a function and its gradient over the entire state-action space. Moreover, the trick combines naturally with temporal difference methods, Theorem 5, and is therefore well-suited to applications in reinforcement learning.
Deviator-Actor-Critic (DAC) model with compatible function approximation. The second contribution is to propose the Deviator-Actor-Critic (DAC) model, Definition 2, consisting in three coupled neural networks and Value-Gradient Backpropagation (GProp), Algorithm 1, which backpropagates three different signals to train the three networks. The main result, Theorem 6, is that GProp has compatible function approximation when implemented on the DAC model when the neural network consists in linear and rectilinear units.1
The proof relies on decomposing the Actor-network into individual units that are considered as actors in their own right, based on ideas in (Srivastava et al., 2014; Balduzzi, 2015). It also suggests interesting connections to work on structural credit assignment in multiagent reinforcement learning (Agogino and Tumer, 2004, 2008; HolmesParker et al., 2014).
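A minimal sketch of the three roles in the DAC model is given below. The layer sizes, the tiny ReLU networks and the first-order value decomposition around the actor's action are illustrative assumptions; the actual architecture and training signals are specified by Definition 2 and Algorithm 1 in the paper, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Tiny ReLU network as a list of (W, b) layers; a stand-in for the three networks."""
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = W @ x + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)   # rectilinear units, as assumed by Theorem 6
    return x

state_dim, action_dim = 21, 7                 # matching the SARCOS-style tasks
actor    = mlp([state_dim, 64, action_dim])   # mu(s): the policy
critic   = mlp([state_dim, 64, 1])            # scalar value estimate at a = mu(s)
deviator = mlp([state_dim, 64, action_dim])   # estimate of grad_a Q at a = mu(s)

s = rng.standard_normal(state_dim)
eps = 0.1 * rng.standard_normal(action_dim)   # Gaussian exploration noise
a = forward(actor, s) + eps
# Assumed first-order value estimate around the actor's action (cf. the gradient
# perturbation trick): Q(s, a) ~= V(s) + <G(s), eps>.
q_est = forward(critic, s)[0] + forward(deviator, s) @ eps
```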
Contextual bandit task to probe the accuracy of gradient estimates. A third contribution, that may be of independent interest, is a new contextual bandit setting designed to probe the ability of reinforcement learning algorithms to estimate gradients. A supervised-to-contextual bandit transform was proposed in (Dudík et al., 2014) as a method for turning classification datasets into K-armed contextual bandit datasets.
We are interested in the continuous setting in this paper. We therefore adapt their transform with a twist. The SARCOS and Barrett datasets from robotics have features corresponding to the positions, velocities and accelerations of seven joints and labels corresponding to their torques. There are 7 joints in both cases, so the feature and label spaces are 21 and 7 dimensional respectively. The datasets are traditionally used as regression benchmarks labeled SARCOS1 through SARCOS7 where the task is to predict the torque of a single joint – and similarly for Barrett.
We convert the two datasets into two continuous contextual bandit tasks where the reward signal is the negative distance to the correct 7-dimensional label. The algorithm is thus "told" that the label lies on a sphere in a 7-dimensional space. The missing information required to pin down the label's position is precisely the gradient. For an algorithm to make predictions that are competitive with fully supervised methods, it is necessary to find extremely accurate gradient estimates.
Experiments. Section 6 evaluates the performance of GProp on the contextual bandit problems described above and on the challenging octopus arm task (Engel et al., 2005). We show that GProp is able to simultaneously solve seven nonparametric regression problems without observing any labels – instead using the distance between its actions and the correct labels. It turns out that GProp is competitive with recent fully supervised learning algorithms on the task. Finally, we evaluate GProp on the octopus arm benchmark, where it achieves the best performance reported to date.
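The regression-to-bandit transform described above amounts to hiding the label behind a distance-based reward. A minimal sketch, using random stand-ins for the 21-dimensional SARCOS features and 7-dimensional torque labels:

```python
import numpy as np

def bandit_round(policy, x, y):
    """One round of the transform: the learner observes only the context x and the
    reward, never the 7-dimensional label y itself."""
    a = policy(x)                      # action proposed for context x
    reward = -np.linalg.norm(a - y)    # negative Euclidean distance to the true torques
    return a, reward

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 21))       # illustrative contexts, not the real dataset
Y = rng.standard_normal((5, 7))        # illustrative labels

policy = lambda x: np.zeros(7)         # a trivial baseline policy
for x, y in zip(X, Y):
    _, r = bandit_round(policy, x, y)
    print(f"reward: {r:.3f}")
```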
1. The proof also holds for maxpooling, weight-tying and other features of convnets. A description of how closely related results extend to convnets is provided in (Balduzzi, 2015).
# 1.2 Related work
An early reinforcement learning algorithm for neural networks is REINFORCE (Williams, 1992). A disadvantage of REINFORCE is that the entire network is trained with a single scalar signal.
Our proposal builds on ideas introduced with deep Q-learning (Mnih et al., 2015), such as replay. However, deep Q-learning is restricted to finite action spaces, whereas we are concerned with continuous action spaces.
Policy gradients were introduced in (Sutton et al., 1999) and have been used extensively (Kakade, 2001; Peters and Schaal, 2006; Deisenroth et al., 2011). The deterministic policy gradient was introduced in (Silver et al., 2014), which also proposed the algorithm COPDAC-Q. The relationship between GProp and COPDAC-Q is discussed in detail in section 5.5.
An alternate approach, based on the idea of backpropagating the gradient of the value function, is developed in (Jordan and Jacobs, 1990; Prokhorov and Wunsch, 1997; Wang and Si, 2001; Hafner and Riedmiller, 2011; Fairbank and Alonso, 2012; Fairbank et al., 2013). Unfortunately, these algorithms do not have compatible function approximation in general, so there are no guarantees on actor-critic interactions. See section 5.5 for further discussion.
The analysis used to prove compatible function approximation relies on decomposing the Actor neural network into a collection of agents corresponding to the units in the network. The relation between GProp and the difference-based objective proposed for multiagent learning (Agogino and Tumer, 2008; HolmesParker et al., 2014) is discussed in section 5.4.
# 1.3 Notation
We use boldface to denote vectors, subscripts for time, and superscripts for individual units in a network. Sets of parameters are capitalized (Θ, W, V) when they refer to matrices or to the parameters of neural networks.
# 2. Deterministic Policy Gradients
This section recalls previous work on policy gradients. The basic idea is to simultaneously train an actor and a critic. The critic learns an estimate of the value of different policies; the actor then follows the gradient of the value-function to find an optimal (or locally optimal) policy in terms of expected rewards.
# 2.1 The Policy Gradient Theorem
The environment is modeled as a Markov Decision Process consisting of state space S ⊂ R^m, action space A ⊂ R^d, initial distribution p_1(s) on states, stationary transition distribution p(s_{t+1} | s_t, a_t) and reward function r : S × A → R. A policy is a function µθ : S → A from states to actions. We will often add noise to policies, causing them to be stochastic. In this case, the policy is a function µθ : S → Δ_A, where Δ_A is the set of probability distributions on actions.

Let p(s → s', t, µ) denote the distribution on states s' at time t given policy µ and initial state s at t = 0, and let ρ^µ(s') := ∫_S Σ_{t=1}^∞ γ^{t−1} p_1(s) p(s → s', t, µ) ds. Let r_t^γ = Σ_{τ=t}^∞ γ^{τ−t} r(s_τ, a_τ) be the discounted future reward. Define the

value of a state-action pair and the value of a policy:

$$Q^{\mu_\theta}(s, a) = \mathbb{E}\big[\, r_1^\gamma \mid S_1 = s, A_1 = a; \mu_\theta \,\big] \quad\text{and}\quad J(\mu_\theta) = \mathbb{E}_{s \sim \rho^\mu,\, a \sim \mu_\theta}\big[ Q^{\mu_\theta}(s, a) \big].$$
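As a quick sanity check on the notation, the discounted future reward from the first time step is just a geometric weighting of the reward sequence:

```python
def discounted_return(rewards, gamma):
    """r_1^gamma = sum over tau >= 1 of gamma^(tau - 1) * r(s_tau, a_tau)."""
    return sum(gamma ** i * r for i, r in enumerate(rewards))

rewards = [1.0, 0.0, 0.5, 2.0]
print(discounted_return(rewards, gamma=0.9))   # 1.0 + 0.81*0.5 + 0.729*2.0 = 2.863
```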
The aim is to find the policy θ* := argmax_θ J(µθ) with maximal value. A natural approach is to follow the gradient (Sutton et al., 1999), which in the deterministic case can be computed explicitly as

Theorem 1 (policy gradient) Under reasonable assumptions on the regularity of the Markov Decision Process the policy gradient can be computed as

$$\nabla_\theta J(\mu_\theta) = \mathbb{E}_{s \sim \rho^\mu}\Big[ \nabla_\theta \mu_\theta(s)\, \nabla_a Q^\mu(s, a)\big|_{a = \mu_\theta(s)} \Big].$$
Proof See (Silver et al., 2014).
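Theorem 1 is a chain rule: push the gradient of Q with respect to the action back through the policy's dependence on θ. A minimal sketch for a linear policy µθ(s) = Θ s and a known quadratic value function (both are illustrative choices, not from the paper):

```python
import numpy as np

state_dim, action_dim = 3, 2
rng = np.random.default_rng(1)
Theta = rng.standard_normal((action_dim, state_dim))
A_star = rng.standard_normal((action_dim, state_dim))    # target linear policy

def grad_a_Q(s, a):
    return -(a - A_star @ s)          # gradient of Q(s, a) = -0.5 * ||a - A* s||^2

alpha = 0.1
for _ in range(200):
    s = rng.standard_normal(state_dim)
    a = Theta @ s
    # grad_theta J ~= grad_theta mu_theta(s) . grad_a Q(s, a)|_{a = mu_theta(s)};
    # for mu_theta(s) = Theta @ s this is the outer product grad_a_Q(s, a) s^T.
    Theta += alpha * np.outer(grad_a_Q(s, a), s)

print("distance to optimal policy:", np.linalg.norm(Theta - A_star))
```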
# 2.2 Linear Compatible Function Approximation
Since the agent does not have direct access to the value function Qµ, it must instead learn an estimate Qw ≈ Qµ. A sufficient condition for when plugging an estimate Qw(s, a) into the policy gradient ∇θ J(θ) = E[∇θ µθ(s) ∇a Qµθ(s, a)|a=µθ(s)] yields an unbiased estimator was first proposed in (Sutton et al., 1999).

A sufficient condition in the deterministic setting is:

Theorem 2 (compatible value function approximation) The value-estimate Qw(s, a) is compatible with the policy gradient, that is

$$\nabla_\theta J(\theta) = \mathbb{E}\Big[ \nabla_\theta \mu_\theta(s) \cdot \nabla_a Q^{\mathbf{w}}(s, a)\big|_{a = \mu_\theta(s)} \Big]$$
if the following conditions hold:
# C1. Qw approximates the value gradient:
The weights learned by the approximate value function must satisfy w = argmin_w ℓ_grad(θ, w), where
$$\ell_{\mathrm{grad}}(\theta, \mathbf{w}) = \mathbb{E}_{s \sim \rho^\mu}\Big[ \big\| \nabla_a Q^{\mu}(s, a)\big|_{a = \mu_\theta(s)} - \nabla_a Q^{\mathbf{w}}(s, a)\big|_{a = \mu_\theta(s)} \big\|^2 \Big] \tag{1}$$

is the mean-square difference between the gradient of the true value function Qµ and the approximation Qw.
# C2. Qw is policy-compatible:
The gradients of the value-function and the policy must satisfy
$$\nabla_a Q^{\mathbf{w}}(s, a)\big|_{a = \mu_\theta(s)} = \big\langle \nabla_\theta \mu_\theta(s), \mathbf{w} \big\rangle. \tag{2}$$
Proof See (Silver et al., 2014).
Having stated the compatibility condition, it is worth revisiting the problems that we propose to tackle in the paper. The first problem is to directly estimate the gradient of the value function, as required by Eq. (1) in condition C1. The standard approach used in the literature is to estimate the value function, or the closely related advantage function, using temporal difference learning, and then compute the derivative of the estimate. The next section shows how the gradient can be estimated directly.
The second problem relates to the compatibility condition on policy and value gradients required by Eq. (2) in condition C2. The only function approximation satisfying C2 that has been proposed is
Example 1 (standard value function approximation) Let φ(s) be an m-dimensional feature representation on states and set φ(s, a) := ∇θ µθ(s) · (a − µθ(s)). Then the value function approximation

$$Q^{\mathbf{v},\mathbf{w}}(s, a) = \langle \phi(s, a), \mathbf{w} \rangle + \langle \phi(s), \mathbf{v} \rangle = \underbrace{(a - \mu_\theta(s))^\top \nabla_\theta \mu_\theta(s)^\top \mathbf{w}}_{\text{advantage function}} + \phi(s)^\top \mathbf{v}$$

satisfies condition C2 of Theorem 2.
The approximation in Example 1 encounters serious problems when applied to deep policies, see discussion in section 5.5.
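For a linear policy µθ(s) = Θ s, the advantage term in Example 1 has a closed form and condition C2 can be checked numerically. The reshaping of w and the choice φ(s) = s below are illustrative assumptions:

```python
import numpy as np

state_dim, action_dim = 4, 3
rng = np.random.default_rng(2)
Theta = rng.standard_normal((action_dim, state_dim))

def mu(s):
    return Theta @ s

def Q_approx(s, a, w, v):
    # <phi(s, a), w> + <phi(s), v>; for mu_theta(s) = Theta @ s the advantage term
    # reduces to (a - mu(s))^T (W_adv @ s) with w reshaped to the shape of Theta.
    W_adv = w.reshape(action_dim, state_dim)
    return (a - mu(s)) @ (W_adv @ s) + s @ v   # phi(s) taken to be s itself

s = rng.standard_normal(state_dim)
w = rng.standard_normal(action_dim * state_dim)
v = rng.standard_normal(state_dim)

# Condition C2: grad_a Q^w(s, a) at a = mu(s) equals <grad_theta mu_theta(s), w>,
# which here is simply W_adv @ s.
grad_a = w.reshape(action_dim, state_dim) @ s
a0 = mu(s)
numeric = np.array([
    (Q_approx(s, a0 + 1e-6 * e, w, v) - Q_approx(s, a0, w, v)) / 1e-6
    for e in np.eye(action_dim)
])
print(np.allclose(numeric, grad_a, atol=1e-4))   # True
```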
# 3. Learning Value Gradients
In this section, we tackle the first problem by modifying temporal-difference (TD) learning so that it directly estimates the gradient of the value function. First, we develop a new approach to estimating the gradient of a black-box function at a point, based on perturbing the function with Gaussian noise. It turns out that the approach extends easily to learning the gradient of a black-box function across its entire domain. Moreover, it is easy to combine with neural networks and temporal difference learning.
# 3.1 Estimating the gradient of an unknown function at a point
Gradient estimates have been intensively studied in bandit problems, where rewards (or losses) are observed but labels are not. Thus, in contrast to supervised learning where it is possible to compute the gradient of the loss, in bandit problems the gradient must be estimated. More formally, consider the following setup.
Definition 1 (zeroth-order black-box) A function f : R^d → R is a zeroth-order black-box if it can only be queried for zeroth-order information. That is, User can request the value f(x) of f at any point x ∈ R^d, but cannot request the gradient of the function.
We use the shorthand black-box in what follows.
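A black-box in this sense is easy to emulate in code: wrap a function so that only its values are exposed, as in the small sketch below (the query counter is an added convenience, not part of the definition).

```python
class ZerothOrderBlackBox:
    """Exposes only zeroth-order queries (values, not gradients) and counts them."""
    def __init__(self, f):
        self._f = f
        self.num_queries = 0

    def __call__(self, x):
        self.num_queries += 1
        return self._f(x)

f = ZerothOrderBlackBox(lambda x: sum(xi ** 2 for xi in x))
print(f([1.0, 2.0]), f.num_queries)   # 5.0 1
```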
The black-box model for optimization was introduced in (Nemirovski and Yudin, 1983), see (Raginsky and Rakhlin, 2011) for a recent exposition. In those papers, a black-box consists in a first-order oracle that can provide both zeroth-order information (the value of the function) and first-order information (the gradient or subgradient of the function).
Remark 1 (reward function is a black-box; value function is not) The reward function r(s, a) is a black box since Nature does not provide gradient information. The value function Qµθ(s, a) = E[r_1^γ | S_1 = s, A_1 = a; µθ] is not even a black-box: it cannot be queried directly since it is defined as the expected discounted future reward. It is for this reason the gradient perturbation trick must be combined with temporal difference learning, see section 3.4.
An important insight is that the gradient of an unknown function at a specific point can be estimated by perturbing its input (Flaxman et al., 2005). For example, for small δ > 0 the gradient of f : R^d → R is approximately ∇f(x)|_{x=µ} ≈ d · E_u[ f(µ + δu)/δ · u ], where the expectation is over vectors sampled uniformly from the unit sphere.
The following lemma provides a simple method for estimating the gradient of a function at a point based on Gaussian perturbations:
Lemma 3 (gradient perturbation trick) The gradient of differentiable $f : \mathbb{R}^d \to \mathbb{R}$ at $\mu \in \mathbb{R}^d$ is | 1509.03005#16 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 17 | Lemma 3 (gradient perturbation trick) The gradient of differentiable $f : \mathbb{R}^d \to \mathbb{R}$ at $\mu \in \mathbb{R}^d$ is
$$\nabla f(x)\big|_{x=\mu} \;=\; \lim_{\sigma^2 \to 0}\; \operatorname*{argmin}_{w \in \mathbb{R}^d} \left\{ \min_{b \in \mathbb{R}}\; \mathbb{E}_{\epsilon \sim N(0,\, \sigma^2 \cdot I_d)} \Big[ \big( f(\mu + \epsilon) - \langle w, \epsilon \rangle - b \big)^2 \Big] \right\}. \qquad (3)$$
Proof By taking sufficiently small variance, we can assume that $f$ is locally linear. Setting $b = f(\mu)$ yields a line through the origin. It therefore suffices to consider the special case $f(x) = \langle v, x \rangle$. Setting
$$w^* = \operatorname*{argmin}_{w \in \mathbb{R}^d}\; \mathbb{E}_{\epsilon \sim N(0,\, \sigma^2 \cdot I_d)} \left[ \frac{1}{2} \big( \langle w, \epsilon \rangle - \langle v, \epsilon \rangle \big)^2 \right],$$
we are required to show that $w^* = v$. The problem is convex, so setting the gradient to zero requires solving $0 = \mathbb{E}\big[ \langle w - v, \epsilon \rangle \cdot \epsilon \big]$, which reduces to solving the set of linear equations
$$\sum_{i=1}^{d} (w^i - v^i)\, \mathbb{E}[\epsilon^i \epsilon^j] \;=\; (w^j - v^j)\, \mathbb{E}\big[(\epsilon^j)^2\big] \;=\; (w^j - v^j) \cdot \sigma^2 \;=\; 0 \quad \text{for all } j.$$
The first equality holds since $\mathbb{E}[\epsilon^i \epsilon^j] = 0$ whenever $i \neq j$. It follows immediately that $w^* = v$.
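As a sanity check on Lemma 3, the following NumPy sketch recovers a gradient at a point by regressing perturbed black-box evaluations on Gaussian perturbations, with an intercept playing the role of the baseline $b$; the test function, noise scale, and sample size are illustrative assumptions.

```python
import numpy as np

def perturbation_gradient(f, mu, sigma=0.01, n_samples=5000, rng=None):
    """Lemma 3 sketch: regress f(mu + eps) on eps (plus an intercept b), eps ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(rng)
    d = mu.shape[0]
    eps = sigma * rng.standard_normal((n_samples, d))
    y = np.array([f(mu + e) for e in eps])                 # zeroth-order queries only
    X = np.hstack([eps, np.ones((n_samples, 1))])          # last column estimates the baseline b
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[:d]                                        # coef[d] is approximately f(mu)

# Example: f(x) = sin(x0) + x1^2 at (0.3, -1.0) has gradient (cos(0.3), -2.0).
f = lambda x: np.sin(x[0]) + x[1] ** 2
print(perturbation_gradient(f, np.array([0.3, -1.0]), rng=0))
```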
# 3.2 Learning gradients across a range | 1509.03005#17 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 18 | The first equality holds since $\mathbb{E}[\epsilon^i \epsilon^j] = 0$ whenever $i \neq j$. It follows immediately that $w^* = v$.
# 3.2 Learning gradients across a range
The solution to the optimization problem in Eq. (3) is the gradient $\nabla f(x)\big|_{x=\mu}$ of $f$ at a particular $\mu \in \mathbb{R}^d$. The next step is to learn a function $G^W : \mathbb{R}^d \to \mathbb{R}^d$ that approximates the gradient across a range of values.
More precisely, given a sample $\{x_i\}_{i=1}^{n} \sim P_X$ of points, we aim to find
$$W^* := \operatorname*{argmin}_{W} \sum_{i=1}^{n} \Big[ \big\| \nabla f(x_i) - G^W(x_i) \big\|^2 \Big].$$
The next lemma considers the case where $Q^V$ and $G^W$ are linear estimates, of the form $Q^V(x) := \langle \phi(x), v \rangle$ and $G^W(x) = W \cdot \psi(x)$, for fixed representations $\phi : X \to \mathbb{R}^m$ and $\psi : X \to \mathbb{R}^n$. | 1509.03005#18 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 19 | Lemma 4 (gradient learning) Let $f : \mathbb{R}^d \to \mathbb{R}$ be a differentiable function. Suppose that $\phi : X \to \mathbb{R}^m$ and $\psi : X \to \mathbb{R}^n$ are representations such that there exists an $m$-vector $v^*$ and a $(d \times n)$-matrix $W^*$ satisfying $f(x) = \langle \phi(x), v^* \rangle$ and $\nabla f(x) = W^* \cdot \psi(x)$ for all $x$ in the sample.
If we define loss function
$$\ell(W, V, x, \sigma) = \mathbb{E}_{\epsilon \sim N(0,\, \sigma^2 \cdot I_d)} \Big[ \big( f(x + \epsilon) - \langle G^W(x), \epsilon \rangle - Q^V(x) \big)^2 \Big],$$
then
$$W^* = \lim_{\sigma^2 \to 0}\; \operatorname*{argmin}_{W}\, \min_{V}\; \mathbb{E}_{x \sim P_X} \big[ \ell(W, V, x, \sigma) \big].$$
Proof Follows from Lemma 3.
In short, the lemma reduces gradient estimation to a simple optimization problem given a good enough representation. Jumping ahead slightly to section 4, we ensure that our model has good enough representations by constructing two neural networks to learn them. The first neural network, $Q^V : \mathbb{R}^d \to \mathbb{R}$, learns an approximation to $f(x)$ that plays the role of the baseline $b$. The second neural network, $G^W : \mathbb{R}^d \to \mathbb{R}^d$, learns an approximation to the gradient.
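The following NumPy sketch illustrates the idea behind Lemma 4 with linear estimates in hand-picked features; the feature maps, toy black-box, and sample sizes are illustrative assumptions rather than anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_points, k_perturb, sigma = 2, 200, 20, 0.05

f = lambda x: np.sin(x[0]) + 0.5 * x[1] ** 2               # toy black-box
phi = lambda x: np.array([1.0, np.sin(x[0]), x[1] ** 2])    # value features
psi = lambda x: np.array([1.0, np.cos(x[0]), x[1]])         # gradient features

# One big least-squares problem: f(x + eps) ~ <phi(x), v> + <W psi(x), eps>.
rows, targets = [], []
for _ in range(n_points):
    x = rng.uniform(-2, 2, size=d)
    for _ in range(k_perturb):
        eps = sigma * rng.standard_normal(d)
        rows.append(np.concatenate([phi(x), np.outer(eps, psi(x)).ravel()]))
        targets.append(f(x + eps))
coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
v_hat, W_hat = coef[:3], coef[3:].reshape(d, 3)

x_test = np.array([0.3, -1.0])
print(W_hat @ psi(x_test))   # close to the true gradient (cos(0.3), -1.0)
```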
# 3.3 Temporal difference learning | 1509.03005#19 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 20 | # 3.3 Temporal difference learning
Recall that $Q^\mu(s, a)$ is the expected value of a state-action pair given policy $\mu$. It is never observed directly, since it is computed by discounting over future rewards. TD-learning is a popular approach to estimating $Q^\mu$ through dynamic programming (Sutton and Barto, 1998).
We quickly review TD-learning. Let $\phi : S \times A \to \mathbb{R}^m$ be a fixed representation. The goal is to find a value-estimate
$$Q^v(s, a) := \langle \phi(s, a), v \rangle,$$
where v is an m-dimensional vector, that is as close as possible to the true value function. If the value-function were known, we could simply minimize the mean-square error with respect to v:
$$\ell_{\text{MSE}}(v) = \mathbb{E}_{(s,a) \sim (\rho^\mu,\, \mu)} \Big[ \big( Q^v(s,a) - Q^\mu(s,a) \big)^2 \Big].$$
Unfortunately, it is impossible to minimize the mean-square error directly, since the value-function is the expected discounted future reward, rather than the reward. That is, the value function is not provided explicitly by the environment, not even as a black-box. The Bellman error is therefore used as a substitute for the mean-square error: | 1509.03005#20 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 21 | $$\ell_B(v) = \mathbb{E}_{(s,a) \sim (\rho^\mu,\, \mu)} \Bigg[ \bigg( \underbrace{\overbrace{r(s,a) + \gamma\, Q^v\big(s', \mu(s')\big)}^{\approx\, Q^\mu(s,a)} - Q^v(s,a)}_{\text{TD-error, } \delta} \bigg)^{2} \Bigg],$$
where $s'$ is the state subsequent to $s$.
Let $\delta_t = r_t - Q^v(s_t, a_t) + \gamma\, Q^v\big(s_{t+1}, \mu_\theta(s_{t+1})\big)$ be the TD-error. TD-learning updates $v$ according to
$$v_{t+1} \leftarrow v_t + \eta_t \cdot \delta_t \cdot \nabla_v Q^v(s_t, a_t) = v_t + \eta_t \cdot \delta_t \cdot \phi(s_t, a_t), \qquad (4)$$
where $\eta_t$ is a sequence of learning rates. The convergence properties of TD-learning and related algorithms have been studied extensively, see (Tsitsiklis and Roy, 1997; Dann et al., 2014).
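For concreteness, here is a minimal sketch of the linear TD(0) update in Eq. (4) on a toy environment; the environment, feature map, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

class TwoStateChain:
    """Toy deterministic MDP (illustrative): the state alternates between 0 and 1."""
    def reset(self):
        self.s, self.t = 0, 0
        return self.s
    def step(self, a):
        self.t += 1
        self.s = 1 - self.s
        return self.s, float(self.s == 1), self.t >= 10    # reward 1 on entering state 1

def td0_linear(env, policy, phi, m, gamma=0.9, eta=0.05, episodes=500):
    """Linear TD(0): v <- v + eta * delta * phi(s, a), with delta the TD-error."""
    v = np.zeros(m)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)
            q_sa = phi(s, a) @ v
            q_next = 0.0 if done else phi(s_next, policy(s_next)) @ v
            delta = r + gamma * q_next - q_sa      # TD-error
            v += eta * delta * phi(s, a)           # Eq. (4)
            s = s_next
    return v

phi = lambda s, a: np.eye(2)[s]                    # one-hot state features (action ignored)
print(td0_linear(TwoStateChain(), policy=lambda s: 0, phi=phi, m=2))
```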
# 3.4 Temporal difference gradient (TDG) learning
Finally, we apply temporal difference methods to estimate the gradient of the value function (see footnote 2), as required by condition C1 of Theorem 2. We are interested in gradient approximations of the form
$$\langle G^W(s, a), \epsilon \rangle = \langle W \cdot \psi(s, a), \epsilon \rangle,$$ | 1509.03005#21 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 22 | $$\langle G^W(s, a), \epsilon \rangle = \langle W \cdot \psi(s, a), \epsilon \rangle,$$
where $\psi : S \times A \to \mathbb{R}^n$ and $W$ is a $(d \times n)$-dimensional matrix. The goal is to find $W^*$ such that $G^{W^*}(s, a) \approx \nabla_\epsilon Q^\mu(s, a, \epsilon)\big|_{\epsilon=0} = \nabla_a Q^\mu(s, a)\big|_{a=\mu_\theta(s)}$ for all sampled state-action pairs.
It is convenient to introduce the notation $Q^\mu(s, a, \epsilon) := Q^\mu(s, a + \epsilon)$ and the shorthand $\tilde{s} := (s, \mu_\theta(s))$. Then, analogously to the mean-square error, define the perturbed gradient error:
$$\ell_{\text{PGE}}(V, W, \sigma^2) = \mathbb{E}_{\tilde{s}}\, \mathbb{E}_{\epsilon} \Big[ \big( Q^\mu(\tilde{s}, \epsilon) - \langle G^W(\tilde{s}), \epsilon \rangle - Q^V(\tilde{s}) \big)^2 \Big].$$
Given a good enough representation, Lemma 4 guarantees that minimizing the perturbed gradient error yields the gradient of the value function. Unfortunately, as discussed above, the value function cannot be queried directly. We therefore introduce the Bellman gradient error as a proxy
$$\ell_{\text{BG}}(V, W, \sigma^2) = \mathbb{E}_{\tilde{s}}\, \mathbb{E}_{\epsilon} \Bigg[ \bigg( \underbrace{\overbrace{r(\tilde{s}, \epsilon) + \gamma\, Q^V(\tilde{s}')}^{\approx\, Q^\mu(\tilde{s}, \epsilon)} - \langle G^W(\tilde{s}), \epsilon \rangle - Q^V(\tilde{s})}_{\text{TDG-error, } \xi} \bigg)^{2} \Bigg].$$ | 1509.03005#22 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 23 | |
2. Residual gradient (RG) and gradient temporal difference (GTD) methods were introduced in (Baird, 1995; Sutton et al., 2009a,b). The similar names may be confusing. RG and GTD methods are TD methods derived from gradient descent. In contrast, we develop a TD-based approach to learning gradients. The two approaches are thus complementary and straightforward to combine. However, in this paper we restrict to extending vanilla TD to learning gradients.
9
# Balduzzi and Ghifary
Set the TDG-error as
$$\xi_t = r(\tilde{s}_t) + \gamma\, Q^{V}(\tilde{s}_{t+1}) - \langle G^{W}(\tilde{s}_t), \epsilon_t \rangle - Q^{V}(\tilde{s}_t)$$
and, analogously to Eq. (4), define the TDG-updates
$$v_{t+1} \leftarrow v_t + \eta_t \cdot \xi_t \cdot \nabla_v Q^v(\tilde{s}_t) = v_t + \eta_t \cdot \xi_t \cdot \phi(\tilde{s}_t),$$
$$W_{t+1} \leftarrow W_t + \eta_t \cdot \xi_t \cdot \nabla_W \langle G^W(\tilde{s}_t), \epsilon_t \rangle = W_t + \eta_t \cdot \xi_t \cdot \epsilon_t \otimes \psi(\tilde{s}_t),$$
where ⬠@ (8) is the (d x n) matrix given by the outer product. We refer to â¬- ⬠as the perturbed TDG-error.
The following extension theorem allows us to import guarantees from temporal-difference learning to temporal-difference gradient learning. | 1509.03005#23 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 24 | The following extension theorem allows us to import guarantees from temporal-difference learning to temporal-difference gradient learning.
Theorem 5 (zeroth to first-order extension) Guarantees on TD-learning extend to TDG-learning.
The idea is to reformulate TDG-learning as TD-learning, with a slightly different reward function and function approximation. Since the function approximation is still linear, any guarantees on convergence for TD-learning transfer automatically to TDG-learning.
Proof First, we incorporate $\epsilon$ into the state-action pair. Define $\tilde{r}(s, a, \epsilon) := r(s, a + \epsilon)$ and
$$\tilde{\psi}(s, a, \epsilon) := \epsilon \otimes \psi(s, a).$$
Second, we define a dot product on matrices of equal size by flattening them down to vectors. More precisely, given two matrices $A$ and $B$ of the same dimension $(m \times n)$, define the dot-product $\langle A, B \rangle = \sum_{i,j} A_{ij} B_{ij}$. It is easy to see that
$$\langle G^W(s, a), \epsilon \rangle = \langle W \cdot \psi(s, a), \epsilon \rangle = \langle \tilde{\psi}(s, a, \epsilon), W \rangle.$$
The TDG-error can then be rewritten as | 1509.03005#24 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 25 | $$\langle G^W(s, a), \epsilon \rangle = \langle W \cdot \psi(s, a), \epsilon \rangle = \langle \tilde{\psi}(s, a, \epsilon), W \rangle.$$
The TDG-error can then be rewritten as
$$\xi = \tilde{r}(s, a, \epsilon) + \gamma\, Q^{V,W}(s', a', \epsilon') - Q^{V,W}(s, a, \epsilon),$$
where $Q^{V,W}(s, a, \epsilon) = \langle \phi(s, a), v \rangle + \langle \tilde{\psi}(s, a, \epsilon), W \rangle$ is a linear function approximation.
If we are in a setting where TD-learning is guaranteed to converge to the value-function, it follows that TDG-learning is also guaranteed to converge, since it is simply a different linear approximation. Thus, $Q^\mu(\tilde{s}, \epsilon) \approx Q^V(\tilde{s}) + \langle G^W(\tilde{s}), \epsilon \rangle$ and the result follows by Lemma 4. ∎
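The reduction in the proof amounts to running ordinary linear TD over stacked features; below is a small sketch, with illustrative dimensions, of the combined feature vector $[\phi(s,a);\ \mathrm{vec}(\epsilon \otimes \psi(s,a))]$ whose weight vector is $[v;\ \mathrm{vec}(W)]$.

```python
import numpy as np

def stacked_features(phi_sa, psi_sa, eps):
    """TDG-learning viewed as TD-learning: concatenate value features with vec(eps (x) psi)."""
    return np.concatenate([phi_sa, np.outer(eps, psi_sa).ravel()])

def unpack(theta, m, d, n):
    """Recover (v, W) from the flat TD weight vector."""
    return theta[:m], theta[m:].reshape(d, n)

m, d, n = 3, 2, 4                                  # illustrative dimensions
theta = np.zeros(m + d * n)                        # plays the role of (v, W) for vanilla TD
feat = stacked_features(np.ones(m), np.ones(n), eps=np.array([0.1, -0.2]))
q_estimate = feat @ theta                          # equals <phi, v> + <psi~, W>
```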
# 4. Algorithm: Value-Gradient Backpropagation
This section presents our model, which consists of three coupled neural networks that learn to estimate the value function, its gradient, and the optimal policy respectively.
Definition 2 (deviator-actor-critic) The deviator-actor-critic (DAC) model consists in three neural networks: | 1509.03005#25 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 26 | Definition 2 (deviator-actor-critic) The deviator-actor-critic (DAC) model consists in three neural networks:
• actor-network with policy $\mu_\Theta : S \to A \subset \mathbb{R}^d$;
• critic-network, $Q^V : S \times A \to \mathbb{R}$, that estimates the value function; and
• deviator-network, $G^W : S \times A \to \mathbb{R}^d$, that estimates the gradient of the value function.
Gaussian noise is added to the policy during training, resulting in actions $a = \mu_\Theta(s) + \epsilon$ where $\epsilon \sim N(0, \sigma^2 \cdot I_d)$. The outputs of the critic and deviator are combined as
$$Q^{V,W}\big(s, \mu_\Theta(s), \epsilon\big) = Q^V\big(s, \mu_\Theta(s)\big) + \big\langle G^W\big(s, \mu_\Theta(s)\big), \epsilon \big\rangle.$$
The Gaussian noise plays two roles. Firstly, it controls the explore/exploit tradeoff by controlling the extent to which Actor deviates from its current optimal policy. Secondly, it controls the "resolution" at which Deviator estimates the gradient.
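A minimal sketch of how the three outputs are combined during training; the tiny linear maps below are placeholders for the actor, critic, and deviator networks and are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim, sigma = 4, 2, 0.1

Theta = rng.standard_normal((action_dim, state_dim))
actor = lambda s: Theta @ s                                    # stands in for mu_Theta(s)
critic = lambda s, a: float(np.concatenate([s, a]).sum())      # stands in for Q^V(s, a)
deviator = lambda s, a: np.concatenate([s, a])[:action_dim]    # stands in for G^W(s, a)

s = rng.standard_normal(state_dim)
eps = sigma * rng.standard_normal(action_dim)   # controls exploration and the gradient "resolution"
a = actor(s) + eps                              # perturbed action actually executed
q_combined = critic(s, actor(s)) + deviator(s, actor(s)) @ eps   # Q^V + <G^W, eps>
```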
The three networks are trained by backpropagating three different signals. Critic, Deviator and Actor backpropagate the TDG-error, the perturbed TDG-error, and Deviator's gradient estimate respectively; see Algorithm 1. An explicit description of the weight updates of individual units is provided in Appendix A. | 1509.03005#26 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 27 | Deviator estimates the gradient of the value-function with respect to deviations $\epsilon$ from the current policy. Backpropagating the gradient through Actor allows us to estimate the influence of Actor-parameters on the value function as a function of their effect on the policy.
Algorithm 1: Value-Gradient Backpropagation (GProp).
for rounds $t = 1, 2, \ldots, T$ do
    Network gets state $s_t$, responds $a_t = \mu_{\Theta_t}(s_t) + \epsilon_t$, gets reward $r_t$
    Let $\tilde{s} := (s, \mu_\Theta(s))$.
    $\xi_t \leftarrow r_t + \gamma\, Q^{V_t}(\tilde{s}_{t+1}) - Q^{V_t}(\tilde{s}_t) - \langle G^{W_t}(\tilde{s}_t), \epsilon_t \rangle$   // compute TDG-error
    $\Theta_{t+1} \leftarrow \Theta_t + \eta_t^\Theta \cdot \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot G^{W_t}(\tilde{s}_t)$   // backpropagate $G^W$
    $V_{t+1} \leftarrow V_t + \eta_t^V \cdot \xi_t \cdot \nabla_V Q^{V_t}(\tilde{s}_t)$   // backpropagate $\xi$
    $W_{t+1} \leftarrow W_t + \eta_t^W \cdot \xi_t \cdot \nabla_W G^{W_t}(\tilde{s}_t) \cdot \epsilon_t$   // backpropagate $\xi \cdot \epsilon$ | 1509.03005#27 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |
1509.03005 | 28 | Critic and Deviator learn representations suited to estimating the value function and its gradient respectively. Note that even though the gradient is a linear function at a point, it can be a highly nonlinear function in general. Similarly, Actor learns a policy representation. We set the learning rates of Critic and Deviator to be equal ($\eta^V = \eta^W$) in the experiments in section 6. However, the perturbation $\epsilon$ has the effect of slowing down and stabilizing Deviator updates:
Remark 2 (stability) The magnitude of Deviator's weight updates depends on $\epsilon \sim N(0, \sigma^2 \cdot I_d)$ since they are computed by backpropagating the perturbed TDG-error $\xi \cdot \epsilon$. Thus as $\sigma^2 \to 0$, Deviator's learning rate essentially tends to zero. In general, Deviator learns more slowly than Critic.
This has a stabilizing effect on the policy since Actor is insulated from Critic: its weight updates only depend (directly) on the output of Deviator.
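Putting the pieces together, here is a compact sketch of the GProp loop of Algorithm 1 with linear function approximators standing in for the three neural networks; the feature maps, the environment interface (`reset()` and `step(a)` returning `(s_next, r, done)`), and all hyperparameters are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def gprop_linear(env, phi, psi, s_dim, a_dim, m, n,
                 gamma=0.9, sigma=0.1, etas=(0.01, 0.05, 0.05), T=10000, rng=None):
    """GProp sketch with linear Actor mu(s) = Theta @ s, Critic <phi(s,a), V>, Deviator W @ psi(s,a)."""
    rng = np.random.default_rng(rng)
    eta_actor, eta_critic, eta_dev = etas
    Theta = np.zeros((a_dim, s_dim))
    V, W = np.zeros(m), np.zeros((a_dim, n))
    s, done = env.reset(), False
    for _ in range(T):
        if done:
            s, done = env.reset(), False
        mu_s = Theta @ s
        eps = sigma * rng.standard_normal(a_dim)
        s_next, r, done = env.step(mu_s + eps)                     # act with exploration noise
        q_next = 0.0 if done else phi(s_next, Theta @ s_next) @ V
        g = W @ psi(s, mu_s)                                       # Deviator's gradient estimate
        xi = r + gamma * q_next - phi(s, mu_s) @ V - g @ eps       # TDG-error
        Theta += eta_actor * np.outer(g, s)                        # Actor: backpropagate G^W
        V += eta_critic * xi * phi(s, mu_s)                        # Critic: backpropagate xi
        W += eta_dev * xi * np.outer(eps, psi(s, mu_s))            # Deviator: backpropagate xi * eps
        s = s_next
    return Theta, V, W
```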
# 5. Analysis: Deep Compatible Function Approximation
Our main result is that the deviator's value gradient is compatible with the policy gradient of each unit in the actor-network, considered as an actor in its own right: | 1509.03005#28 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm. | http://arxiv.org/pdf/1509.03005 | David Balduzzi, Muhammad Ghifary | cs.LG, cs.AI, cs.NE, stat.ML | 27 pages | null | cs.LG | 20150910 | 20150910 | [
{
"id": "1502.02251"
},
{
"id": "1509.01851"
},
{
"id": "1504.00702"
}
] |