id
stringlengths
12
15
title
stringlengths
8
162
content
stringlengths
1
17.6k
prechunk_id
stringlengths
0
15
postchunk_id
stringlengths
0
15
arxiv_id
stringlengths
10
10
references
listlengths
1
1
1612.02136#38
Mode Regularized Generative Adversarial Networks
inverseâ of the generator, by reversing the order of layers and replacing the de-convolutional layers with convolutional layers. One has to pay particular attention to batch normalization layers. In DCGAN, there are batch nor- malization layers both in the generator and the discriminator. However, two classes of data go through the batch normalization layers in the generator. One come from sampled noise z, the other one come from the encoder. In our implementation, we separate the batch statistics for these two classes of data in the generator, while keeping the parameters of BN layer to be shared. In this way, the batch statistics of these two kinds of batches cannot interfere with each other.
1612.02136#37
1612.02136#39
1612.02136
[ "1511.05440" ]
1612.02136#39
Mode Regularized Generative Adversarial Networks
# C APPENDIX: ADDITIONAL SYNTHESIZED EXPERIMENTS To demonstrate the effectiveness of mode-regularized GANs proposed in this paper, we train a very simple GAN architecture on synthesized 2D dataset, following Metz et al. (2016). The data is sampled from a mixture of 6 Gaussians, with standard derivation of 0.1. The means of the Gaussians are placed around a circle with radius 5. The generator network has two ReLU hidden layers with 128 neurons. It generates 2D output samples from 3D uniform noise from [0,1]. The discriminator consists of only one fully connected layer of ReLU neurons, mapping the 2D input to
1612.02136#38
1612.02136#40
1612.02136
[ "1511.05440" ]
1612.02136#40
Mode Regularized Generative Adversarial Networks
11 Published as a conference paper at ICLR 2017 a real 1D number. Both networks are optimized with the Adam optimizer with the learning rate of 1e-4. In the regularized version, we choose λ1 = λ2 = 0.005. The comparison between the generator distribution from standard GAN and our proposed regularized GAN are shown in Figure 9. oo. GAN . - . . â . . | . \ « â oo â s â + N Reg-GAN + 7 : ; . : 4 s ? â ° ° @ . ad . . .
1612.02136#39
1612.02136#41
1612.02136
[ "1511.05440" ]
1612.02136#41
Mode Regularized Generative Adversarial Networks
Epoch | Epoch 200 Epoch 400 Epoch 600 Epoch 800 Epoch 1000 Target Figure 9: Comparison results on a toy 2D mixture of Gaussians dataset. The columns on the left shows heatmaps of the generator distributions as the number of training epochs increases, whereas the rightmost column presents the target, the original data distribution. The top row shows standard GAN result. The generator has a hard time oscillating among the modes of the data distribution, and is only able to â
1612.02136#40
1612.02136#42
1612.02136
[ "1511.05440" ]
1612.02136#42
Mode Regularized Generative Adversarial Networks
recoverâ a single data mode at once. In contrast, the bottom row shows results of our regularized GAN. Its generator quickly captures the underlying multiple modes and ï¬ ts the target distribution. # D APPENDIX: COMPARISON WITH VAEGAN In this appendix section, we demonstrate the effectiveness and uniqueness of mode-regularized GANs proposed in this paper as compared to Larsen et al. (2015) in terms of its theoretical dif- ference, sample quality and number of missing modes. With regard to the theoretical difference, the optimization of VAEGAN relies on the probabilistic variational bound, namely p(x) â ¥ Eq(z|x)[log p(x|z)] â KL(q(z|x)||p(z)).
1612.02136#41
1612.02136#43
1612.02136
[ "1511.05440" ]
1612.02136#43
Mode Regularized Generative Adversarial Networks
This variational bound together with a GAN loss is optimized with several assumptions imposed in VAEGAN: 1. In general, VAE is based on the assumption that the true posterior p(z|x) can be well approximated by factorized Gaussian distribution q. 2. As to VAEGAN, It is also assumed that the maximum likelihood objectives does not con- ï¬ ict with GAN objective in terms of probabilistic framework. The ï¬ rst assumption does not necessarily hold for GANs. We have found that in some trained models of DCGANs, the real posterior p(z|x) is even not guaranteed to have only one mode, not to mention it is anything close to factorized Gaussian. We believe that this difference in probabilistic framework is an essential obstacle when one tries to use the objective of VAEGAN as a regularizer. However, in our algorithm, where we use a plain auto-encoder instead of VAE as the objective. Plain auto-encooders works better than VAE for our purposes because as long as the model G(z) is able to generate training samples, there always exists a function Eâ (x) such that G(E(x)) = x. Our encoder can therefore be viewed as being trained to approximate this real encoder Eâ
1612.02136#42
1612.02136#44
1612.02136
[ "1511.05440" ]
1612.02136#44
Mode Regularized Generative Adversarial Networks
. There are no conï¬ icts between a good GAN generator and our regularization objective. Hence, our objectives can be used as regularizers for encoding the prior knowledge that good models should be able to generate the training samples. This is why our work is essentially different from VAEGAN. In our experiments, we also believe that this is the reason why VAEGAN generates worse samples than a carefully tuned regularized GANs. In terms of sample quality and missing modes, we run the ofï¬ cial code of VAEGAN 3 with their default setting. We train VAEGAN for 30 epochs 4 and our models for only 20 epochs.
1612.02136#43
1612.02136#45
1612.02136
[ "1511.05440" ]
1612.02136#45
Mode Regularized Generative Adversarial Networks
For fairness, 3https://github.com/andersbll/autoencoding_beyond_pixels 4Note that we also trained 20-epoch version of VAEGAN, however the samples seemed worse. 12 Published as a conference paper at ICLR 2017 their model was run 3 times and the trained model with the best sample visual quality was taken for the comparison. The generated samples are shown in Figure 10. The most obvious difference between our samples and VAEGANâ s samples is the face distortion, which is consistent with our experimental results in Section 4.2.2. We conjecture that the distortions of VAEGANâ s samples are due to the conï¬ icts be- tween the two objectives, as we present above.
1612.02136#44
1612.02136#46
1612.02136
[ "1511.05440" ]
1612.02136#46
Mode Regularized Generative Adversarial Networks
In other words, the way we introduce auto-encoders as regularizers for GAN models is different from VAEGANâ s. The difference is that the second as- sumption mentioned above is not required in our approaches. In our framework, the auto-encoders helps alter the generation manifolds, leading to fewer distortions in ï¬ ne-grained details in our gen- erated samples. â _ oO EE 6 2A VAEGAN -trained VAEGAN -reported yy = 222 Figure 10: Samples generated by our models and VAEGAN. The third line are samples generated by our self-trained VAEGAN model, with default settings. The last line are generated samples reported in the original VAEGAN paper. We depict both of them here for a fair comparison. In terms of the missing modes problem, we use the same method described in Section 4.2.1 for computing the number of images with missing modes. The results are shown below. Table 4: Number of images on the missing modes on CelebA estimated by a third-party discrimina- tor. The numbers in the brackets indicate the dimension of prior z. Ï denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to VAEGAN.
1612.02136#45
1612.02136#47
1612.02136
[ "1511.05440" ]
1612.02136#47
Mode Regularized Generative Adversarial Networks
Ï VAEGAN (100) Reg-GAN (100) Reg-GAN (200) MDGAN (200) 3.5 9720 754 3644 74 4.0 5862 42 391 13 We see that using our proposed regularizers results in a huge drop in the number of missing modes. We conjecture that the reason why VAEGAN performs very bad in our metric for missing modes is because the samples generated are of low quality, so the discriminator classiï¬ es the samples as â not on modeâ . Namely, the data generated is too far away from many real data modes. Essentially if a model generates very bad samples, we can say that the model misses all or most modes.
1612.02136#46
1612.02136#48
1612.02136
[ "1511.05440" ]
1612.02136#48
Mode Regularized Generative Adversarial Networks
To conduct more fair evaluation between VAEGAN and our methods, we also perform a blind human evaluation. Again we instructed ï¬ ve individuals to conduct this evaluation of sample variability. Without telling them which is generated by VAEGAN and which is generated by our methods, four people agree that our method wins in terms of sample diversity. One person thinks the samples are equally diverse. In conclusion, we demonstrate that our proposed mode-regularized GANs, i.e., Reg-GAN and MDGAN, are different from VAEGAN theoretically as discussed above. Such differences empiri- cally result in better sample quality and mode preserving ability, which are our main contributions.
1612.02136#47
1612.02136#49
1612.02136
[ "1511.05440" ]
1612.02136#49
Mode Regularized Generative Adversarial Networks
13
1612.02136#48
1612.02136
[ "1511.05440" ]
1612.01543#0
Towards the Limit of Network Quantization
7 1 0 2 2017 v o N 3 1 ] V C . s c [ 2 v 3 4 5 1 0 . 2 1 6 1 : v i X r a Published as a conference paper at ICLR 2017 # TOWARDS THE LIMIT OF NETWORK QUANTIZATION Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee Samsung US R&D Center, San Diego, CA 92121, USA {yoojin.c,mostafa.e,jungwon2.lee}@samsung.com # ABSTRACT
1612.01543#1
1612.01543
[ "1510.03009" ]
1612.01543#1
Towards the Limit of Network Quantization
Network quantization is one of network compression techniques to reduce the re- dundancy of deep neural networks. It reduces the number of distinct network pa- rameter values by quantization in order to save the storage for them. In this paper, we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative relation of quantization errors to the neural network loss function and identify that the Hessian-weighted distortion measure is locally the right objective function for the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When opti- mal variable-length binary codes, e.g., Huffman codes, are employed for further compression, we derive that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information the- ory and consequently propose two solutions of ECSQ for network quantization, i.e., uniform quantization and an iterative solution similar to Lloydâ s algorithm. Finally, using the simple uniform quantization followed by Huffman coding, we show from our experiments that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively. # INTRODUCTION Deep neural networks have emerged to be the state-of-the-art in the ï¬ eld of machine learning for image classiï¬ cation, object detection, speech recognition, natural language processing, and machine translation (LeCun et al., 2015). The substantial progress of neural networks however comes with high cost of computations and hardware resources resulting from a large number of parameters. For example, Krizhevsky et al. (2012) came up with a deep convolutional neural network consisting of 61 million parameters and won the ImageNet competition in 2012. It is followed by deeper neural networks with even larger numbers of parameters, e.g., Simonyan & Zisserman (2014). The large sizes of deep neural networks make it difï¬ cult to deploy them on resource-limited devices, e.g., mobile or portable devices, and network compression is of great interest in recent years to reduce computational cost and memory requirements for deep neural networks. Our interest in this paper is mainly on curtailing the size of the storage (memory) for network parameters (weights and biases).
1612.01543#0
1612.01543#2
1612.01543
[ "1510.03009" ]
1612.01543#2
Towards the Limit of Network Quantization
In particular, we focus on the network size compression by reducing the number of distinct network parameters by quantization. Besides network quantization, network pruning has been studied for network compression to remove redundant parameters permanently from neural networks (Mozer & Smolensky, 1989; LeCun et al., 1989; Hassibi & Stork, 1993; Han et al., 2015b; Lebedev & Lempitsky, 2016; Wen et al., 2016). Matrix/tensor factorization and low-rank approximation have been investigated as well to ï¬
1612.01543#1
1612.01543#3
1612.01543
[ "1510.03009" ]
1612.01543#3
Towards the Limit of Network Quantization
nd more efï¬ cient representations of neural networks with a smaller number of parameters and consequently to save computations (Sainath et al., 2013; Xue et al., 2013; Jaderberg et al., 2014; Lebedev et al., 2014; Yang et al., 2015; Liu et al., 2015; Kim et al., 2015; Tai et al., 2015; Novikov et al., 2015). Moreover, similar to network quantization, low-precision network implementation has been exam- ined in Vanhoucke et al. (2011); Courbariaux et al. (2014); Anwar et al. (2015); Gupta et al. (2015); Lin et al. (2015a). Some extremes of low-precision neural networks consisting of binary or ternary parameters can be found in Courbariaux et al. (2015); Lin et al. (2015b); Rastegari et al. (2016). We note that these are different types of network compression techniques, which can be employed on top of each other.
1612.01543#2
1612.01543#4
1612.01543
[ "1510.03009" ]
1612.01543#4
Towards the Limit of Network Quantization
1 Published as a conference paper at ICLR 2017 The most related work to our investigation in this paper can be found in Gong et al. (2014); Han et al. (2015a), where a conventional quantization method using k-means clustering is employed for net- work quantization. This conventional approach however is proposed with little consideration for the impact of quantization errors on the neural network performance loss and no effort to optimize the quantization procedure for a given compression ratio constraint. In this paper, we reveal the subop- timality of this conventional method and newly design quantization schemes for neural networks. In particular, we formulate an optimization problem to minimize the network performance loss due to quantization given a compression ratio constraint and ï¬
1612.01543#3
1612.01543#5
1612.01543
[ "1510.03009" ]
1612.01543#5
Towards the Limit of Network Quantization
nd efï¬ cient quantization methods for neural networks. The main contribution of the paper can be summarized as follows: â ¢ It is derived that the performance loss due to quantization in neural networks can be quan- tiï¬ ed approximately by the Hessian-weighted distortion measure. Then, Hessian-weighted k-means clustering is proposed for network quantization to minimize the performance loss. â ¢ It is identiï¬ ed that the optimization problem for network quantization provided a compres- sion ratio constraint can be reduced to an entropy-constrained scalar quantization (ECSQ) problem when optimal variable-length binary coding is employed after quantization. Two efï¬ cient heuristic solutions for ECSQ are proposed for network quantization, i.e., uniform quantization and an iterative solution similar to Lloydâ s algorithm. â ¢ As an alternative of Hessian, it is proposed to utilize some function (e.g., square root) of the second moment estimates of gradients when the Adam (Kingma & Ba, 2014) stochastic gradient descent (SGD) optimizer is used in training. The advantage of using this alterna- tive is that it is computed while training and can be obtained at the end of training at no additional cost.
1612.01543#4
1612.01543#6
1612.01543
[ "1510.03009" ]
1612.01543#6
Towards the Limit of Network Quantization
â ¢ It is shown how the proposed network quantization schemes can be applied for quantizing network parameters of all layers together at once, rather than layer-by-layer network quan- tization in Gong et al. (2014); Han et al. (2015a). This follows from our investigation that Hessian-weighting can handle the different impact of quantization errors properly not only within layers but also across layers. Moreover, quantizing network parameters of all layers together, one can even avoid layer-by-layer compression rate optimization.
1612.01543#5
1612.01543#7
1612.01543
[ "1510.03009" ]
1612.01543#7
Towards the Limit of Network Quantization
The rest of the paper is organized as follows. In Section 2, we deï¬ ne the network quantization prob- lem and review the conventional quantization method using k-means clustering. Section 3 discusses Hessian-weighted network quantization. Our entropy-constrained network quantization schemes follow in Section 4. Finally, experiment results and conclusion can be found in Section 5 and Sec- tion 6, respectively. # 2 NETWORK QUANTIZATION We consider a neural network that is already trained, pruned if employed and ï¬ ne-tuned before quan- tization. If no network pruning is employed, all parameters in a network are subject to quantization. For pruned networks, our focus is on quantization of unpruned parameters. The goal of network quantization is to quantize (unpruned) network parameters in order to reduce the size of the storage for them while minimizing the performance degradation due to quantization. For network quantization, network parameters are grouped into clusters. Parameters in the same cluster share their quantized value, which is the representative value (i.e., cluster center) of the cluster they belong to. After quantization, lossless binary coding follows to encode quantized parameters into binary codewords to store instead of actual parameter values.
1612.01543#6
1612.01543#8
1612.01543
[ "1510.03009" ]
1612.01543#8
Towards the Limit of Network Quantization
Either ï¬ xed-length binary coding or variable-length binary coding, e.g., Huffman coding, can be employed to this end. 2.1 COMPRESSION RATIO Suppose that we have total N parameters in a neural network. Before quantization, each parameter is assumed to be of b bits. For quantization, we partition the network parameters into k clusters. Let Ci be the set of network parameters in cluster i and let bi be the number of bits of the codeword assigned to the network parameters in cluster i for 1 â
1612.01543#7
1612.01543#9
1612.01543
[ "1510.03009" ]
1612.01543#9
Towards the Limit of Network Quantization
¤ i â ¤ k. For a lookup table to decode quantized 2 Published as a conference paper at ICLR 2017 values from their binary encoded codewords, we store k binary codewords (bi bits for 1 â ¤ i â ¤ k) and corresponding quantized values (b bits for each). The compression ratio is then given by Compression ratio = N b k i=1(|Ci| + 1)bi + kb . (1) Observe in (1) that the compression ratio depends not only on the number of clusters but also on the P sizes of the clusters and the lengths of the binary codewords assigned to them, in particular, when a variable-length code is used for encoding quantized values.
1612.01543#8
1612.01543#10
1612.01543
[ "1510.03009" ]
1612.01543#10
Towards the Limit of Network Quantization
For ï¬ xed-length codes, however, all codewords are of the same length, i.e., bi = â log2 kâ for all 1 â ¤ i â ¤ k, and thus the compression ratio is reduced to only a function of the number of clusters, i.e., k, assuming that N and b are given. 2.2 K-MEANS CLUSTERING Provided network parameters {wi}N i=1 to quantize, k-means clustering partitions them into k dis- joint sets (clusters), denoted by C1, C2, . . . , Ck, while minimizing the mean square quantization error (MSQE) as follows: # k argmin C1,C2,...,Ck |w â ci|2, where ci = 1 |Ci| w. (2) # wâ Ci X # wâ Ci X # i=1 X
1612.01543#9
1612.01543#11
1612.01543
[ "1510.03009" ]
1612.01543#11
Towards the Limit of Network Quantization
We observe two issues with employing k-means clustering for network quantization. â ¢ First, although k-means clustering minimizes the MSQE, it does not imply that k-means clustering minimizes the performance loss due to quantization as well in neural networks. K-means clustering treats quantization errors from all network parameters with equal im- portance. However, quantization errors from some network parameters may degrade the performance more signiï¬ cantly that the others. Thus, for minimizing the loss due to quan- tization in neural networks, one needs to take this dissimilarity into account. â ¢ Second, k-means clustering does not consider any compression ratio constraint. It simply minimizes its distortion measure for a given number of clusters, i.e., for k clusters. This is however suboptimal when variable-length coding follows since the compression ratio de- pends not only on the number of clusters but also on the sizes of the clusters and assigned codeword lengths to them, which are determined by the binary coding scheme employed af- ter clustering. Therefore, for the optimization of network quantization given a compression ratio constraint, one need to take the impact of binary coding into account, i.e., we need to solve the quantization problem under the actual compression ratio constraint imposed by the speciï¬ c binary coding scheme employed after clustering. # 3 HESSIAN-WEIGHTED NETWORK QUANTIZATION In this section, we analyze the impact of quantization errors on the neural network loss function and derive that the Hessian-weighted distortion measure is a relevant objective function for network quantization in order to minimize the quantization loss locally. Moreover, from this analysis, we pro- pose Hessian-weighted k-means clustering for network quantization to minimize the performance loss due to quantization in neural networks. 3.1 NETWORK MODEL We consider a general non-linear neural network that yields output y = f (x; w) from input x, where w = [w1 · · · wN ]T is the vector consisting of all trainable network parameters in the network; N is the total number of trainable parameters in the network. A loss function loss(y, Ë y) is deï¬ ned as the objective function that we aim to minimize in average, where Ë y = Ë y(x) is the expected (ground- truth) output for input x.
1612.01543#10
1612.01543#12
1612.01543
[ "1510.03009" ]
1612.01543#12
Towards the Limit of Network Quantization
Cross entropy or mean square error are typical examples of a loss function. Given a training data set Xtrain, we optimize network parameters by solving the following problem, e.g., approximately by using a stochastic gradient descent (SGD) method with mini-batches: Ë w = argmin w L(Xtrain; w), where L(X ; w) = 1 |X | loss(f (x; w), Ë y(x)). # xâ X X 3 Published as a conference paper at ICLR 2017 # 3.2 HESSIAN-WEIGHTED QUANTIZATION ERROR The average loss function L(X ; w) can be expanded by Taylor series with respect to w as follows: δL(X ; w) = g(w)T δw + 1 2 δwT H(w)δw + O(kδwk3), (3) # where where g(w) = â L(X ; w) â w , H(w) = â 2L(X ; w) â w2 ; the square matrix H(w) consisting of second-order partial derivatives is called as Hessian matrix or Hessian. Assume that the loss function has reached to one of its local minima, at w = Ë w, after training. At local minima, gradients are all zero, i.e., we have g( Ë w) = 0, and thus the ï¬ rst term in the right-hand side of (3) can be neglected at w = Ë w. The third term in the right-hand side of (3) is also ignored under the assumption that the average loss function is approximately quadratic at the local minimum w = Ë w. Finally, for simplicity, we approximate the Hessian matrix as a diagonal matrix by setting its off-diagonal terms to be zero. Then, it follows from (3) that N 1 2 hii( Ë w)|δ Ë wi|2, δL(X ; Ë w) â (4) i=1 X where hii( Ë w) is the second-order partial derivative of the average loss function with respect to wi evaluated at w = Ë w, which is the i-th diagonal element of the Hessian matrix H( Ë w). Now, we connect (4) with the problem of network quantization by treating δ Ë
1612.01543#11
1612.01543#13
1612.01543
[ "1510.03009" ]
1612.01543#13
Towards the Limit of Network Quantization
wi as the quantization error of network parameter wi at its local optimum wi = Ë wi, i.e., δ Ë wi = ¯wi â Ë wi, (5) where ¯wi is a quantized value of Ë wi. Finally, combining (4) and (5), we derive that the local impact of quantization on the average loss function at w = Ë w can be quantiï¬ ed approximately as follows: δL(X ; Ë w) â 1 2 N hii( Ë w)| Ë wi â ¯wi|2. (6) # i=1 X At a local minimum, the diagonal elements of Hessian, i.e., hii( Ë w)â s, are all non-negative and thus the summation in (6) is always additive, implying that the average loss function either increases or stays the same. Therefore, the performance degradation due to quantization of a neural network can be measured approximately by the Hessian-weighted distortion as shown in (6). Further discussion on the Hessian-weighted distortion measure can be found in Appendix A.1. # 3.3 HESSIAN-WEIGHTED K-MEANS CLUSTERING For notational simplicity, we use wi â ¡ Ë wi and hii â ¡ hii( Ë w) from now on.
1612.01543#12
1612.01543#14
1612.01543
[ "1510.03009" ]
1612.01543#14
Towards the Limit of Network Quantization
The optimal clustering that minimizes the Hessian-weighted distortion measure is given by argmin C1,C2,...,Ck k hii|wi â cj|2, where cj = wiâ Cj hiiwi wiâ Cj hii P . (7) # wiâ Cj X # j=1 X # P We call this as Hessian-weighted k-means clustering. Observe in (7) that we give a larger penalty for a network parameter in deï¬ ning the distortion measure for clustering when its second-order partial derivative is larger, in order to avoid a large deviation from its original value, since the impact on the loss function due to quantization is expected to be larger for that parameter. Hessian-weighted k-means clustering is locally optimal in minimizing the quantization loss when ï¬ xed-length binary coding follows, where the compression ratio solely depends on the number of clusters as shown in Section 2.1. Similar to the conventional k-means clustering, solving this op- timization is not easy, but Lloydâ s algorithm is still applicable as an efï¬ cient heuristic solution for this problem if Hessian-weighted means are used as cluster centers instead of non-weighted regular means.
1612.01543#13
1612.01543#15
1612.01543
[ "1510.03009" ]
1612.01543#15
Towards the Limit of Network Quantization
4 Published as a conference paper at ICLR 2017 3.4 HESSIAN COMPUTATION For obtaining Hessian, one needs to evaluate the second-order partial derivative of the average loss function with respect to each of network parameters, i.e., we need to calculate â 2L(X ; w) â w2 i â 2 â w2 i 1 |X | hii( Ë w) = = . w= Ë w w= Ë w (8) ° loss(f (x; w), Â¥(x)) Hessian. An efficient & Le Cun] # xâ X X
1612.01543#14
1612.01543#16
1612.01543
[ "1510.03009" ]
1612.01543#16
Towards the Limit of Network Quantization
Recall that we are interested in only the diagonal elements of Hessian. An efï¬ cient way of computing the diagonal of Hessian is presented in Le Cun (1987); Becker & Le Cun (1988) and it is based on the back propagation method that is similar to the back propagation algorithm used for computing ï¬ rst-order partial derivatives (gradients). That is, computing the diagonal of Hessian is of the same order of complexity as computing gradients. Hessian computation and our network quantization are performed after completing network training. For the data set X used to compute Hessian in (8), we can either reuse a training data set or use some other data set, e.g., validation data set. We observed from our experiments that even using a small subset of the training or validation data set is sufï¬ cient to yield good approximation of Hessian for network quantization. 3.5 ALTERNATIVE OF HESSIAN
1612.01543#15
1612.01543#17
1612.01543
[ "1510.03009" ]
1612.01543#17
Towards the Limit of Network Quantization
Although there is an efï¬ cient way to obtain the diagonal of Hessian as discussed in the previous sub- section, Hessian computation is not free. In order to avoid this additional Hessian computation, we propose to use an alternative metric instead of Hessian. In particular, we consider neural networks trained with the Adam SGD optimizer (Kingma & Ba, 2014) and propose to use some function (e.g., square root) of the second moment estimates of gradients as an alternative of Hessian. The Adam algorithm computes adaptive learning rates for individual network parameters from the ï¬ rst and second moment estimates of gradients. We compare the Adam method to Newtonâ s op- timization method using Hessian and notice that the second moment estimates of gradients in the Adam method act like the Hessian in Newtonâ s method. This observation leads us to use some func- tion (e.g., square root) of the second moment estimates of gradients as an alternative of Hessian. The advantage of using the second moment estimates from the Adam method is that they are com- puted while training and we can obtain them at the end of training at no additional cost. It makes Hessian-weighting more feasible for deep neural networks, which have millions of parameters. We note that similar quantities can be found and used for other SGD optimization methods using adaptive learning rates, e.g., AdaGrad (Duchi et al., 2011), Adadelta (Zeiler, 2012) and RMSProp (Tieleman & Hinton, 2012). 3.6 QUANTIZATION OF ALL LAYERS We propose quantizing the network parameters of all layers in a neural network together at once by taking Hessian-weight into account. Layer-by-layer quantization was examined in the previous work (Gong et al., 2014; Han et al., 2015a). However, e.g., in Han et al. (2015a), a larger number of bits (a larger number of clusters) are assigned to convolutional layers than fully-connected layers, which implies that they heuristically treat convolutional layers more importantly. This follows from the fact that the impact of quantization errors on the performance varies signiï¬ cantly across layers; some layers, e.g., convolutional layers, may be more important than the others. This concern is exactly what we can address by Hessian-weighting.
1612.01543#16
1612.01543#18
1612.01543
[ "1510.03009" ]
1612.01543#18
Towards the Limit of Network Quantization
Hessian-weighting properly handles the different impact of quantization errors not only within layers but also across layers and thus it can be employed for quantizing all layers of a network together. The impact of quantization errors may vary more substantially across layers than within layers. Thus, Hessian-weighting may show more beneï¬ t in deeper neural networks. We note that Hessian- weighting can still provide gain even for layer-by-layer quantization since it can address the different impact of the quantization errors of network parameters within each layer as well. Recent neural networks are getting deeper, e.g., see Szegedy et al. (2015a;b); He et al. (2015). For such deep neural networks, quantizing network parameters of all layers together is even more advan- tageous since we can avoid layer-by-layer compression rate optimization. Optimizing compression
1612.01543#17
1612.01543#19
1612.01543
[ "1510.03009" ]
1612.01543#19
Towards the Limit of Network Quantization
5 Published as a conference paper at ICLR 2017 ratios jointly across all individual layers (to maximize the overall compression ratio for a network) requires exponential time complexity with respect to the number of layers. This is because the total number of possible combinations of compression ratios for individual layers increases exponentially as the number of layers increases. # 4 ENTROPY-CONSTRAINED NETWORK QUANTIZATION In this section, we investigate how to solve the network quantization problem under a constraint on the compression ratio. In designing network quantization schemes, we not only want to minimize the performance loss but also want to maximize the compression ratio. In Section 3, we explored how to quantify and minimize the loss due to quantization. In this section, we investigate how to take the compression ratio into account properly in the optimization of network quantization. 4.1 ENTROPY CODING After quantizing network parameters by clustering, lossless data compression by variable-length bi- nary coding can be followed for compressing quantized values. There is a set of optimal codes that achieve the minimum average codeword length for a given source. Entropy is the theoretical limit of the average codeword length per symbol that we can achieve by lossless data compression, proved by Shannon (see, e.g., Cover & Thomas (2012, Section 5.3)). It is known that optimal codes achieve this limit with some overhead less than 1 bit when only integer-length codewords are allowed. So optimal coding is also called as entropy coding. Huffman coding is one of entropy coding schemes commonly used when the source distribution is provided (see, e.g., Cover & Thomas (2012, Sec- tion 5.6)), or can be estimated. 4.2 ENTROPY-CONSTRAINED SCALAR QUANTIZATION (ECSQ) Considering a compression ratio constraint in network quantization, we need to solve the clustering problem in (2) or (7) under the compression ratio constraint given by # k b 1 N > C, where ¯b = Compression ratio = |Ci|bi, k i=1 bi + kb)/N ¯b + ( (9) # i=1 X which follows from (1). This optimization problem is too complex to solve for any arbitrary variable- length binary code since the average codeword length ¯b can be arbitrary. However, we identify that it can be simpliï¬
1612.01543#18
1612.01543#20
1612.01543
[ "1510.03009" ]
1612.01543#20
Towards the Limit of Network Quantization
ed if optimal codes, e.g., Huffman codes, are assumed to be used. In particular, optimal coding closely achieves the lower limit of the average source code length, i.e., entropy, and then we approximately have # k ¯b â H = â pi log2 pi, (10) i=1 X where H is the entropy of the quantized network parameters after clustering (i.e., source), given that pi = |Ci|/N is the ratio of the number of network parameters in cluster Ci to the number of all network parameters (i.e., source distribution). Moreover, assuming that N â « k, we have k 1 N bi + kb â 0, ! (11) # i=1 X in (9). From (10) and (11), the constraint in (9) can be altered to an entropy constraint given by k H = â pi log2 pi < R, # i=1 X where R â b/C.
1612.01543#19
1612.01543#21
1612.01543
[ "1510.03009" ]
1612.01543#21
Towards the Limit of Network Quantization
In summary, assuming that optimal coding is employed after clustering, one can approximately replace a compression ratio constraint with an entropy constraint for the clustering output. The network quantization problem is then translated into a quantization problem with an en- tropy constraint, which is called as entropy-constrained scalar quantization (ECSQ) in information theory. Two efï¬ cient heuristic solutions for ECSQ are proposed for network quantization in the fol- lowing subsections, i.e., uniform quantization and an iterative solution similar to Lloydâ s algorithm for k-means clustering.
1612.01543#20
1612.01543#22
1612.01543
[ "1510.03009" ]
1612.01543#22
Towards the Limit of Network Quantization
6 Published as a conference paper at ICLR 2017 4.3 UNIFORM QUANTIZATION It is shown in Gish & Pierce (1968) that the uniform quantizer is asymptotically optimal in mini- mizing the mean square quantization error for any random source with a reasonably smooth density function as the resolution becomes inï¬ nite, i.e., as the number of clusters k â â . This asymptotic result leads us to come up with a very simple but efï¬ cient network quantization scheme as follows: 1. We ï¬ rst set uniformly spaced thresholds and divide network parameters into clusters. 2. After determining clusters, their quantized values (cluster centers) are obtained by taking the mean of network parameters in each cluster. Note that one can use Hessian-weighted mean instead of non-weighted mean in computing clus- ter centers in the second step above in order to take the beneï¬ t of Hessian-weighting. A perfor- mance comparison of uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean can be found in Appendix A.2. Although uniform quantization is a straightforward method, it has never been shown before in the literature that it is actually one of the most efï¬ cient quantization schemes for neural networks when optimal variable-length coding, e.g., Huffman coding, follows. We note that uniform quantization is not always good; it is inefï¬ cient for ï¬ xed-length coding, which is also ï¬ rst shown in this paper. 4.4 # ITERATIVE ALGORITHM TO SOLVE ECSQ Another scheme proposed to solve the ECSQ problem for network quantization is an iterative algo- rithm, which is similar to Lloydâ s algorithm for k-means clustering. Although this iterative solution is more complicated than the uniform quantization in Section 4.3, it ï¬ nds a local optimum for a given discrete source. An iterative algorithm to solve the general ECSQ problem is provided in Chou et al. (1989). We derive a similar iterative algorithm to solve the ECSQ problem for network quantization. The main difference from the method in Chou et al. (1989) is that we minimize the Hessian-weighted distortion measure instead of the non-weighted regular distortion measure for op- timal quantization. The detailed algorithm and further discussion can be found in Appendix A.3.
1612.01543#21
1612.01543#23
1612.01543
[ "1510.03009" ]
1612.01543#23
Towards the Limit of Network Quantization
# 5 EXPERIMENTS This section presents our experiment results for the proposed network quantization schemes in three exemplary convolutional neural networks: (a) LeNet (LeCun et al., 1998) for the MNIST data set, (b) ResNet (He et al., 2015) for the CIFAR-10 data set, and (c) AlexNet (Krizhevsky et al., 2012) for the ImageNet ILSVRC-2012 data set. Our experiments can be summarized as follows:
1612.01543#22
1612.01543#24
1612.01543
[ "1510.03009" ]
1612.01543#24
Towards the Limit of Network Quantization
â ¢ We employ the proposed network quantization methods to quantize all of network param- eters in a network together at once, as discussed in Section 3.6. We evaluate the performance of the proposed network quantization methods with and with- out network pruning. For a pruned model, we need to store not only the values of unpruned parameters but also their respective indexes (locations) in the original model. For the index information, we compute index differences between unpruned network parameters in the original model and further compress them by Huffman coding as in Han et al. (2015a). â ¢ For Hessian computation, 50,000 samples of the training set are reused. We also evaluate the performance when Hessian is computed with 1,000 samples only. â ¢ Finally, we evaluate the performance of our network quantization schemes using Hessian when its alternative is used instead, as discussed in Section 3.5. To this end, we retrain the considered neural networks with the Adam SGD optimizer and obtain the second moment estimates of gradients at the end of training. Then, we use the square roots of the second moment estimates instead of Hessian and evaluate the performance. # 5.1 EXPERIMENT MODELS First, we evaluate our network quantization schemes for the MNIST data set with a simpliï¬ ed ver- sion of LeNet5 (LeCun et al., 1998), consisting of two convolutional layers and two fully-connected
1612.01543#23
1612.01543#25
1612.01543
[ "1510.03009" ]
1612.01543#25
Towards the Limit of Network Quantization
7 Published as a conference paper at ICLR 2017 100 100 100 100 90 90 80 80 ) % ( 70 60 ) % ( 70 60 y c a r u c c A 50 40 30 y c a r u c c A 50 40 30 20 10 0 0 1 kâ
1612.01543#24
1612.01543#26
1612.01543
[ "1510.03009" ]
1612.01543#26
Towards the Limit of Network Quantization
means Hessianâ weighted kâ means Uniform quantization Iterative ECSQ 3 2 7 Codeword length (bits) 4 5 6 8 9 20 10 0 0 1 kâ means Hessianâ weighted kâ means Uniform quantization Iterative ECSQ 3 2 7 Codeword length (bits) 4 5 6 8 (a) Fixed-length coding (b) Fixed-length coding + ï¬ ne-tuning 100 100 90 90 80 80 ) % ( 70 60 ) % ( 70 60 y c a r u c c A 50 40 30 y c a r u c c A 50 40 30 20 10 0 0 kâ
1612.01543#25
1612.01543#27
1612.01543
[ "1510.03009" ]
1612.01543#27
Towards the Limit of Network Quantization
means Hessianâ weighted kâ means Uniform quantization Iterative ECSQ 3 1 8 Average codeword length (bits) 2 4 5 6 7 (c) Huffman coding 9 kâ means Hessianâ weighted kâ means Uniform quantization Iterative ECSQ 3 20 10 0 0 8 1 Average codeword length (bits) (d) Huffman coding + ï¬ ne-tuning 2 4 5 6 7 9 9 Figure 1: Accuracy versus average codeword length per network parameter after network quantiza- tion for 32-layer ResNet. layers followed by a soft-max layer. It has total 431,080 parameters and achieves 99.25% accuracy. For a pruned model, we prune 91% of the original network parameters and ï¬ ne-tune the rest. Second, we experiment our network quantization schemes for the CIFAR-10 data set (Krizhevsky, 2009) with a pre-trained 32-layer ResNet (He et al., 2015). The 32-layer ResNet consists of 464,154 parameters in total and achieves 92.58% accuracy. For a pruned model, we prune 80% of the original network parameters and ï¬
1612.01543#26
1612.01543#28
1612.01543
[ "1510.03009" ]
1612.01543#28
Towards the Limit of Network Quantization
ne-tune the rest. Third, we evaluate our network quantization schemes with AlexNet (Krizhevsky et al., 2012) for the ImageNet ILSVRC-2012 data set (Russakovsky et al., 2015). We obtain a pre-trained AlexNet Caffe model, which achieves 57.16% top-1 accuracy. For a pruned model, we prune 89% parameters and ï¬ ne-tune the rest. In ï¬ ne-tuning, the Adam SGD optimizer is used in order to avoid the computation of Hessian by utilizing its alternative (see Section 3.5). However, the pruned model does not recover the original accuracy after ï¬ ne-tuning with the Adam method; the top-1 accuracy recovered after pruning and ï¬ ne-tuning is 56.00%. We are able to ï¬ nd a better pruned model achieving the original accuracy by pruning and retraining iteratively (Han et al., 2015b), which is however not used here. 5.2 EXPERIMENT RESULTS
1612.01543#27
1612.01543#29
1612.01543
[ "1510.03009" ]
1612.01543#29
Towards the Limit of Network Quantization
We ï¬ rst present the quantization results without pruning for 32-layer ResNet in Figure 1, where the accuracy of 32-layer ResNet is plotted against the average codeword length per network pa- rameter after quantization. When ï¬ xed-length coding is employed, the proposed Hessian-weighted k-means clustering method performs the best, as expected. Observe that Hessian-weighted k-means clustering yields better accuracy than others even after ï¬ ne-tuning. On the other hand, when Huff- man coding is employed, uniform quantization and the iterative algorithm for ECSQ outperform Hessian-weighted k-means clustering and k-means clustering. However, these two ECSQ solutions underperform Hessian-weighted k-means clustering and even k-means clustering when ï¬ xed-length coding is employed since they are optimized for optimal variable-length coding.
1612.01543#28
1612.01543#30
1612.01543
[ "1510.03009" ]
1612.01543#30
Towards the Limit of Network Quantization
8 Published as a conference paper at ICLR 2017 100 100 99.5 90 99 80 ) % ( y c a r u c c A 98.5 98 97.5 97 96.5 ) % ( y c a r u c c A 70 60 50 40 30 96 95.5 95 0 kâ
1612.01543#29
1612.01543#31
1612.01543
[ "1510.03009" ]
1612.01543#31
Towards the Limit of Network Quantization
means Hessianâ weighted kâ means (50,000) Hessianâ weighted kâ means (1,000) Altâ Hessianâ weighted kâ means 1 2 3 4 5 Average codeword length (bits) 6 7 20 10 0 0 kâ means Hessianâ weighted kâ means (50,000) Hessianâ weighted kâ means (1,000) Altâ Hessianâ weighted kâ means 1 8 Average codeword length (bits) 2 3 4 5 6 7 (a) LeNet (b) ResNet 9 Figure 2: Accuracy versus average codeword length per network parameter after network quanti- zation, Huffman coding and ï¬ ne-tuning for LeNet and 32-layer ResNet when Hessian is computed with 50,000 or 1,000 samples and when the square roots of the second moment estimates of gradients are used instead of Hessian as an alternative. Figure 2 shows the performance of Hessian-weighted k-means clustering when Hessian is computed with a small number of samples (1,000 samples). Observe that even using the Hessian computed with a small number of samples yields almost the same performance. We also show the performance of Hessian-weighted k-means clustering when an alternative of Hessian is used instead of Hessian as explained in Section 3.5. In particular, the square roots of the second moment estimates of gradients are used instead of Hessian, and using this alternative provides similar performance to using Hessian. In Table 1, we summarize the compression ratios that we can achieve with different network quanti- zation methods for pruned models. The original network parameters are 32-bit ï¬ oat numbers. Using the simple uniform quantization followed by Huffman coding, we achieve the compression ratios of 51.25, 22.17 and 40.65 (i.e., the compressed model sizes are 1.95%, 4.51% and 2.46% of the original model sizes) for LeNet, 32-layer ResNet and AlexNet, respectively, at no or marginal per- formance loss.
1612.01543#30
1612.01543#32
1612.01543
[ "1510.03009" ]
1612.01543#32
Towards the Limit of Network Quantization
Observe that the loss in the compressed AlexNet is mainly due to pruning. Here, we also compare our network quantization results to the ones in Han et al. (2015a). Note that layer-by- layer quantization with k-means clustering is evaluated in Han et al. (2015a) while our quantization schemes including k-means clustering are employed to quantize network parameters of all layers together at once (see Section 3.6). # 6 CONCLUSION This paper investigates the quantization problem of network parameters in deep neural networks. We identify the suboptimality of the conventional quantization method using k-means clustering and newly design network quantization schemes so that they can minimize the performance loss due to quantization given a compression ratio constraint. In particular, we analytically show that Hessian can be used as a measure of the importance of network parameters and propose to minimize Hessian- weighted quantization errors in average for clustering network parameters to quantize. Hessian- weighting is beneï¬ cial in quantizing all of the network parameters together at once since it can handle the different impact of quantization errors properly not only within layers but also across layers. Furthermore, we make a connection from the network quantization problem to the entropy- constrained data compression problem in information theory and push the compression ratio to the limit that information theory provides.
1612.01543#31
1612.01543#33
1612.01543
[ "1510.03009" ]
1612.01543#33
Towards the Limit of Network Quantization
Two efï¬ cient heuristic solutions are presented to this end, i.e., uniform quantization and an iterative solution for ECSQ. Our experiment results show that the proposed network quantization schemes provide considerable gain over the conventional method using k-means clustering, in particular for large and deep neural networks. # REFERENCES Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Fixed point optimization of deep convolutional neural networks for object recognition. In IEEE International Conference on Acoustics, Speech
1612.01543#32
1612.01543#34
1612.01543
[ "1510.03009" ]
1612.01543#34
Towards the Limit of Network Quantization
9 Published as a conference paper at ICLR 2017 Table 1: Summary of network quantization results with Huffman coding for pruned models. Accuracy % Compression ratio - 10.13 44.58 47.16 51.25 49.01 39.00 - 4.52 18.25 20.51 22.17 21.01 N/A - 7.91 30.53 33.71 40.65 35.00 99.25 99.27 99.27 99.27 99.28 99.27 99.26 92.58 92.58 92.64 92.67 92.68 92.73 N/A 57.16 56.00 56.12 56.04 56.20 57.22 Original model Pruned model k-means Hessian-weighted k-means Uniform quantization Iterative ECSQ Pruning + Quantization all layers + Huffman coding LeNet Deep compression (Han et al., 2015a) Original model Pruned model k-means Hessian-weighted k-means Uniform quantization Iterative ECSQ Pruning + Quantization all layers + Huffman coding ResNet Deep compression (Han et al., 2015a) Original model Pruned model Pruning + Quantization all layers + Huffman coding Deep compression (Han et al., 2015a) k-means Alt-Hessian-weighted k-means Uniform quantization AlexNet and Signal Processing, pp. 1131â 1135, 2015.
1612.01543#33
1612.01543#35
1612.01543
[ "1510.03009" ]
1612.01543#35
Towards the Limit of Network Quantization
Sue Becker and Yann Le Cun. Improving the convergence of back-propagation learning with second In Proceedings of the Connectionist Models Summer School, pp. 29â 37. San order methods. Matteo, CA: Morgan Kaufmann, 1988. Philip A Chou, Tom Lookabaugh, and Robert M Gray. Entropy-constrained vector quantization. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(1):31â 42, 1989. Matthieu Courbariaux, Jean-Pierre David, and Yoshua Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014. Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David.
1612.01543#34
1612.01543#36
1612.01543
[ "1510.03009" ]
1612.01543#36
Towards the Limit of Network Quantization
Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123â 3131, 2015. Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012. John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121â 2159, 2011.
1612.01543#35
1612.01543#37
1612.01543
[ "1510.03009" ]
1612.01543#37
Towards the Limit of Network Quantization
Herbert Gish and John Pierce. Asymptotically efï¬ cient quantizing. IEEE Transactions on Informa- tion Theory, 14(5):676â 683, 1968. Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional net- works using vector quantization. arXiv preprint arXiv:1412.6115, 2014. Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1737â 1746, 2015. Song Han, Huizi Mao, and William J Dally.
1612.01543#36
1612.01543#38
1612.01543
[ "1510.03009" ]
1612.01543#38
Towards the Limit of Network Quantization
Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015a. 10 Published as a conference paper at ICLR 2017 Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efï¬ cient neural network. In Advances in Neural Information Processing Systems, pp. 1135â 1143, 2015b.
1612.01543#37
1612.01543#39
1612.01543
[ "1510.03009" ]
1612.01543#39
Towards the Limit of Network Quantization
Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164â 171, 1993. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. arXiv preprint arXiv:1512.03385, 2015. Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference, 2014.
1612.01543#38
1612.01543#40
1612.01543
[ "1510.03009" ]
1612.01543#40
Towards the Limit of Network Quantization
Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Com- pression of deep convolutional neural networks for fast and low power mobile applications. arXiv preprint arXiv:1511.06530, 2015. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬ cation with deep convo- lutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097â 1105, 2012. Yann Le Cun. Mod`eles connexionnistes de lâ apprentissage. PhD thesis, Paris 6, 1987.
1612.01543#39
1612.01543#41
1612.01543
[ "1510.03009" ]
1612.01543#41
Towards the Limit of Network Quantization
Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554â 2564, 2016. Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using ï¬ ne-tuned CP-decomposition. arXiv preprint arXiv:1412.6553, 2014. Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598â 605, 1989. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â 2324, 1998. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning.
1612.01543#40
1612.01543#42
1612.01543
[ "1510.03009" ]
1612.01543#42
Towards the Limit of Network Quantization
Nature, 521(7553):436â 444, 2015. Darryl D Lin, Sachin S Talathi, and V Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. arXiv preprint arXiv:1511.06393, 2015a. Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015b. Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolu- tional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 806â 814, 2015.
1612.01543#41
1612.01543#43
1612.01543
[ "1510.03009" ]
1612.01543#43
Towards the Limit of Network Quantization
Michael C Mozer and Paul Smolensky. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Advances in Neural Information Processing Systems, pp. 107â 115, 1989. Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pp. 442â 450, 2015. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi.
1612.01543#42
1612.01543#44
1612.01543
[ "1510.03009" ]
1612.01543#44
Towards the Limit of Network Quantization
XNOR-Net: Imagenet classiï¬ cation using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016. 11 Published as a conference paper at ICLR 2017 Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â 252, 2015. Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran.
1612.01543#43
1612.01543#45
1612.01543
[ "1510.03009" ]
1612.01543#45
Towards the Limit of Network Quantization
Low- rank matrix factorization for deep neural network training with high-dimensional output targets. In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6655â 6659, 2013. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du- mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1â 9, 2015a. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Re- thinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015b. Cheng Tai, Tong Xiao, Xiaogang Wang, et al. Convolutional neural networks with low-rank regu- larization. arXiv preprint arXiv:1511.06067, 2015. Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
1612.01543#44
1612.01543#46
1612.01543
[ "1510.03009" ]
1612.01543#46
Towards the Limit of Network Quantization
Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on CPUs. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.
Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2074–2082, 2016.
Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pp. 2365–2369, 2013.
1612.01543#45
1612.01543#47
1612.01543
[ "1510.03009" ]
1612.01543#47
Towards the Limit of Network Quantization
Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1476–1483, 2015.
Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
# A APPENDIX
A.1 FURTHER DISCUSSION ON THE HESSIAN-WEIGHTED QUANTIZATION ERROR
The diagonal approximation for the Hessian simplifies the optimization problem, as well as its solution, for network quantization. This simplification comes with some performance loss. We conjecture that the loss due to this approximation is small: the contributions from off-diagonal terms are not always additive and their summation may end up with a small value, whereas diagonal terms are all non-negative, so their contributions are always additive. We do not verify this conjecture in this paper, since solving the problem without the diagonal approximation is too complex; we would even need to compute the whole Hessian matrix, which is also too costly.
Observe that the relation of the Hessian-weighted distortion measure to the quantization loss holds for any model whose objective function can be approximated as a quadratic function with respect to the parameters to quantize. Hence, the quantization methods proposed in this paper to minimize the Hessian-weighted distortion measure are not specific to neural networks; they are generally applicable to quantizing the parameters of any model whose objective function is approximately locally quadratic in those parameters.
Finally, we do not consider the interactions between quantization and retraining in our formulation in Section 3.2. We analyze the expected loss due to quantization assuming no further retraining, and focus on finding optimal network quantization schemes that minimize the performance loss. In our experiments, however, we further fine-tune the quantized values (cluster centers) so that we can recover the loss due to quantization and improve the performance.
A.2 EXPERIMENT RESULTS FOR UNIFORM QUANTIZATION
1612.01543#46
1612.01543#48
1612.01543
[ "1510.03009" ]
1612.01543#48
Towards the Limit of Network Quantization
We compare uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean in Figure 3, which shows that uniform quantization with Hessian-weighted mean slightly outperforms uniform quantization with non-weighted mean.
Figure 3: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning for 32-layer ResNet, when uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean are used. (a) Huffman coding; (b) Huffman coding + fine-tuning.
# A.3 FURTHER DISCUSSION ON THE ITERATIVE ALGORITHM FOR ECSQ
In order to solve the ECSQ problem for network quantization, we define a Lagrangian cost function:
J_\lambda(C_1, C_2, \dots, C_k) = D + \lambda H = \frac{1}{N} \sum_{j=1}^{k} \sum_{w_i \in C_j} \underbrace{\left( h_{ii} |w_i - c_j|^2 - \lambda \log_2 p_j \right)}_{=\, d_\lambda(i,j)}, \quad (12)
where
D = \frac{1}{N} \sum_{j=1}^{k} \sum_{w_i \in C_j} h_{ii} |w_i - c_j|^2, \qquad H = -\sum_{j=1}^{k} p_j \log_2 p_j.
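As an illustration only (this sketch is not from the paper; the variable names and the use of NumPy are my own assumptions), the Lagrangian cost of Eq. (12) could be evaluated for a given clustering as follows:

```python
import numpy as np

def ecsq_cost(w, h, assign, centers, lam):
    # w: network parameters, h: diagonal Hessian entries h_ii (same shape as w),
    # assign: cluster index per parameter, centers: cluster centers c_j,
    # lam: Lagrange multiplier trading distortion against entropy.
    n = w.size
    # Hessian-weighted distortion D, averaged over all N parameters.
    d = np.sum(h * (w - centers[assign]) ** 2) / n
    # Cluster proportions p_j and the entropy H of the quantized output.
    p = np.bincount(assign, minlength=centers.size) / n
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return d + lam * entropy
```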
1612.01543#47
1612.01543#49
1612.01543
[ "1510.03009" ]
1612.01543#49
Towards the Limit of Network Quantization
Algorithm 1 Iterative solution for entropy-constrained network quantization
Initialization: n \leftarrow 0; initialize the centers of the k clusters, c_1^{(0)}, \dots, c_k^{(0)}; initialize the proportions of the k clusters (all equal initially), p_1^{(0)}, \dots, p_k^{(0)}.
Assignment: for all network parameters i = 1 \to N, assign w_i to the cluster l that minimizes the individual Lagrangian cost, i.e., C_l^{(n+1)} \leftarrow C_l^{(n+1)} \cup \{w_i\} for l = \arg\min_j \left\{ h_{ii} |w_i - c_j^{(n)}|^2 - \lambda \log_2 p_j^{(n)} \right\}.
Update: for all clusters j = 1 \to k, update the cluster center and the proportion of cluster j: c_j^{(n+1)} \leftarrow
1612.01543#48
1612.01543#50
1612.01543
[ "1510.03009" ]
1612.01543#50
Towards the Limit of Network Quantization
\frac{\sum_{w_i \in C_j^{(n+1)}} h_{ii} w_i}{\sum_{w_i \in C_j^{(n+1)}} h_{ii}}, \qquad p_j^{(n+1)} \leftarrow \frac{|C_j^{(n+1)}|}{N}.
Then set n \leftarrow n + 1 and repeat until the Lagrangian cost function J_\lambda decreases by less than some threshold.
The entropy-constrained network quantization problem is then reduced to finding k partitions (clusters) C_1, C_2, \dots, C_k that minimize the Lagrangian cost function as follows:
\arg\min_{C_1, C_2, \dots, C_k} J_\lambda(C_1, C_2, \dots, C_k).
A heuristic iterative algorithm to solve this method of Lagrange multipliers for network quantization is presented in Algorithm 1.
1612.01543#49
1612.01543#51
1612.01543
[ "1510.03009" ]
1612.01543#51
Towards the Limit of Network Quantization
It is similar to Lloyd's algorithm for k-means clustering. The key difference is how network parameters are partitioned at the assignment step. In Lloyd's algorithm, the Euclidean distance (quantization error) is minimized. For ECSQ, the individual Lagrangian cost function, i.e., d_\lambda(i, j) in (12), is minimized instead, which includes both the quantization error and the expected codeword length after entropy coding.
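A minimal NumPy sketch of this Lloyd-like iteration follows. It is an illustration under my own assumptions: a fixed iteration count replaces the paper's stopping criterion, a simple uniform initialization is used, and a small floor on p_j keeps the log2 term finite.

```python
import numpy as np

def ecsq_quantize(w, h, k, lam, iters=50):
    centers = np.linspace(w.min(), w.max(), k)   # initial cluster centers
    p = np.full(k, 1.0 / k)                      # initial cluster proportions
    for _ in range(iters):
        # Assignment step: per-parameter Lagrangian cost d_lambda(i, j) over all clusters.
        cost = h[:, None] * (w[:, None] - centers[None, :]) ** 2 \
               - lam * np.log2(np.maximum(p, 1e-12))[None, :]
        assign = np.argmin(cost, axis=1)
        # Update step: Hessian-weighted means and new cluster proportions.
        for j in range(k):
            members = assign == j
            if members.any():
                centers[j] = np.sum(h[members] * w[members]) / np.sum(h[members])
            p[j] = members.mean()
    return centers, assign
```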
1612.01543#50
1612.01543
[ "1510.03009" ]
1612.01064#0
Trained Ternary Quantization
Published as a conference paper at ICLR 2017
# TRAINED TERNARY QUANTIZATION
Chenzhuo Zhu∗ Tsinghua University [email protected]
Song Han Stanford University [email protected]
Huizi Mao Stanford University [email protected]
1612.01064#1
1612.01064
[ "1502.03167" ]
1612.01064#1
Trained Ternary Quantization
William J. Dally Stanford University NVIDIA [email protected]
# ABSTRACT
Deep neural networks are widely used in machine learning applications. However, large neural network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we propose Trained Ternary Quantization (TTQ), a method that reduces the precision of weights in neural networks to ternary values. This method has very little accuracy degradation and can even improve the accuracy of some models (32-, 44-, and 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. Our AlexNet model is trained from scratch, which means it is as easy to train as a normal full-precision model. We highlight that our trained quantization method learns both the ternary values and the ternary assignment. During inference, only the ternary values (2-bit weights) and scaling factors are needed, so our models are nearly 16× smaller than full-precision models. Our ternary models can also be viewed as sparse binary weight networks, which can potentially be accelerated with custom circuits. Experiments on CIFAR-10 show that the ternary models obtained by our trained quantization method outperform full-precision ResNet-32, 44, and 56 by 0.04%, 0.16%, and 0.36%, respectively. On ImageNet, our model outperforms the full-precision AlexNet model by 0.3% Top-1 accuracy and outperforms previous ternary models by 3%.
1612.01064#0
1612.01064#2
1612.01064
[ "1502.03167" ]
1612.01064#2
Trained Ternary Quantization
# INTRODUCTION
Deep neural networks are becoming the preferred approach for many machine learning applications. However, as networks get deeper, deploying a network with a large number of parameters on a small device becomes increasingly difficult. Much work has been done to reduce the size of networks. Half-precision networks (Amodei et al., 2015) cut the size of neural networks in half. XNOR-Net (Rastegari et al., 2016), DoReFa-Net (Zhou et al., 2016) and network binarization (Courbariaux et al., 2015; Lin et al., 2015) use aggressively quantized weights, activations and gradients to further reduce computation during training. While weight binarization benefits
1612.01064#1
1612.01064#3
1612.01064
[ "1502.03167" ]
1612.01064#3
Trained Ternary Quantization
from 32× smaller model size, the extreme compression rate comes with a loss of accuracy. Hubara et al. (2016) and Li & Liu (2016) propose ternary weight networks to trade off between model size and accuracy. In this paper, we propose Trained Ternary Quantization, which uses two full-precision scaling coefficients W_l^p and W_l^n for each layer l, and quantizes the weights to {-W_l^n, 0, +W_l^p} instead of the traditional {-1, 0, +1} or {-E, 0, +E}, where E is the mean of the absolute weight values and is not learned. Our positive and negative weights have different absolute values W_l^p and W_l^n that are trainable parameters. We also maintain latent full-precision weights at training time, and discard them at test time. We back-propagate the gradient to both W_l^p, W_l^n and to the latent full-precision weights. This makes it possible to adjust the ternary assignment (i.e., which of the three values a weight is assigned). Our quantization method achieves higher accuracy on the CIFAR-10 and ImageNet datasets. For AlexNet on the ImageNet dataset, our method outperforms the previously state-of-the-art ternary network (Li &
1612.01064#2
1612.01064#4
1612.01064
[ "1502.03167" ]
1612.01064#4
Trained Ternary Quantization
∗Work done while at Stanford CVA lab.
Liu, 2016) by 3.0% Top-1 accuracy and the full-precision model by 1.6%. By converting most of the parameters to 2-bit values, we also compress the network by about 16×. Moreover, the advantage of few multiplications still remains, because W_l^p and W_l^n are fixed for each layer during inference. On custom hardware, multiplications can be pre-computed on activations, so only two multiplications per activation are required.
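To make the last point concrete, here is an illustrative sketch (mine, not the authors' custom-hardware implementation) of how a dot product with ternary weights needs only two multiplications, one per scaling factor:

```python
import numpy as np

def ternary_dot(x, ternary, w_p, w_n):
    # ternary holds {+1, 0, -1}; the actual weights are +w_p, 0 or -w_n.
    # All per-element work is additions; the two scaling factors are applied once each.
    pos_sum = x[ternary > 0].sum()
    neg_sum = x[ternary < 0].sum()
    return w_p * pos_sum - w_n * neg_sum
```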
1612.01064#3
1612.01064#5
1612.01064
[ "1502.03167" ]
1612.01064#5
Trained Ternary Quantization
# 2 MOTIVATIONS
Deep neural networks, once deployed to mobile devices, have the advantage of lower latency, no reliance on the network, and better user privacy. However, energy efficiency becomes the bottleneck for deploying deep neural networks on mobile devices because mobile devices are battery constrained. Current deep neural network models consist of hundreds of millions of parameters. Reducing the size of a DNN model makes deployment on edge devices easier.
First, a smaller model means less overhead when exporting models to clients. Take autonomous driving for example: Tesla periodically copies new models from their servers to customers' cars. Smaller models require less communication in such over-the-air updates, making frequent updates more feasible. Another example is the Apple App Store: apps above 100 MB will not download until you connect to Wi-Fi.
1612.01064#4
1612.01064#6
1612.01064
[ "1502.03167" ]
1612.01064#6
Trained Ternary Quantization
It is infeasible to put a large DNN model in an app. The second issue is energy consumption. Deep learning is energy consuming, which is problematic for battery-constrained mobile devices. As a result, iOS 10 requires the iPhone to be plugged into a charger while performing photo analysis. Fetching DNN models from memory takes more than two orders of magnitude more energy than arithmetic operations. Smaller neural networks require less memory bandwidth to fetch the model, saving energy and extending battery life. The third issue is area cost. When deploying DNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model can be stored directly on-chip, and smaller models enable a smaller ASIC die.
Several previous works aimed to improve the energy and spatial efficiency of deep networks. One common strategy proven useful is to quantize 32-bit weights to one or two bits, which greatly reduces model size and saves memory references. However, experimental results show that compressed weights usually come with degraded performance, which is a great loss for performance-sensitive applications. The contradiction between compression and performance motivates us to work on trained ternary quantization, minimizing the performance degradation of deep neural networks while saving as much energy and space as possible.
# 3 RELATED WORK
3.1 BINARY NEURAL NETWORK (BNN)
Lin et al. (2015) proposed binary and ternary connections to compress neural networks and speed up computation during inference. They used similar probabilistic methods to convert 32-bit weights into binary or ternary values, defined
1612.01064#5
1612.01064#7
1612.01064
[ "1502.03167" ]
1612.01064#7
Trained Ternary Quantization
as:
w^b \sim \mathrm{Bernoulli}\!\left(\frac{\tilde{w} + 1}{2}\right) \times 2 - 1, \qquad w^t \sim \mathrm{Bernoulli}(|\tilde{w}|) \times \mathrm{sign}(\tilde{w}). \quad (1)
Here w^b and w^t denote binary and ternary weights after quantization, and \tilde{w} denotes the latent full-precision weight. During back-propagation, as the above quantization equations are not differentiable, derivatives of the expectations of the Bernoulli distribution are computed instead, yielding the identity function:
1612.01064#6
1612.01064#8
1612.01064
[ "1502.03167" ]
1612.01064#8
Trained Ternary Quantization
\frac{\partial L}{\partial \tilde{w}} = \frac{\partial L}{\partial w^b} = \frac{\partial L}{\partial w^t}. \quad (2)
Here L is the loss to optimize. For a BNN with binary connections, only the quantized binary values are needed for inference, so a 32× smaller model can be deployed.
3.2 DOREFA-NET
Zhou et al. (2016) proposed DoReFa-Net, which quantizes weights, activations and gradients of neural networks using different bit widths.
1612.01064#7
1612.01064#9
1612.01064
[ "1502.03167" ]
1612.01064#9
Trained Ternary Quantization
Therefore, with specifically designed low-bit multiplication algorithms or hardware, both training and inference can be accelerated. They also introduced a much simpler method to quantize 32-bit weights to binary values, defined as:
w^b = E(|\tilde{w}|) \times \mathrm{sign}(\tilde{w}). \quad (3)
Here E(|\tilde{w}|), the mean of the absolute values of the full-precision weights \tilde{w}, acts as a layer-wise scaling factor. During back-propagation, Equation 2 still applies.
3.3 TERNARY WEIGHT NETWORKS
Li & Liu (2016) proposed TWN (Ternary Weight Networks), which reduces the accuracy loss of binary networks by introducing zero as a third quantized value. They use two symmetric thresholds \pm\Delta_l and a scaling factor W_l for each layer l to quantize weights into \{-W_l, 0, +W_l\}:
w_l^t = \begin{cases} +W_l & : \tilde{w}_l > \Delta_l \\ 0 & : |\tilde{w}_l| \le \Delta_l \\ -W_l & : \tilde{w}_l < -\Delta_l \end{cases} \quad (4)
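The two quantizers above can be sketched in a few lines of NumPy. This is my own illustrative code, not from either paper; for the TWN rule, the layer-wise threshold and scale are passed in as arguments and are chosen as described next, in Eq. (5).

```python
import numpy as np

def dorefa_binarize(w):
    # Eq. (3): sign pattern scaled by the mean absolute value of the layer.
    return np.mean(np.abs(w)) * np.sign(w)

def twn_ternarize(w, delta, scale):
    # Eq. (4): {+W_l, 0, -W_l} with a symmetric threshold +/- delta.
    return np.where(w > delta, scale, np.where(w < -delta, -scale, 0.0))
```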
1612.01064#8
1612.01064#10
1612.01064
[ "1502.03167" ]
1612.01064#10
Trained Ternary Quantization
They then solve an optimization problem, minimizing the L2 distance between the full-precision and ternary weights, to obtain layer-wise values of W_l and \Delta_l:
\Delta_l = 0.7 \times E(|\tilde{w}_l|), \qquad W_l = \underset{i \in \{i \,:\, |\tilde{w}_l(i)| > \Delta_l\}}{E}\big(|\tilde{w}_l(i)|\big). \quad (5)
And again Equation 2 is used to calculate gradients. While an additional bit is required for ternary weights, TWN achieves a validation accuracy that is very close to that of full-precision networks, according to their paper.
3.4 DEEP COMPRESSION
Han et al. (2015) proposed deep compression to prune away trivial connections and reduce the precision of weights. Unlike the above models, which use zero or symmetric thresholds to quantize high-precision weights, Deep Compression uses clusters to categorize weights into groups. In Deep Compression, low-precision weights are fine-tuned from a pre-trained full-precision network; the assignment of each weight is established at the beginning and stays unchanged, while the representative value of each cluster is updated throughout fine-tuning.
# 4 METHOD
Our method is illustrated in Figure 1. First, we normalize the full-precision weights to the range [-1, +1] by dividing each weight by the maximum weight. Next, we quantize the intermediate full-resolution weights to {-1, 0, +1} by thresholding. The threshold factor t is a hyper-parameter that is the same across all the layers in order to reduce the search space. Finally, we perform trained quantization by back-propagating two gradients, as shown by the dashed lines in Figure 1. We back-propagate gradient1 to the full-resolution weights and gradient2 to the scaling
1612.01064#9
1612.01064#11
1612.01064
[ "1502.03167" ]
1612.01064#11
Trained Ternary Quantization
coefficients. The former enables learning the ternary assignments, and the latter enables learning the ternary values. At inference time, we throw away the full-resolution weights and only use the ternary weights.
4.1 LEARNING BOTH TERNARY VALUES AND TERNARY ASSIGNMENTS
During gradient descent we learn both the quantized ternary weights (the codebook) and which of these values is assigned to each weight (the codebook index).
1612.01064#10
1612.01064#12
1612.01064
[ "1502.03167" ]
1612.01064#12
Trained Ternary Quantization
Figure 1: Overview of the trained ternary quantization procedure.
To learn the ternary values (the codebook), we introduce two quantization factors W_l^p and W_l^n for positive and negative weights in each layer l. During feed-forward, the quantized ternary weights w_l^t are calculated as:
w_l^t = \begin{cases} +W_l^p & : \tilde{w}_l > \Delta_l \\ 0 & : |\tilde{w}_l| \le \Delta_l \\ -W_l^n & : \tilde{w}_l < -\Delta_l \end{cases} \quad (6)
Unlike previous work, where quantized weights are calculated from the 32-bit weights, the scaling coefficients W_l^p and W_l^n are two independent parameters trained together with the other parameters. Following the rule of gradient descent, the derivatives with respect to W_l^p and W_l^n are calculated as:
\frac{\partial L}{\partial W_l^p} = \sum_{i \in I_l^p} \frac{\partial L}{\partial w_l^t(i)}, \qquad \frac{\partial L}{\partial W_l^n} = \sum_{i \in I_l^n} \frac{\partial L}{\partial w_l^t(i)}. \quad (7)
Here I_l^p = \{i \mid \tilde{w}_l(i) > \Delta_l\} and I_l^n = \{i \mid \tilde{w}_l(i) < -\Delta_l\}.
1612.01064#11
1612.01064#13
1612.01064
[ "1502.03167" ]
1612.01064#13
Trained Ternary Quantization
Furthermore, because of the existence of two scaling factors, the gradients of the latent full-precision weights can no longer be calculated by Equation 2. We use scaled gradients for the 32-bit weights:
\frac{\partial L}{\partial \tilde{w}_l} = \begin{cases} W_l^p \times \frac{\partial L}{\partial w_l^t} & : \tilde{w}_l > \Delta_l \\ 1 \times \frac{\partial L}{\partial w_l^t} & : |\tilde{w}_l| \le \Delta_l \\ W_l^n \times \frac{\partial L}{\partial w_l^t} & : \tilde{w}_l < -\Delta_l \end{cases} \quad (8)
Note that we use the scalar 1 as the gradient factor for zero weights. The overall quantization process is illustrated in Figure 1. The evolution of the ternary weights from different layers during training is shown in Figure 2. We observe that as training proceeds, different layers behave differently: for the first quantized conv layer, the absolute values of W_l^p and W_l^n get smaller and sparsity gets lower, while for the last conv layer and the fully connected layer, the absolute values of W_l^p and W_l^n get larger and sparsity gets higher.
We learn the ternary assignments (the index into the codebook) by updating the latent full-resolution weights during training. This may cause the assignments to change between iterations. Note that the thresholds are not constants, as the maximal absolute values change over time. Once an updated weight crosses the threshold, the ternary assignment is changed.
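A NumPy sketch of Eqs. (6)–(8) follows. It is illustrative only: the paper's implementation is in TensorFlow/Caffe, the threshold here uses the Delta_l = t · max(|w̃_l|) heuristic described in the next subsection, and the scaling-factor gradients are summed exactly as Eq. (7) is written.

```python
import numpy as np

def ttq_forward(w_latent, w_p, w_n, t=0.05):
    delta = t * np.max(np.abs(w_latent))                    # layer-wise threshold
    w_t = np.where(w_latent > delta, w_p,
                   np.where(w_latent < -delta, -w_n, 0.0))  # Eq. (6)
    return w_t, delta

def ttq_backward(grad_wt, w_latent, w_p, w_n, delta):
    pos, neg = w_latent > delta, w_latent < -delta
    grad_wp = grad_wt[pos].sum()                            # Eq. (7), positive group
    grad_wn = grad_wt[neg].sum()                            # Eq. (7), negative group
    scale = np.where(pos, w_p, np.where(neg, w_n, 1.0))
    grad_latent = scale * grad_wt                           # Eq. (8)
    return grad_wp, grad_wn, grad_latent
```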
1612.01064#12
1612.01064#14
1612.01064
[ "1502.03167" ]
1612.01064#14
Trained Ternary Quantization
The benefits of using trained quantization factors are: i) the asymmetry of W_l^p \neq W_l^n enables neural networks to have more model capacity; ii) the quantized weights play the role of "learning rate multipliers" during back-propagation.
4.2 QUANTIZATION HEURISTIC
In previous work on ternary weight networks, Li & Liu (2016) proposed Ternary Weight Networks (TWN) using \pm\Delta_l as thresholds to reduce 32-bit weights to ternary values, where \pm\Delta_l is defined as in Equation 5. They optimized the value of \pm\Delta_l by minimizing the expected L2 distance between the full-precision and ternary weights. Instead of using a strictly optimized threshold, we adopt
1612.01064#13
1612.01064#15
1612.01064
[ "1502.03167" ]
1612.01064#15
Trained Ternary Quantization
Figure 2: Ternary weight values (above) and distributions of negative, zero, and positive weights (below) over training iterations for different layers of ResNet-20 on CIFAR-10 (legend: res1.0/conv1, res3.2/conv2, and the linear layer, each with W^n and W^p).
different heuristics: 1) use the maximum absolute value of the weights as a reference for the layer's threshold and maintain a constant factor t for all layers:
1612.01064#14
1612.01064#16
1612.01064
[ "1502.03167" ]
1612.01064#16
Trained Ternary Quantization
\Delta_l = t \times \max(|\tilde{w}|) \quad (9)
and 2) maintain a constant sparsity r for all layers throughout training. By adjusting the hyper-parameter r we are able to obtain ternary weight networks with various sparsities. We use the first method and set t to 0.05 in the experiments on CIFAR-10 and ImageNet, and use the second one to explore a wider range of sparsities in Section 5.1.1.
# 5 EXPERIMENTS
We perform our experiments on CIFAR-10 (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al., 2015). Our network is implemented in both the TensorFlow (Abadi et al., 2015) and Caffe (Jia et al., 2014) frameworks.
5.1 CIFAR-10
CIFAR-10 is an image classification benchmark containing 32×32 RGB images, with a training set of 50000 and a test set of 10000. The ResNet (He et al., 2015) structure is used for our experiments. We use parameters pre-trained from a full-precision ResNet to initialize our model. The learning rate is set to 0.1 at the beginning and scaled by 0.1 at epochs 80, 120 and 300. An L2-normalized weight decay
1612.01064#15
1612.01064#17
1612.01064
[ "1502.03167" ]
1612.01064#17
Trained Ternary Quantization
Figure 3: ResNet-20 on CIFAR-10 with different weight precisions (legend: full precision; binary weight, DoReFa-Net; ternary weight, ours); y-axis: error (%), x-axis: epochs.
of 0.0002 is used as a regularizer. Most of our models converge after 160 epochs. We take a moving average over the errors of all epochs to
1612.01064#16
1612.01064#18
1612.01064
[ "1502.03167" ]
1612.01064#18
Trained Ternary Quantization
filter out fluctuations when reporting the error rate. We compare our model with the full-precision model and a binary-weight model. We train a full-precision ResNet (He et al., 2016) on CIFAR-10 as the baseline (blue line in Figure 3). We fine-tune the trained baseline network as a 1-32-32 DoReFa-Net, where weights are 1 bit and both activations and gradients are 32 bits, which gives a significant loss of accuracy (green line).
1612.01064#17
1612.01064#19
1612.01064
[ "1502.03167" ]
1612.01064#19
Trained Ternary Quantization
Finally, we fine-tune the baseline with trained ternary weights (red line). Our model has a substantial accuracy improvement over the binary-weight model, and our loss of accuracy relative to the full-precision model is small. We also compare our model to the Ternary Weight Network (TWN) on ResNet-20; the result shows our model improves accuracy by ~0.25% on CIFAR-10.
We expand our experiments to ternarize ResNet with 32, 44 and 56 layers. All ternary models are fine-tuned from full-precision models. Our results show that we improve the accuracy of ResNet-32, ResNet-44 and ResNet-56 by 0.04%, 0.16% and 0.36%, respectively. The deeper the model, the larger the improvement. We conjecture that this is due to ternary weights providing the right model capacity and preventing overfitting for deeper networks.
Error rate (%) by model: ResNet-20, ResNet-32, ResNet-44, ResNet-56
Full resolution: 8.23, 7.67, 7.18, 6.80
Ternary (Ours): 8.87, 7.63, 7.02, 6.44
Improvement: -0.64, 0.04, 0.16, 0.36
1612.01064#18
1612.01064#20
1612.01064
[ "1502.03167" ]
1612.01064#20
Trained Ternary Quantization
Table 1: Error rates of full-precision and ternary ResNets on CIFAR-10.
5.2 IMAGENET
We further train and evaluate our model on ILSVRC12 (Russakovsky et al., 2015). ILSVRC12 is a 1000-category dataset with over 1.2 million images in the training set and 50 thousand images in the validation set; the images have various resolutions. We used a variant of the AlexNet (Krizhevsky et al., 2012) structure, removing the dropout layers and adding batch normalization (Ioffe & Szegedy, 2015), for all models in our experiments. The same variant is also used in the experiments described in the DoReFa-Net paper.
Our ternary AlexNet model uses full-precision weights for the first convolution layer and the last fully-connected layer; all other layer parameters are quantized to ternary values. We train our model on ImageNet from scratch using the Adam optimizer (Kingma & Ba, 2014). The minibatch size is set to 128. The learning rate starts at 10^-4 and is scaled by 0.2 at epochs 56 and 64. An L2-normalized weight decay of 5×10^-6 is used as a regularizer. Images are first resized to 256×256 and then randomly cropped to 224×224 before input.
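For illustration, the step schedule described above could be written as follows; only the epochs and the 0.2 factor come from the text, and the exact functional form is my own assumption.

```python
def ttq_alexnet_lr(epoch, base_lr=1e-4):
    # Scale the learning rate by 0.2 at epoch 56 and again at epoch 64.
    lr = base_lr
    if epoch >= 56:
        lr *= 0.2
    if epoch >= 64:
        lr *= 0.2
    return lr
```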
1612.01064#19
1612.01064#21
1612.01064
[ "1502.03167" ]
1612.01064#21
Trained Ternary Quantization
We report both Top-1 and Top-5 error rates on the validation set. We compare our model to a full-precision baseline, a 1-32-32 DoReFa-Net and TWN. After around 64 epochs, the validation error of our model dropped significantly compared to the other low-bit networks as well as the full-precision baseline. Finally our model reaches a Top-1 error rate of 42.5%, while DoReFa-Net gets 46.1% and TWN gets 45.5%. Furthermore, our model still outperforms the full-precision AlexNet (the batch normalization version, 44.1% according to the DoReFa-Net paper) by 1.6%, and is even better than the best AlexNet results reported (42.8%, see footnote 1). The complete results are listed in Table 2.
Error by model: Full precision, 1-bit (DoReFa), 2-bit (TWN), 2-bit (Ours)
Top-1: 42.8%, 46.1%, 45.5%, 42.5%
Top-5: 19.7%, 23.7%, 23.2%, 20.3%
Table 2: Top-1 and Top-5 error rates of AlexNet on ImageNet.
1 https://github.com/BVLC/caffe/wiki/Models-accuracy-on-ImageNet-2012-val
1612.01064#20
1612.01064#22
1612.01064
[ "1502.03167" ]
1612.01064#22
Trained Ternary Quantization
Figure 4: Training and validation accuracy of AlexNet on ImageNet (legend: DoReFa-Net, TWN, Ours; dashed lines: full precision with dropout, 42.8% Top-1 / 19.8% Top-5).
We draw the training process in Figure 4; the baseline results of AlexNet are marked with dashed lines. Our ternary model effectively reduces the gap between training and validation performance, which appears to be quite large for DoReFa-Net and TWN. This indicates that adopting trainable W_l^p and W_l^n helps reduce this gap.
We also report the results of our method on ResNet-18B in Table 3. The full-precision error rates are obtained from Facebook's implementation. Here we cite Binarized Weight Network (BWN, Rastegari et al. 2016) results with all layers quantized and TWN fine-tuned from a full-precision network, while we train our TTQ model from scratch. Compared with BWN and TWN, our method obtains a substantial improvement.
Error by model: Full precision, 1-bit (BWN), 2-bit (TWN), 2-bit (Ours)
Top-1: 30.4%, 39.2%, 34.7%, 33.4%
Top-5: 10.8%, 17.0%, 13.8%, 12.8%
Table 3: Top-1 and Top-5 error rates of ResNet-18 on ImageNet.
# 6 DISCUSSION
In this section we analyze the performance of our model with regard to weight compression and inference speed-up. These two goals are achieved by reducing bit precision and introducing sparsity. We also visualize the convolution kernels in quantized convolution layers and find that the basic patterns of edge/corner detectors are well learned from scratch even though the precision is low.
6.1 SPATIAL AND ENERGY EFFICIENCY
We reduce model storage by 16× by using ternary weights. Although switching from a binary-weight network to a ternary-weight network increases the bits per weight, it brings sparsity to the weights, which gives the potential to skip computation on zero weights and achieve higher energy efficiency.
6.1.1 TRADE-OFF BETWEEN SPARSITY AND ACCURACY
1612.01064#21
1612.01064#23
1612.01064
[ "1502.03167" ]
1612.01064#23
Trained Ternary Quantization
Figure 5 shows the relationship between sparsity and accuracy. As the sparsity of the weights grows from 0 (a pure binary-weight network) to 0.5 (a ternary network with 50% zeros), both the training and validation error decrease. Increasing sparsity beyond 50% reduces the model capacity too far, increasing error. The minimum error occurs with sparsity between 30% and 50%.
We introduce only one hyper-parameter to reduce the search space. This hyper-parameter can be either the sparsity, or the threshold t w.r.t. the max value in Equation 6. We find that using the threshold produces better results. This is because fixing the threshold allows the sparsity of each layer to vary (see Figure 2).
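The two ways of setting this single hyper-parameter can be sketched as follows (my own illustrative code; using np.quantile is one possible way, assumed here, to realize a target sparsity r):

```python
import numpy as np

def delta_from_threshold(w, t=0.05):
    # Heuristic 1 (Eq. 9): a fixed fraction t of the layer's maximum magnitude.
    return t * np.max(np.abs(w))

def delta_from_sparsity(w, r=0.5):
    # Heuristic 2: choose the threshold so that a fraction r of the weights becomes zero.
    return np.quantile(np.abs(w), r)
```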
1612.01064#22
1612.01064#24
1612.01064
[ "1502.03167" ]
1612.01064#24
Trained Ternary Quantization
We ï¬ nd that using threshold produces better results. This is because ï¬ xing the threshold allows the sparsity of each layer to vary (Figure refï¬ g:weights). 7 Published as a conference paper at ICLR 2017 # Validation Error # Train Error 18% 16% 14% 12% 10% _ Full Precision 8% 8% Error Rate 6% 4% 2% 0% w/o pruning 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% Sparsity: percentage of zero weights Figure 5: v.s. Sparsity on ResNet-20 Figure 5:
1612.01064#23
1612.01064#25
1612.01064
[ "1502.03167" ]
1612.01064#25
Trained Ternary Quantization
Accuracy v.s. Sparsity on ResNet-20 # 6.1.2 SPARSITY AND EFFICIENCY OF ALEXNET We further analyze parameters from our AlexNet model. We calculate layer-wise density (complement of sparsity) as shown in Table 4. Despite we use different W p for each layer, ternary weights can be pre-computed when fetched from memory, thus multiplications during convolution and inner product process are still saved. Compared to Deep Compression, we accelerate inference speed using ternary values and more importantly, we reduce energy consumption of inference by saving memory references and multiplications, while achieving higher accuracy. We notice that without all quantized layers sharing the same t for Equation 9, our model achieves considerable sparsity in convolution layers where the majority of computations takes place. Therefore we are able to squeeze forward time to less than 30% of full precision networks. As for spatial compression, by substituting 32-bit weights with 2-bit ternary weights, our model is approximately 16à smaller than original 32-bit AlexNet. 6.2 KERNEL VISUALIZATION We visualize quantized convolution kernels in Figure 6. The left matrix is kernels from the second convolution layer (5 à 5) and the right one is from the third (3 à 3). We pick ï¬ rst 10 input channels and ï¬ rst 10 output channels to display for each layer.
1612.01064#24
1612.01064#26
1612.01064
[ "1502.03167" ]
1612.01064#26
Trained Ternary Quantization
Grey, black and white color represent zero, negative and positive weights respectively. We observe similar ï¬ lter patterns as full precision AlexNet. Edge and corner detectors of various directions can be found among listed kernels. While these patterns are important for convolution neural networks, the precision of each weight is not. Ternary value ï¬ lters are capable enough extracting key features after a full precision ï¬ rst convolution layer while saving unnecessary storage. Furthermore, we ï¬ nd that there are a number of empty ï¬ lters (all zeros) or ï¬ lters with single non-zero value in convolution layers. More aggressive pruning can be applied to prune away these redundant kernels to further compress and speed up our model. Layer conv1 conv2 conv3 conv4 conv5 conv total fc1 fc2 fc3 fc total All total Pruning (NIPSâ 15) Density Width Density Width 8 bit 100% 32 bit 8 bit 100% 32 bit 8 bit 100% 32 bit 8 bit 100% 32 bit 8 bit 100% 32 bit 100% - 5 bit 100% 32 bit 5 bit 100% 32 bit 5 bit 100% 32 bit - 100% - 100% Full precision 84% 38% 35% 37% 37% 37% 9% 9% 25% 10% 11% - - - Ours Density Width 32 bit 100% 2 bit 23% 2 bit 24% 2 bit 40% 2 bit 43% - 33% 2 bit 30% 2 bit 36% 32 bit 100% - 37% - 37% Table 4: Alexnet layer-wise sparsity
1612.01064#25
1612.01064#27
1612.01064
[ "1502.03167" ]
1612.01064#27
Trained Ternary Quantization
8 Published as a conference paper at ICLR 2017 Published as a conference paper at ICLR 2017 Figure 6: Visualization of kernels from Ternary AlexNet trained from Imagenet. # 7 CONCLUSION We introduce a novel neural network quantization method that compresses network weights to ternary values. We introduce two trained scaling coefï¬ cients W l n for each layer and train these coefï¬ cients using back-propagation. During training, the gradients are back-propagated both to the latent full-resolution weights and to the scaling coefï¬ cients. We use layer-wise thresholds that are proportional to the maximum absolute values to quantize the weights. When deploying the ternary network, only the ternary weights and scaling coefï¬ cients are needed, which reducing parameter size by at least 16à .
1612.01064#26
1612.01064#28
1612.01064
[ "1502.03167" ]
1612.01064#28
Trained Ternary Quantization
Experiments show that our model reaches or even surpasses the accuracy of full precision models on both CIFAR-10 and ImageNet dataset. On ImageNet we exceed the accuracy of prior ternary networks (TWN) by 3%. 9 Published as a conference paper at ICLR 2017 # REFERENCES Martà n Abadi and et. al o. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorï¬ ow.org.
1612.01064#27
1612.01064#29
1612.01064
[ "1502.03167" ]
1612.01064#29
Trained Ternary Quantization
Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595, 2015. Matthieu Courbariaux, Itay Hubara, COM Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training neural networks with weights and activations constrained to+ 1 or-. Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David.
1612.01064#28
1612.01064#30
1612.01064
[ "1502.03167" ]
1612.01064#30
Trained Ternary Quantization
Binaryconnect: Training deep neural networks In Advances in Neural Information Processing Systems, pp. with binary weights during propagations. 3123â 3131, 2015. Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. CoRR, abs/1510.00149, 2, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural net- works: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
1612.01064#29
1612.01064#31
1612.01064
[ "1502.03167" ]
1612.01064#31
Trained Ternary Quantization
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadar- rama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
1612.01064#30
1612.01064#32
1612.01064
[ "1502.03167" ]
1612.01064#32
Trained Ternary Quantization
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. In F. Pereira, C. classiï¬ cation with and pp. URL http://papers.nips.cc/paper/ Imagenet deep convolutional neural networks. K. Q.
1612.01064#31
1612.01064#33
1612.01064
[ "1502.03167" ]
1612.01064#33
Trained Ternary Quantization
Weinberger 1097â 1105. Curran Associates, 4824-imagenet-classification-with-deep-convolutional-neural-networks. pdf. J. C. Burges, L. Bottou, Information Processing Systems 25, (eds.), Advances Inc., in Neural 2012. Fengfu Li and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016. Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classiï¬ cation using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211â 252, 2015. doi: 10.1007/s11263-015-0816-y. Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou.
1612.01064#32
1612.01064#34
1612.01064
[ "1502.03167" ]
1612.01064#34
Trained Ternary Quantization
Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016. 10
1612.01064#33
1612.01064
[ "1502.03167" ]
1611.10012#0
Speed/accuracy trade-offs for modern convolutional object detectors
# Speed/accuracy trade-offs for modern convolutional object detectors
Vivek Rathod, Ian Fischer, Chen Sun, Zbigniew Wojna, Menglong Zhu, Yang Song, Anoop Korattikara, Sergio Guadarrama, Kevin Murphy
Google Research
# Abstract
The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed/memory/accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [31], R-FCN [6] and SSD [26] systems, which we view as "meta-architectures", and trace out the speed/accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum, where speed and memory are critical, we present a detector that achieves real-time speeds and can be deployed on a mobile device. On the opposite end, in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.
1611.10012#1
1611.10012
[ "1512.00567" ]