# Towards the Limit of Network Quantization

Yoojin Choi, Mostafa El-Khamy, Jungwon Lee. Published as a conference paper at ICLR 2017.

# ABSTRACT

Network quantization is one of the network compression techniques for reducing the redundancy of deep neural networks. It reduces the number of distinct network parameter values by quantization in order to save storage. In this paper, we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative relation of quantization errors to the neural network loss function and identify that the Hessian-weighted distortion measure is locally the right objective function for the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When optimal variable-length binary codes, e.g., Huffman codes, are employed for further compression, we derive that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information theory, and we consequently propose two solutions of ECSQ for network quantization: uniform quantization and an iterative solution similar to Lloyd's algorithm. Finally, using simple uniform quantization followed by Huffman coding, we show from our experiments that compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively.

Figure 2: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning, for LeNet and 32-layer ResNet, when the Hessian is computed with 50,000 or 1,000 samples and when the square roots of the second moment estimates of gradients are used in place of the Hessian.

Figure 2 shows the performance of Hessian-weighted k-means clustering when the Hessian is computed with a small number of samples (1,000). Observe that even a Hessian computed with few samples yields almost the same performance. We also show the performance of Hessian-weighted k-means clustering when an alternative to the Hessian is used, as explained in Section 3.5: the square roots of the second moment estimates of gradients are used instead of the Hessian, and this alternative provides performance similar to using the Hessian itself.
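To make the clustering just described concrete, the following is an illustrative NumPy sketch (our own, not the authors' code) of Hessian-weighted k-means, with the per-parameter weights taken from the square roots of Adam-style second moment estimates as a stand-in for the diagonal Hessian; all names here are hypothetical.

```python
import numpy as np

def hessian_weighted_kmeans(w, h, k, iters=50):
    """Weighted k-means over flattened network parameters w (shape (N,)),
    with per-parameter importance h (e.g. diagonal Hessian estimates, or
    sqrt of gradient second moments); returns k centers and assignments."""
    centers = np.quantile(w, np.linspace(0.0, 1.0, k))  # spread initial centers
    for _ in range(iters):
        # Assignment: nearest center in plain squared distance; the h_ii
        # factor is constant across centers, so it drops out of the argmin.
        assign = np.argmin((w[:, None] - centers[None, :]) ** 2, axis=1)
        # Update: each center becomes the Hessian-weighted mean of its members.
        for j in range(k):
            m = assign == j
            if m.any():
                centers[j] = np.sum(h[m] * w[m]) / np.sum(h[m])
    return centers, assign

# Stand-in for the diagonal Hessian: sqrt of the second moment estimate v of
# the gradients, as maintained by Adam-like optimizers (per Section 3.5):
# h = np.sqrt(v) + 1e-8
```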
In Table 1, we summarize the compression ratios that we can achieve with different network quantization methods for pruned models. The original network parameters are 32-bit floating-point numbers. Using simple uniform quantization followed by Huffman coding, we achieve compression ratios of 51.25, 22.17 and 40.65 (i.e., the compressed model sizes are 1.95%, 4.51% and 2.46% of the original model sizes) for LeNet, 32-layer ResNet and AlexNet, respectively, at no or marginal performance loss. Observe that the loss in the compressed AlexNet is mainly due to pruning. Here, we also compare our network quantization results to the ones in Han et al. (2015a). Note that layer-by-layer quantization with k-means clustering is evaluated in Han et al. (2015a), while our quantization schemes including k-means clustering are employed to quantize the network parameters of all layers together at once (see Section 3.6).
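For intuition on how an average codeword length translates into ratios like these, here is a back-of-the-envelope size model. The accounting below (a keep fraction after pruning plus a flat per-weight sparse-index overhead) is our own simplification for illustration, not the paper's exact bookkeeping, and the numbers are hypothetical.

```python
def compression_ratio(n_params, keep_frac, avg_codeword_bits, index_bits=5.0):
    """Rough overall ratio for pruning + quantization + Huffman coding,
    ignoring the (small) codebook; all arguments here are illustrative."""
    original_bits = n_params * 32.0                  # 32-bit float baseline
    kept = n_params * keep_frac                      # weights left after pruning
    compressed_bits = kept * (avg_codeword_bits + index_bits)
    return original_bits / compressed_bits

# e.g. keeping ~8% of weights at ~2.4 bits each plus index overhead:
# compression_ratio(61e6, 0.08, 2.4) -> about 54x
```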
# 6 CONCLUSION
This paper investigates the quantization problem of network parameters in deep neural networks. We identify the suboptimality of the conventional quantization method using k-means clustering and design new network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. In particular, we analytically show that the Hessian can be used as a measure of the importance of network parameters, and we propose to minimize Hessian-weighted quantization errors on average when clustering network parameters to quantize. Hessian-weighting is beneficial when quantizing all of the network parameters together at once, since it properly handles the different impact of quantization errors not only within layers but also across layers. Furthermore, we make a connection from the network quantization problem to the entropy-constrained data compression problem in information theory and push the compression ratio to the limit that information theory provides. Two efficient heuristic solutions are presented to this end, i.e., uniform quantization and an iterative solution for ECSQ. Our experimental results show that the proposed network quantization schemes provide considerable gains over the conventional method using k-means clustering, in particular for large and deep neural networks.
Table 1: Summary of network quantization results with Huffman coding for pruned models.

| Network | Method | Accuracy (%) | Compression ratio |
|---|---|---|---|
| LeNet | Original model | 99.25 | - |
| LeNet | Pruned model | 99.27 | 10.13 |
| LeNet | k-means | 99.27 | 44.58 |
| LeNet | Hessian-weighted k-means | 99.27 | 47.16 |
| LeNet | Uniform quantization | 99.28 | 51.25 |
| LeNet | Iterative ECSQ | 99.27 | 49.01 |
| LeNet | Deep compression (Han et al., 2015a) | 99.26 | 39.00 |
| 32-layer ResNet | Original model | 92.58 | - |
| 32-layer ResNet | Pruned model | 92.58 | 4.52 |
| 32-layer ResNet | k-means | 92.64 | 18.25 |
| 32-layer ResNet | Hessian-weighted k-means | 92.67 | 20.51 |
| 32-layer ResNet | Uniform quantization | 92.68 | 22.17 |
| 32-layer ResNet | Iterative ECSQ | 92.73 | 21.01 |
| 32-layer ResNet | Deep compression (Han et al., 2015a) | N/A | N/A |
| AlexNet | Original model | 57.16 | - |
| AlexNet | Pruned model | 56.00 | 7.91 |
| AlexNet | k-means | 56.12 | 30.53 |
| AlexNet | Alt-Hessian-weighted k-means | 56.04 | 33.71 |
| AlexNet | Uniform quantization | 56.20 | 40.65 |
| AlexNet | Deep compression (Han et al., 2015a) | 57.22 | 35.00 |

The k-means, Hessian-weighted k-means, uniform quantization and iterative ECSQ rows correspond to pruning + quantization of all layers + Huffman coding.

# REFERENCES
... In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1131-1135, 2015.
Sue Becker and Yann Le Cun. Improving the convergence of back-propagation learning with second order methods. In Proceedings of the Connectionist Models Summer School, pp. 29-37. San Mateo, CA: Morgan Kaufmann, 1988.
Philip A Chou, Tom Lookabaugh, and Robert M Gray. Entropy-constrained vector quantization. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(1):31-42, 1989.
Matthieu Courbariaux, Jean-Pierre David, and Yoshua Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123-3131, 2015.
Thomas M Cover and Joy A Thomas. Elements of Information Theory. John Wiley & Sons, 2012.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.
Herbert Gish and John Pierce. Asymptotically efficient quantizing. IEEE Transactions on Information Theory, 14(5):676-683, 1968.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1737-1746, 2015.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015b.
Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164-171, 1993.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference, 2014.
Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. arXiv preprint arXiv:1511.06530, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
Yann Le Cun. Modèles connexionnistes de l'apprentissage. PhD thesis, Paris 6, 1987.
Vadim Lebedev and Victor Lempitsky. Fast ConvNets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554-2564, 2016.
Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv preprint arXiv:1412.6553, 2014.
Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598-605, 1989.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
Darryl D Lin, Sachin S Talathi, and V Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. arXiv preprint arXiv:1511.06393, 2015a.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015b.
Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 806-814, 2015.
Michael C Mozer and Paul Smolensky. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Advances in Neural Information Processing Systems, pp. 107-115, 1989.
Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pp. 442-450, 2015.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6655-6659, 2013.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015a.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015b.
Cheng Tai, Tong Xiao, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on CPUs. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.
Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2074-2082, 2016.
Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pp. 2365-2369, 2013.
Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1476-1483, 2015.
Matthew D Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
# A APPENDIX
# A.1 FURTHER DISCUSSION ON THE HESSIAN-WEIGHTED QUANTIZATION ERROR
The diagonal approximation of the Hessian simplifies both the optimization problem and its solution for network quantization. This simplification comes with some performance loss. We conjecture that the loss due to this approximation is small: the contributions from off-diagonal terms are not always additive and their summation may end up small, whereas diagonal terms are all non-negative and therefore their contributions are always additive. We do not verify this conjecture in this paper, since solving the problem without the diagonal approximation is too complex; we would even need to compute the whole Hessian matrix, which is too costly.

Observe that the relation of the Hessian-weighted distortion measure to the quantization loss holds for any model whose objective function can be approximated as a quadratic function with respect to the parameters being quantized. Hence, the quantization methods proposed in this paper to minimize the Hessian-weighted distortion measure are not specific to neural networks; they are generally applicable to quantizing the parameters of any model whose objective function is approximately locally quadratic in its parameters.
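For reference, the underlying step is the standard second-order Taylor expansion of the loss around the trained weights; with the gradient term vanishing at a local minimum and the Hessian restricted to its diagonal, the expected loss increase reduces to the Hessian-weighted distortion:

$$
\delta L(\hat{\mathbf{w}}) \;\approx\; \frac{1}{2}\,(\hat{\mathbf{w}}-\mathbf{w})^{\top}\mathbf{H}(\mathbf{w})\,(\hat{\mathbf{w}}-\mathbf{w}) \;\approx\; \frac{1}{2}\sum_{i=1}^{N} h_{ii}\,|\hat{w}_i - w_i|^{2},
$$

where $\hat{w}_i$ is the quantized value of parameter $w_i$ and $h_{ii}$ is the corresponding diagonal element of the Hessian.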
Finally, we do not consider the interactions between quantization and retraining in our formulation in Section 3.2. We analyze the expected loss due to quantization assuming no further retraining, and we focus on finding optimal network quantization schemes that minimize the performance loss. In our experiments, however, we further fine-tune the quantized values (cluster centers) so that we can recover the loss due to quantization and improve the performance, as sketched below.
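One simple way to realize this fine-tuning (a sketch under our own assumptions: a plain SGD step on shared centers, with each center's gradient accumulated from the gradients of the weights assigned to it):

```python
import numpy as np

def finetune_centers(centers, assign, weight_grads, lr=1e-3):
    """One SGD step on the shared quantized values (cluster centers).
    weight_grads holds dL/dw for each network parameter; each center
    moves by the accumulated gradient of its member weights."""
    for j in range(len(centers)):
        m = assign == j
        if m.any():
            centers[j] -= lr * weight_grads[m].sum()
    return centers
```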
# A.2 EXPERIMENT RESULTS FOR UNIFORM QUANTIZATION
We compare uniform quantization with the non-weighted mean against uniform quantization with the Hessian-weighted mean in Figure 3, which shows that the Hessian-weighted variant slightly outperforms the non-weighted one.
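A minimal sketch of the two variants being compared (our own illustration, with hypothetical names): both place bin boundaries on the same uniform grid and differ only in how a bin's shared codeword is computed.

```python
import numpy as np

def uniform_quantize(w, h, n_levels, hessian_weighted=True):
    """Uniform quantization of parameters w with importance weights h."""
    edges = np.linspace(w.min(), w.max(), n_levels + 1)     # uniform bin edges
    bins = np.clip(np.digitize(w, edges) - 1, 0, n_levels - 1)
    codewords = np.zeros(n_levels)
    for j in range(n_levels):
        m = bins == j
        if not m.any():
            continue
        if hessian_weighted:
            codewords[j] = np.sum(h[m] * w[m]) / np.sum(h[m])  # Hessian-weighted mean
        else:
            codewords[j] = w[m].mean()                         # non-weighted mean
    return codewords[bins], bins
```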
[Figure 3: two panels, (a) Huffman coding and (b) Huffman coding + fine-tuning, each plotting accuracy (%) against average codeword length (bits) for uniform quantization with non-weighted mean and with Hessian-weighted mean.]
Figure 3: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning for 32-layer ResNet, when uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean are used.
# A.3 FURTHER DISCUSSION ON THE ITERATIVE ALGORITHM FOR ECSQ
In order to solve the ECSQ problem for network quantization, we define a Lagrangian cost function that combines the Hessian-weighted quantization distortion with the entropy of the cluster assignment. For network parameter $w_i$ assigned to cluster $j$, the individual Lagrangian cost is

$$d_\lambda(i, j) = h_{ii}\,|w_i - c_j|^2 - \lambda \log_2 p_j, \qquad (12)$$

where $c_j$ is the center of cluster $j$ and $p_j$ is the proportion of network parameters in cluster $j$; the total cost $J_\lambda(\mathcal{C}_1, \ldots, \mathcal{C}_k)$ is obtained by summing $d_\lambda(i, j)$ over all network parameters.
Algorithm 1 Iterative solution for entropy-constrained network quantization

Initialization: n ← 0
  Initialize the centers of the k clusters: c_1^(0), ..., c_k^(0)
  Initialize the proportions of the k clusters (all equal initially): p_1^(0) = ... = p_k^(0) = 1/k
repeat
  Assignment: for all network parameters i = 1 → N do
    C_l^(n+1) ← C_l^(n+1) ∪ {w_i} for l = argmin_j { h_ii |w_i − c_j^(n)|^2 − λ log_2 p_j^(n) }
  end for
  Update: for all clusters j = 1 → k do
    c_j^(n+1) ← ( Σ_{w_i ∈ C_j^(n+1)} h_ii w_i ) / ( Σ_{w_i ∈ C_j^(n+1)} h_ii )
    p_j^(n+1) ← |C_j^(n+1)| / N
  end for
  n ← n + 1
until the Lagrangian cost function J_λ decreases by less than some threshold
The entropy-constrained network quantization problem then reduces to finding the k partitions (clusters) C_1, C_2, ..., C_k that minimize the Lagrangian cost function as follows:
$$\underset{\mathcal{C}_1, \mathcal{C}_2, \ldots, \mathcal{C}_k}{\operatorname{argmin}} \; J_\lambda(\mathcal{C}_1, \mathcal{C}_2, \ldots, \mathcal{C}_k).$$
A heuristic iterative algorithm that solves this method-of-Lagrange-multipliers formulation for network quantization is presented in Algorithm 1. It is similar to Lloyd's algorithm for k-means clustering; the key difference is how network parameters are partitioned at the assignment step. In Lloyd's algorithm, the Euclidean distance (quantization error) is minimized. For ECSQ, the individual Lagrangian cost function d_λ(i, j) in (12) is minimized instead, which accounts for both the quantization error and the expected codeword length after entropy coding.
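For concreteness, here is a compact NumPy rendering of Algorithm 1 (an illustrative sketch, not the authors' code; initialization and the stopping rule are simplified relative to the description above):

```python
import numpy as np

def ecsq_quantize(w, h, k, lam, max_iter=100, tol=1e-6):
    """Iterative ECSQ: minimize sum_i h_ii*|w_i - c_j|^2 - lam*log2(p_j)."""
    N = len(w)
    centers = np.quantile(w, np.linspace(0.0, 1.0, k))
    p = np.full(k, 1.0 / k)            # cluster proportions, all equal at start
    prev_cost = np.inf
    for _ in range(max_iter):
        # Assignment step: per-parameter individual Lagrangian cost d_lambda(i, j).
        cost = h[:, None] * (w[:, None] - centers[None, :]) ** 2 \
               - lam * np.log2(np.maximum(p, 1e-12))[None, :]
        assign = np.argmin(cost, axis=1)
        total = cost[np.arange(N), assign].sum()
        # Update step: Hessian-weighted centers and new cluster proportions.
        for j in range(k):
            m = assign == j
            p[j] = m.mean()
            if m.any():
                centers[j] = np.sum(h[m] * w[m]) / np.sum(h[m])
        if prev_cost - total < tol:    # stop once the cost stops decreasing
            break
        prev_cost = total
    return centers, assign, p
```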
# Trained Ternary Quantization

Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally. Published as a conference paper at ICLR 2017.

# ABSTRACT

Deep neural networks are widely used in machine learning applications, but large network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we propose Trained Ternary Quantization (TTQ), a method that reduces the precision of weights in neural networks to ternary values. This method has very little accuracy degradation and can even improve the accuracy of some models (32-, 44- and 56-layer ResNet) on CIFAR-10 and of AlexNet on ImageNet. Our AlexNet model is trained from scratch, which means it is as easy to train as a normal full-precision model. We highlight that our trained quantization method learns both the ternary values and the ternary assignment. During inference, only the ternary values (2-bit weights) and scaling factors are needed, so our models are nearly 16× smaller than full-precision models. Our ternary models can also be viewed as sparse binary-weight networks, which can potentially be accelerated with custom circuits. Experiments on CIFAR-10 show that ternary models obtained by our trained quantization method outperform full-precision ResNet-32, 44 and 56 models by 0.04%, 0.16% and 0.36%, respectively. On ImageNet, our model outperforms the full-precision AlexNet model by 0.3% Top-1 accuracy and outperforms previous ternary models by 3%.
# 1 INTRODUCTION
Deep neural networks are becoming the preferred approach for many machine learning applications. However, as networks get deeper, deploying a network with a large number of parameters on a small device becomes increasingly difficult. Much work has been done to reduce the size of networks. Half-precision networks (Amodei et al., 2015) cut the sizes of neural networks in half. XNOR-Net (Rastegari et al., 2016), DoReFa-Net (Zhou et al., 2016) and network binarization (Courbariaux et al., 2015; Lin et al., 2015) use aggressively quantized weights, activations and gradients to further reduce computation during training. While weight binarization benefits from a 32× smaller model size, the extreme compression rate comes with a loss of accuracy. Hubara et al. (2016) and Li & Liu (2016) propose ternary weight networks to trade off between model size and accuracy.
In this paper, we propose Trained Ternary Quantization, which uses two full-precision scaling coefficients W_p^l and W_n^l for each layer l, and quantizes the weights to {−W_n^l, 0, +W_p^l} instead of the traditional {−1, 0, +1} or {−E, 0, +E}, where E is the mean absolute weight value, which is not learned. Our positive and negative weights have different absolute values W_p^l and W_n^l, which are trainable parameters. We also maintain latent full-precision weights at training time and discard them at test time. We back-propagate the gradient both to W_p^l, W_n^l and to the latent full-precision weights. This makes it possible to adjust the ternary assignment (i.e., which of the three values a weight is assigned).
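An illustrative NumPy sketch of the ternarization step follows (our reading of the method, not the released code). The layer-wise threshold shown, delta = t * max(|w|) with a constant fraction t, is one concrete choice; gradient handling is summarized in the trailing comment.

```python
import numpy as np

def ternarize(w_latent, wp, wn, t=0.05):
    """Quantize latent full-precision weights of one layer to {-wn, 0, +wp}.
    wp, wn are the layer's trainable positive/negative scaling factors."""
    delta = t * np.abs(w_latent).max()   # layer-wise threshold
    q = np.zeros_like(w_latent)
    q[w_latent > delta] = wp
    q[w_latent < -delta] = -wn
    return q

# During training, dL/dwp (resp. dL/dwn) is accumulated over the weights
# quantized to +wp (resp. -wn), and gradients also flow straight-through to
# the latent weights, so individual weights can change ternary assignment.
```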
Our quantization method achieves higher accuracy on the CIFAR-10 and ImageNet datasets. For AlexNet on the ImageNet dataset, our method outperforms the previous state-of-the-art ternary network (Li & Liu, 2016) by 3.0% Top-1 accuracy and the full-precision model by 1.6%.
*Work done while at Stanford CVA lab.
By converting most of the parameters to 2-bit values, we also compress the network by about 16×. Moreover, the advantage of few multiplications still remains, because W_p^l and W_n^l are fixed for each layer during inference. On custom hardware, multiplications can be pre-computed on activations, so only two multiplications per activation are required.
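A sketch of the inference-time arithmetic (our illustration, with hypothetical names): with the ternary weights stored as two bit-masks, each output element is computed as two masked accumulations followed by exactly two multiplications.

```python
import numpy as np

def ternary_matvec(x, mask_p, mask_n, wp, wn):
    """Compute y = (wp*mask_p - wn*mask_n) @ x with additions only inside
    the accumulations and two multiplications per output element."""
    pos = mask_p.astype(x.dtype) @ x   # sum of activations hitting +1 weights
    neg = mask_n.astype(x.dtype) @ x   # sum of activations hitting -1 weights
    return wp * pos - wn * neg
```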
# 2 MOTIVATIONS
Deep neural networks, once deployed to mobile devices, offer lower latency, no reliance on a network connection, and better user privacy. However, energy efficiency becomes the bottleneck for deploying deep neural networks on mobile devices, because mobile devices are battery-constrained. Current deep neural network models consist of hundreds of millions of parameters. Reducing the size of a DNN model makes deployment on edge devices easier.
First, a smaller model means less overhead when exporting models to clients. Take autonomous driving for example; Tesla periodically copies new models from their servers to customers' cars. Smaller models require less communication in such over-the-air updates, making frequent updates more feasible. Another example is the Apple App Store; apps above 100 MB will not download until you connect to Wi-Fi, so it is infeasible to put a large DNN model in an app. The second issue is energy consumption. Deep learning is energy-consuming, which is problematic for battery-constrained mobile devices. As a result, iOS 10 requires the iPhone to be plugged into a charger while performing photo analysis. Fetching DNN models from memory takes more than two orders of magnitude more energy than arithmetic operations. Smaller neural networks require less memory bandwidth to fetch the model, saving energy and extending battery life. The third issue is area cost. When deploying DNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model can be stored directly on-chip, and smaller models enable a smaller ASIC die.
Several previous works aimed to improve the energy and spatial efficiency of deep networks. One common strategy proven useful is to quantize 32-bit weights to one or two bits, which greatly reduces model size and saves memory references. However, experimental results show that compressed weights usually come with degraded performance, which is a significant loss for performance-sensitive applications. The contradiction between compression and performance motivates us to work on trained ternary quantization, minimizing the performance degradation of deep neural networks while saving as much energy and space as possible.
# 3 RELATED WORK
3.1 BINARY NEURAL NETWORK (BNN)
Lin et al. (2015) proposed binary and ternary connections to compress neural networks and speed up computation during inference. They used similar probabilistic methods to convert 32-bit weights into binary or ternary values, defined as:
$$w^b \sim \mathrm{Bernoulli}\!\left(\frac{\tilde{w}+1}{2}\right) \times 2 - 1, \qquad w^t \sim \mathrm{Bernoulli}(|\tilde{w}|) \times \mathrm{sign}(\tilde{w}) \quad (1)$$
Here w^b and w^t denote the binary and ternary weights after quantization, and w̃ denotes the latent full-precision weight.
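As a concrete illustration (ours, not from the original papers), a minimal NumPy sketch of Equation 1, assuming latent weights already clipped to [-1, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binarize(w):
    # w_b ~ Bernoulli((w + 1) / 2) * 2 - 1
    return rng.binomial(1, (w + 1.0) / 2.0) * 2.0 - 1.0

def stochastic_ternarize(w):
    # w_t ~ Bernoulli(|w|) * sign(w)
    return rng.binomial(1, np.abs(w)) * np.sign(w)

w = np.clip(rng.normal(0.0, 0.5, size=8), -1.0, 1.0)
print(stochastic_binarize(w))
print(stochastic_ternarize(w))
```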
During back-propagation, as the above quantization equations are not differentiable, derivatives of expectations of the Bernoulli distribution are computed instead, yielding the identity function:
$$\frac{\partial L}{\partial \tilde{w}} = \frac{\partial L}{\partial w^b} = \frac{\partial L}{\partial w^t} \quad (2)$$
Here L is the loss to optimize.
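This identity pass-through is the straight-through estimator. A minimal PyTorch sketch of it (our illustration for a deterministic sign quantizer; the paper's own experiments use TensorFlow and Caffe):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)        # forward: quantize

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out             # backward: identity, as in Equation 2

w = torch.randn(4, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)                       # gradient passed straight through
```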
For BNN with binary connections, only quantized binary values are needed for inference. Therefore a 32× smaller model can be deployed into applications.
3.2 DOREFA-NET
Zhou et al. (2016) proposed DoReFa-Net, which quantizes the weights, activations and gradients of neural networks using different bit widths. Therefore, with a specifically designed low-bit multiplication algorithm or hardware, both the training and inference stages can be accelerated.
They also introduced a much simpler method to quantize 32-bit weights to binary values, defined as:
$$w^b = E(|\tilde{w}|) \times \mathrm{sign}(\tilde{w}) \quad (3)$$
Here E(|w̃|) calculates the mean of the absolute values of the full-precision weights w̃, used as a layer-wise scaling factor. During back-propagation, Equation 2 still applies.
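A minimal NumPy sketch of Equation 3 (ours, for illustration; the function name is hypothetical):

```python
import numpy as np

def dorefa_binarize(w):
    # w_b = E(|w|) * sign(w): layer-wise scale is the mean absolute value
    return np.mean(np.abs(w)) * np.sign(w)

w = np.random.randn(6) * 0.1
print(dorefa_binarize(w))
```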
3.3 TERNARY WEIGHT NETWORKS
Li & Liu (2016) proposed TWN (Ternary Weight Networks), which reduces the accuracy loss of binary networks by introducing zero as a third quantized value. They use two symmetric thresholds ±Δ_l and a scaling factor W_l for each layer l to quantize weights into {−W_l, 0, +W_l}:
$$w^t_l = \begin{cases} W_l & : \tilde{w}_l > \Delta_l \\ 0 & : |\tilde{w}_l| \le \Delta_l \\ -W_l & : \tilde{w}_l < -\Delta_l \end{cases} \quad (4)$$
They then solve an optimization problem of minimizing the L2 distance between the full-precision and ternary weights to obtain the layer-wise values of W_l and Δ_l:
$$\Delta_l = 0.7 \times E(|\tilde{w}_l|), \qquad W_l = \mathop{E}_{i \in \{i \,:\, |\tilde{w}_l(i)| > \Delta_l\}}\left(|\tilde{w}_l(i)|\right) \quad (5)$$
Again, Equation 2 is used to calculate the gradients. While an additional bit is required for ternary weights, TWN achieves a validation accuracy that is very close to that of full-precision networks according to their paper.
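A sketch of the TWN quantizer (Equations 4 and 5) for a flat NumPy weight array; the function name is ours:

```python
import numpy as np

def twn_quantize(w):
    delta = 0.7 * np.mean(np.abs(w))          # Equation 5: threshold
    mask = np.abs(w) > delta
    W = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return W * np.sign(w) * mask              # Equation 4: {-W, 0, +W}

w = np.random.randn(1000) * 0.1
print(sorted(set(np.round(twn_quantize(w), 4))))
```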
3.4 DEEP COMPRESSION
Han et al. (2015) proposed Deep Compression to prune away trivial connections and reduce the precision of weights. Unlike the above models, which use zero or symmetric thresholds to quantize high-precision weights, Deep Compression uses clusters to categorize weights into groups. In Deep Compression, low-precision weights are fine-tuned from a pre-trained full-precision network; the assignment of each weight is established at the beginning and stays unchanged, while the representative value of each cluster is updated throughout fine-tuning.
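For contrast with threshold-based quantizers, a toy sketch of the initial clustering step (scalar k-means with linear initialization; our simplification, not the authors' code):

```python
import numpy as np

def cluster_weights(w, k=16, iters=20):
    centers = np.linspace(w.min(), w.max(), k)   # linear initialization
    for _ in range(iters):
        # Reassign each weight to its nearest cluster center.
        assign = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = w[assign == j].mean()
    return centers[assign]                       # shared-weight lookup

w = np.random.randn(10000) * 0.05
print(len(set(np.round(cluster_weights(w), 6))))  # at most k distinct values
```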
# 4 METHOD
Our method is illustrated in Figure 1. First, we normalize the full-precision weights to the range [-1, +1] by dividing each weight by the maximum weight. Next, we quantize the intermediate full-resolution weights to {-1, 0, +1} by thresholding. The threshold factor t is a hyper-parameter that is the same across all layers in order to reduce the search space. Finally, we perform trained quantization by back-propagating two gradients, as shown by the dashed lines in Figure 1. We back-propagate gradient1 to the full-resolution weights and gradient2 to the scaling coefficients. The former enables learning the ternary assignments, and the latter enables learning the ternary values.
At inference time, we throw away the full-resolution weights and only use ternary weights.
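Putting the forward pass together, a minimal NumPy sketch of the TTQ quantization (normalization as in Figure 1; the threshold and assignment rules are Equations 9 and 6 below; the rest is our simplification):

```python
import numpy as np

def ttq_forward(w_latent, Wp, Wn, t=0.05):
    w = w_latent / np.max(np.abs(w_latent))   # normalize to [-1, 1]
    delta = t * np.max(np.abs(w))             # Equation 9 (equals t here)
    return np.where(w > delta, Wp,            # Equation 6 assignment
                    np.where(w < -delta, -Wn, 0.0))

w = np.random.randn(8)
print(ttq_forward(w, Wp=1.2, Wn=0.8))
```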
4.1 LEARNING BOTH TERNARY VALUES AND TERNARY ASSIGNMENTS
During gradient descent we learn both the quantized ternary weights (the codebook) and which of these values is assigned to each weight (the codebook index).
Figure 1: Overview of the trained ternary quantization procedure.
To learn the ternary values (the codebook), we introduce two quantization factors W^p_l and W^n_l for the positive and negative weights in each layer l. During feed-forward, the quantized ternary weights w^t_l are calculated as:
$$w^t_l = \begin{cases} W^p_l & : \tilde{w}_l > \Delta_l \\ 0 & : |\tilde{w}_l| \le \Delta_l \\ -W^n_l & : \tilde{w}_l < -\Delta_l \end{cases} \quad (6)$$
Unlike previous work, where quantized weights are calculated from the 32-bit weights, the scaling coefficients W^p_l and W^n_l are two independent parameters and are trained together with the other parameters. Following the rule of gradient descent, the derivatives of W^p_l and W^n_l are calculated as:
$$\frac{\partial L}{\partial W^p_l} = \sum_{i \in I^p_l} \frac{\partial L}{\partial w^t_l(i)}, \qquad \frac{\partial L}{\partial W^n_l} = \sum_{i \in I^n_l} \frac{\partial L}{\partial w^t_l(i)} \quad (7)$$
Here I^p_l = {i | w̃_l(i) > Δ_l} and I^n_l = {i | w̃_l(i) < −Δ_l}. Furthermore, because of the existence of the two scaling factors, the gradients of the latent full-precision weights can no longer be calculated by Equation 2. We use scaled gradients for the 32-bit weights:
$$\frac{\partial L}{\partial \tilde{w}_l} = \begin{cases} W^p_l \times \dfrac{\partial L}{\partial w^t_l} & : \tilde{w}_l > \Delta_l \\ 1 \times \dfrac{\partial L}{\partial w^t_l} & : |\tilde{w}_l| \le \Delta_l \\ W^n_l \times \dfrac{\partial L}{\partial w^t_l} & : \tilde{w}_l < -\Delta_l \end{cases} \quad (8)$$
Note that we use the scalar 1 as the gradient factor for zero weights. The overall quantization process is illustrated in Figure 1. The evolution of the ternary weights from different layers during training is shown in Figure 2. We observe that as training proceeds, different layers behave differently: for the first quantized conv layer, the absolute values of W^p_l and W^n_l get smaller and sparsity gets lower, while for the last conv layer and the fully-connected layer, the absolute values of W^p_l and W^n_l get larger and sparsity gets higher.

Figure 2: Ternary weight values (above) and the distributions of negative/zero/positive weights (below) over training epochs for the first quantized conv layer, the last conv layer and the fully-connected layer of ResNet-20 on CIFAR-10.
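To make the two gradient paths concrete, a NumPy sketch of Equations 7 and 8 (our illustration, with sign conventions taken directly from Equation 7 as written; not the authors' code):

```python
import numpy as np

def ttq_backward(grad_wt, w_latent, delta, Wp, Wn):
    pos = w_latent > delta                   # index set I_l^p
    neg = w_latent < -delta                  # index set I_l^n
    grad_Wp = grad_wt[pos].sum()             # Equation 7
    grad_Wn = grad_wt[neg].sum()             # Equation 7
    scale = np.where(pos, Wp, np.where(neg, Wn, 1.0))
    grad_w = scale * grad_wt                 # Equation 8
    return grad_w, grad_Wp, grad_Wn
```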
We learn the ternary assignments (the index into the codebook) by updating the latent full-resolution weights during training. This may cause the assignments to change between iterations. Note that the thresholds are not constant, as the maximal absolute values change over time. Once an updated weight crosses the threshold, its ternary assignment changes. The benefits of using trained quantization factors are: i) the asymmetry of W^p_l ≠ W^n_l enables neural networks to have more model capacity; ii) quantized weights play the role of "learning rate multipliers" during back-propagation.
4.2 QUANTIZATION HEURISTIC
In previous work on ternary weight networks, Li & Liu (2016) proposed TWN using ±Δ_l as thresholds to reduce 32-bit weights to ternary values, where ±Δ_l is defined as in Equation 5. They optimized the value of ±Δ_l by minimizing the expectation of the L2 distance between the full-precision weights and the ternary weights. Instead of using a strictly optimized threshold, we adopt
different heuristics: 1) use the maximum absolute value of the weights as a reference for the layer's threshold and maintain a constant factor t for all layers:
$$\Delta_l = t \times \max(|\tilde{w}|) \quad (9)$$
and 2) maintain a constant sparsity r for all layers throughout training. By adjusting the hyper-parameter r we are able to obtain ternary weight networks with various sparsities. We use the first method with t = 0.05 in the experiments on the CIFAR-10 and ImageNet datasets, and use the second one to explore a wider range of sparsities in Section 6.1.1.
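One way to read the second heuristic (our interpretation; the helper name is hypothetical): choose the threshold as a quantile of the absolute weights, so that a fraction r of them lands in the zero bucket.

```python
import numpy as np

def delta_for_sparsity(w_latent, r):
    # The r-quantile of |w| puts (approximately) a fraction r of the
    # weights below the threshold, i.e. into the zero bucket.
    return np.quantile(np.abs(w_latent), r)

w = np.random.randn(100000)
delta = delta_for_sparsity(w, r=0.5)
print((np.abs(w) <= delta).mean())  # ~0.5
```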
# 5 EXPERIMENTS

We perform our experiments on CIFAR-10 (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al., 2015). Our networks are implemented in both the TensorFlow (Abadi et al., 2015) and Caffe (Jia et al., 2014) frameworks.

5.1 CIFAR-10
CIFAR-10 is an image classification benchmark containing 32×32 RGB images, with a training set of 50,000 and a test set of 10,000. A ResNet (He et al., 2015) architecture is used for our experiments.
We use parameters pre-trained from a full-precision ResNet to initialize our model. The learning rate is set to 0.1 at the beginning and scaled by 0.1 at epochs 80, 120 and 300. An L2-normalized weight decay of 0.0002 is used as a regularizer. Most of our models converge after 160 epochs. We take a moving average over the errors of all epochs to filter out fluctuations when reporting the error rate.
Figure 3: ResNet-20 on CIFAR-10 with different weight precisions: full precision, binary weight (DoReFa-Net), and ternary weight (ours).
We compare our model with the full-precision model and a binary-weight model. We train a full-precision ResNet (He et al., 2016) on CIFAR-10 as the baseline (blue line in Figure 3). We fine-tune the trained baseline network as a 1-32-32 DoReFa-Net, where weights are 1 bit and both activations and gradients are 32 bits, giving a significant loss of accuracy (green line). Finally, we fine-tune the baseline with trained ternary weights (red line). Our model has a substantial accuracy improvement over the binary-weight model, and our loss of accuracy relative to the full-precision model is small. We also compare our model to Ternary Weight Networks (TWN) on ResNet-20. The results show our model improves the accuracy by ~0.25% on CIFAR-10.
We expand our experiments to ternarize ResNet with 32, 44 and 56 layers. All ternary models are fine-tuned from full-precision models. Our results show that we improve the accuracy of ResNet-32, ResNet-44 and ResNet-56 by 0.04%, 0.16% and 0.36%, respectively. The deeper the model, the larger the improvement. We conjecture that this is due to ternary weights providing the right model capacity and preventing overfitting for deeper networks.
Model            ResNet-20   ResNet-32   ResNet-44   ResNet-56
Full precision   8.23        7.67        7.18        6.80
Ternary (Ours)   8.87        7.63        7.02        6.44
Improvement      -0.64       0.04        0.16        0.36

Table 1: Error rates (%) of full-precision and ternary ResNets on CIFAR-10
5.2 IMAGENET
We further train and evaluate our model on ILSVRC12 (Russakovsky et al., 2015). ILSVRC12 is a 1000-category dataset with over 1.2 million images in the training set and 50 thousand images in the validation set; the images have various resolutions. We used a variant of the AlexNet (Krizhevsky et al., 2012) architecture, removing the dropout layers and adding batch normalization (Ioffe & Szegedy, 2015), for all models in our experiments. The same variant is also used in the experiments described in the DoReFa-Net paper.
Our ternary model of AlexNet uses full-precision weights for the first convolution layer and the last fully-connected layer; all other layer parameters are quantized to ternary values. We train our model on ImageNet from scratch using the Adam optimizer (Kingma & Ba, 2014). The minibatch size is set to 128. The learning rate starts at 10^-4 and is scaled by 0.2 at epochs 56 and 64. An L2-normalized weight decay of 5 × 10^-6 is used as a regularizer. Images are first resized to 256 × 256, then randomly cropped to 224 × 224 before input. We report both top-1 and top-5 error rates on the validation set.
We compare our model to a full-precision baseline, a 1-32-32 DoReFa-Net and TWN. After around 64 epochs, the validation error of our model drops significantly below those of the other low-bit networks as well as the full-precision baseline. Our model finally reaches a top-1 error rate of 42.5%, while DoReFa-Net gets 46.1% and TWN gets 45.5%. Furthermore, our model still outperforms the full-precision AlexNet (the batch-normalization version, 44.1% according to the DoReFa-Net paper) by 1.6%, and is even better than the best AlexNet result reported (42.8%^1). The complete results are listed in Table 2.
Error   Full precision   1-bit (DoReFa)   2-bit (TWN)   2-bit (Ours)
Top1    42.8%            46.1%            45.5%         42.5%
Top5    19.7%            23.7%            23.2%         20.3%

Table 2: Top-1 and top-5 error rates of AlexNet on ImageNet
^1 https://github.com/BVLC/caffe/wiki/Models-accuracy-on-ImageNet-2012-val
Figure 4: Training and validation accuracy of AlexNet on ImageNet: DoReFa-Net, TWN and ours, with the full-precision baseline (with dropout) shown as dashed lines.
We plot the training process in Figure 4; the baseline results of AlexNet are marked with dashed lines. Our ternary model effectively reduces the gap between training and validation performance, which is quite large for DoReFa-Net and TWN. This indicates that adopting the trainable W^p_l and W^n_l helps the model generalize.
We also report the results of our method on ResNet-18B in Table 3. The full-precision error rates are obtained from Facebook's implementation. Here we cite the Binary Weight Network (BWN) (Rastegari et al., 2016) results with all layers quantized, and TWN fine-tuned from a full-precision network, while we train our TTQ model from scratch. Compared with BWN and TWN, our method obtains a substantial improvement.
Error   Full precision   1-bit (BWN)   2-bit (TWN)   2-bit (Ours)
Top1    30.4%            39.2%         34.7%         33.4%
Top5    10.8%            17.0%         13.8%         12.8%
Table 3: Top-1 and top-5 error rates of ResNet-18 on ImageNet
# 6 DISCUSSION
In this section we analyze the performance of our model with regard to weight compression and inference speed-up. These two goals are achieved by reducing bit precision and introducing sparsity. We also visualize the convolution kernels in the quantized convolution layers and find that the basic patterns of edge/corner detectors are still well learned from scratch even when the precision is low.
6.1 SPATIAL AND ENERGY EFFICIENCY
We save 16× storage for models by using ternary weights. Although switching from a binary-weight network to a ternary-weight network increases the number of bits per weight, it brings sparsity to the weights, which gives the potential to skip the computation on zero weights and achieve higher energy efficiency.
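As a back-of-the-envelope illustration of the 16× figure, ternary weights can be packed four to a byte next to two scaling factors per layer; the packing scheme below is hypothetical, not the paper's storage format:

```python
import numpy as np

def pack_ternary(wt, Wp, Wn):
    # Encode each weight in 2 bits (00 -> 0, 01 -> +Wp, 10 -> -Wn),
    # four weights per byte: ~16x smaller than 32-bit floats.
    codes = np.where(wt > 0, 1, np.where(wt < 0, 2, 0)).astype(np.uint8)
    codes = np.pad(codes, (0, -len(codes) % 4))
    packed = (codes[0::4] | codes[1::4] << 2
              | codes[2::4] << 4 | codes[3::4] << 6)
    return packed, np.float32(Wp), np.float32(Wn)

wt = np.random.choice([-0.8, 0.0, 1.2], size=10)
packed, Wp, Wn = pack_ternary(wt, 1.2, 0.8)
print(len(packed), "bytes for", len(wt), "weights")
```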
6.1.1 TRADE-OFF BETWEEN SPARSITY AND ACCURACY
Figure 5 shows the relationship between sparsity and accuracy. As the sparsity of weights grows from 0 (a pure binary-weight network) to 0.5 (a ternary network with 50% zeros), both the training and validation error decrease. Increasing sparsity beyond 50% reduces the model capacity too far, increasing error. Minimum error occurs with sparsity between 30% and 50%.
Figure 5: Accuracy vs. sparsity on ResNet-20: error rate as a function of the percentage of zero weights, with the full-precision baseline shown for reference.
6.1.2 SPARSITY AND EFFICIENCY OF ALEXNET
We further analyze the parameters of our AlexNet model. We calculate the layer-wise density (the complement of sparsity), as shown in Table 4. Although we use different W^p_l and W^n_l for each layer, ternary weights can be pre-computed when fetched from memory, so the multiplications during the convolution and inner-product processes are still saved. Compared to Deep Compression, we accelerate inference using ternary values and, more importantly, we reduce the energy consumption of inference by saving memory references and multiplications, while achieving higher accuracy.
We notice that, without all quantized layers sharing the same t in Equation 9, our model achieves considerable sparsity in the convolution layers, where the majority of the computation takes place. Therefore we are able to squeeze the forward time to less than 30% of that of full-precision networks.
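A sketch of why the remaining multiplications are cheap at inference (our illustration): with weights in {−W^n, 0, +W^p}, each output needs only additions over the non-zero positions plus two final multiplications.

```python
import numpy as np

def ternary_dot(x, wt):
    # wt holds codes in {-1, 0, +1}; Wp and Wn are applied once at the end.
    Wp, Wn = 1.2, 0.8               # hypothetical trained scales
    pos_sum = x[wt > 0].sum()       # additions only
    neg_sum = x[wt < 0].sum()
    return Wp * pos_sum - Wn * neg_sum

x = np.random.randn(16)
wt = np.random.choice([-1, 0, 1], size=16)
print(ternary_dot(x, wt))
```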
As for spatial compression, by substituting the 32-bit weights with 2-bit ternary weights, our model is approximately 16× smaller than the original 32-bit AlexNet.
6.2 KERNEL VISUALIZATION
We visualize the quantized convolution kernels in Figure 6. The left matrix shows kernels from the second convolution layer (5 × 5) and the right one from the third (3 × 3). We display the first 10 input channels and the first 10 output channels of each layer. Grey, black and white represent zero, negative and positive weights, respectively.
We observe filter patterns similar to those of the full-precision AlexNet: edge and corner detectors of various directions can be found among the listed kernels. While these patterns are important for convolutional neural networks, the precision of each weight is not. Ternary-valued filters are capable enough of extracting key features after a full-precision first convolution layer, while saving unnecessary storage.
Furthermore, we find that the convolution layers contain a number of empty filters (all zeros) and filters with a single non-zero value. More aggressive pruning can be applied to remove these redundant kernels and further compress and speed up our model.
1612.01064 | 28 | Layer conv1 conv2 conv3 conv4 conv5 conv total fc1 fc2 fc3 fc total All total Pruning (NIPSâ15) Density Width Density Width 8 bit 100% 32 bit 8 bit 100% 32 bit 8 bit 100% 32 bit 8 bit 100% 32 bit 8 bit 100% 32 bit 100% - 5 bit 100% 32 bit 5 bit 100% 32 bit 5 bit 100% 32 bit - 100% - 100% Full precision 84% 38% 35% 37% 37% 37% 9% 9% 25% 10% 11% - - - Ours Density Width 32 bit 100% 2 bit 23% 2 bit 24% 2 bit 40% 2 bit 43% - 33% 2 bit 30% 2 bit 36% 32 bit 100% - 37% - 37%
Table 4: AlexNet layer-wise sparsity
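To make the Density/Width columns above concrete, a minimal sketch of the storage estimate they imply; the layer size below is hypothetical, and real totals also pay for sparse-index and scaling-factor overhead:

```python
def layer_bits(num_weights, density, width_bits):
    # Approximate storage for one layer: surviving weights x bit-width.
    return num_weights * density * width_bits

# Hypothetical conv layer with 1e6 weights, kept at 23% density in 2-bit form:
full = layer_bits(1e6, 1.00, 32)  # full precision: 3.2e7 bits
ours = layer_bits(1e6, 0.23, 2)   # ternary + sparsity: 4.6e5 bits
print(full / ours)                # -> ~69.6x for this layer alone
```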
Figure 6: Visualization of kernels from Ternary AlexNet trained from ImageNet.
# 7 CONCLUSION | 1612.01064#28 | Trained Ternary Quantization | Deep neural networks are widely used in machine learning applications.
However, large neural network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we
propose Trained Ternary Quantization (TTQ), a method that can reduce the
precision of weights in neural networks to ternary values. This method has very
little accuracy degradation and can even improve the accuracy of some models
(32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. Our AlexNet
model is trained from scratch, which means it is as easy to train as a normal
full-precision model. We highlight our trained quantization method that can learn
both ternary values and ternary assignment. During inference, only ternary
values (2-bit weights) and scaling factors are needed, therefore our models are
nearly 16x smaller than full-precision models. Our ternary models can also be
viewed as sparse binary weight networks, which can potentially be accelerated
with custom circuit. Experiments on CIFAR-10 show that the ternary models
obtained by trained quantization method outperform full-precision models of
ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model
outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and
outperforms previous ternary models by 3%. | http://arxiv.org/pdf/1612.01064 | Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally | cs.LG | Accepted for Poster Presentation on ICLR 2017 | null | cs.LG | 20161204 | 20170223 | [
{
"id": "1502.03167"
},
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1510.03009"
},
{
"id": "1609.07061"
},
{
"id": "1603.05027"
},
{
"id": "1603.05279"
},
{
"id": "1512.02595"
},
{
"id": "1512.03385"
}
] |
1612.01064 | 29 |
Figure 6: Visualization of kernels from Ternary AlexNet trained from ImageNet.
# 7 CONCLUSION
We introduce a novel neural network quantization method that compresses network weights to ternary values. We introduce two trained scaling coefficients W^l_p and W^l_n for each layer and train these coefficients using back-propagation. During training, the gradients are back-propagated both to the latent full-resolution weights and to the scaling coefficients. We use layer-wise thresholds that are proportional to the maximum absolute values to quantize the weights. When deploying the ternary network, only the ternary weights and scaling coefficients are needed, which reduces the parameter size by at least 16×. Experiments show that our model reaches or even surpasses the accuracy of full-precision models on both the CIFAR-10 and ImageNet datasets. On ImageNet we exceed the accuracy of prior ternary networks (TWN) by 3%.
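A minimal sketch of the quantization step just described, assuming a single layer; the threshold hyperparameter name t and its default value are illustrative, not the paper's exact settings:

```python
import numpy as np

def ttq_quantize(w_latent, wp, wn, t=0.05):
    # Layer-wise threshold proportional to the maximum absolute latent weight.
    delta = t * np.abs(w_latent).max()
    w_ternary = np.zeros_like(w_latent)     # weights inside the band become 0
    w_ternary[w_latent > delta] = wp        # trained positive scaling coefficient
    w_ternary[w_latent < -delta] = -wn      # trained negative scaling coefficient
    return w_ternary
```

At inference time, only the 2-bit ternary assignment and the two scalars per layer would need to be stored.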
# REFERENCES
Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org. | 1612.01064#29 | Trained Ternary Quantization | Deep neural networks are widely used in machine learning applications.
However, large neural network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we
propose Trained Ternary Quantization (TTQ), a method that can reduce the
precision of weights in neural networks to ternary values. This method has very
little accuracy degradation and can even improve the accuracy of some models
(32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. Our AlexNet
model is trained from scratch, which means it is as easy to train as a normal
full-precision model. We highlight our trained quantization method that can learn
both ternary values and ternary assignment. During inference, only ternary
values (2-bit weights) and scaling factors are needed, therefore our models are
nearly 16x smaller than full-precision models. Our ternary models can also be
viewed as sparse binary weight networks, which can potentially be accelerated
with custom circuit. Experiments on CIFAR-10 show that the ternary models
obtained by trained quantization method outperform full-precision models of
ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model
outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and
outperforms previous ternary models by 3%. | http://arxiv.org/pdf/1612.01064 | Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally | cs.LG | Accepted for Poster Presentation on ICLR 2017 | null | cs.LG | 20161204 | 20170223 | [
{
"id": "1502.03167"
},
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1510.03009"
},
{
"id": "1609.07061"
},
{
"id": "1603.05027"
},
{
"id": "1603.05279"
},
{
"id": "1512.02595"
},
{
"id": "1512.03385"
}
] |
1612.01064 | 30 | Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv preprint arXiv:1512.02595, 2015.
Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123-3131, 2015.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. | 1612.01064#30 | Trained Ternary Quantization | Deep neural networks are widely used in machine learning applications.
However, large neural network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we
propose Trained Ternary Quantization (TTQ), a method that can reduce the
precision of weights in neural networks to ternary values. This method has very
little accuracy degradation and can even improve the accuracy of some models
(32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. Our AlexNet
model is trained from scratch, which means it is as easy to train as a normal
full-precision model. We highlight our trained quantization method that can learn
both ternary values and ternary assignment. During inference, only ternary
values (2-bit weights) and scaling factors are needed, therefore our models are
nearly 16x smaller than full-precision models. Our ternary models can also be
viewed as sparse binary weight networks, which can potentially be accelerated
with custom circuit. Experiments on CIFAR-10 show that the ternary models
obtained by trained quantization method outperform full-precision models of
ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model
outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and
outperforms previous ternary models by 3%. | http://arxiv.org/pdf/1612.01064 | Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally | cs.LG | Accepted for Poster Presentation on ICLR 2017 | null | cs.LG | 20161204 | 20170223 | [
{
"id": "1502.03167"
},
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1510.03009"
},
{
"id": "1609.07061"
},
{
"id": "1603.05027"
},
{
"id": "1603.05279"
},
{
"id": "1512.02595"
},
{
"id": "1512.03385"
}
] |
1612.01064 | 31 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. | 1612.01064#31 | Trained Ternary Quantization | Deep neural networks are widely used in machine learning applications.
However, large neural network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we
propose Trained Ternary Quantization (TTQ), a method that can reduce the
precision of weights in neural networks to ternary values. This method has very
little accuracy degradation and can even improve the accuracy of some models
(32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. Our AlexNet
model is trained from scratch, which means it is as easy to train as a normal
full-precision model. We highlight our trained quantization method that can learn
both ternary values and ternary assignment. During inference, only ternary
values (2-bit weights) and scaling factors are needed, therefore our models are
nearly 16x smaller than full-precision models. Our ternary models can also be
viewed as sparse binary weight networks, which can potentially be accelerated
with custom circuit. Experiments on CIFAR-10 show that the ternary models
obtained by trained quantization method outperform full-precision models of
ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model
outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and
outperforms previous ternary models by 3%. | http://arxiv.org/pdf/1612.01064 | Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally | cs.LG | Accepted for Poster Presentation on ICLR 2017 | null | cs.LG | 20161204 | 20170223 | [
{
"id": "1502.03167"
},
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1510.03009"
},
{
"id": "1609.07061"
},
{
"id": "1603.05027"
},
{
"id": "1603.05279"
},
{
"id": "1512.02595"
},
{
"id": "1512.03385"
}
] |
1612.01064 | 32 | Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.
Fengfu Li and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016. | 1612.01064#32 | Trained Ternary Quantization | Deep neural networks are widely used in machine learning applications.
However, large neural network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we
propose Trained Ternary Quantization (TTQ), a method that can reduce the
precision of weights in neural networks to ternary values. This method has very
little accuracy degradation and can even improve the accuracy of some models
(32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. Our AlexNet
model is trained from scratch, which means it is as easy to train as a normal
full-precision model. We highlight our trained quantization method that can learn
both ternary values and ternary assignment. During inference, only ternary
values (2-bit weights) and scaling factors are needed, therefore our models are
nearly 16x smaller than full-precision models. Our ternary models can also be
viewed as sparse binary weight networks, which can potentially be accelerated
with custom circuit. Experiments on CIFAR-10 show that the ternary models
obtained by trained quantization method outperform full-precision models of
ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model
outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and
outperforms previous ternary models by 3%. | http://arxiv.org/pdf/1612.01064 | Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally | cs.LG | Accepted for Poster Presentation on ICLR 2017 | null | cs.LG | 20161204 | 20170223 | [
{
"id": "1502.03167"
},
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1510.03009"
},
{
"id": "1609.07061"
},
{
"id": "1603.05027"
},
{
"id": "1603.05279"
},
{
"id": "1512.02595"
},
{
"id": "1512.03385"
}
] |
1612.01064 | 33 | Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
| 1612.01064#33 | Trained Ternary Quantization | Deep neural networks are widely used in machine learning applications.
However, large neural network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we
propose Trained Ternary Quantization (TTQ), a method that can reduce the
precision of weights in neural networks to ternary values. This method has very
little accuracy degradation and can even improve the accuracy of some models
(32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. Our AlexNet
model is trained from scratch, which means it is as easy to train as a normal
full-precision model. We highlight our trained quantization method that can learn
both ternary values and ternary assignment. During inference, only ternary
values (2-bit weights) and scaling factors are needed, therefore our models are
nearly 16x smaller than full-precision models. Our ternary models can also be
viewed as sparse binary weight networks, which can potentially be accelerated
with custom circuit. Experiments on CIFAR-10 show that the ternary models
obtained by trained quantization method outperform full-precision models of
ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model
outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and
outperforms previous ternary models by 3%. | http://arxiv.org/pdf/1612.01064 | Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally | cs.LG | Accepted for Poster Presentation on ICLR 2017 | null | cs.LG | 20161204 | 20170223 | [
{
"id": "1502.03167"
},
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1510.03009"
},
{
"id": "1609.07061"
},
{
"id": "1603.05027"
},
{
"id": "1603.05279"
},
{
"id": "1512.02595"
},
{
"id": "1512.03385"
}
] |
1611.10012 | 1 | # Abstract
The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed/memory/accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [31], R-FCN [6] and SSD [26] systems, which we view as "meta-architectures" and trace out the speed/accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.
# 1. Introduction | 1611.10012#1 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 2 | # 1. Introduction
A lot of progress has been made in recent years on object detection due to the use of convolutional neural networks (CNNs). Modern object detectors based on these networks, such as Faster R-CNN [31], R-FCN [6], Multibox [40], SSD [26] and YOLO [29], are now good enough to be deployed in consumer products (e.g., Google Photos, Pinterest Visual Search) and some have been shown to be fast enough to be run on mobile devices.
However, it can be difficult for practitioners to decide what architecture is best suited to their application. Standard accuracy metrics, such as mean average precision (mAP), do not tell the entire story, since for real deployments of computer vision systems, running time and memory usage are also critical. For example, mobile devices often require a small memory footprint, and self driving
cars require real time performance. Server-side production systems, like those used in Google, Facebook or Snapchat, have more leeway to optimize for accuracy, but are still subject to throughput constraints. While the methods that win competitions, such as the COCO challenge [25], are optimized for accuracy, they often rely on model ensembling and multicrop methods which are too slow for practical usage. | 1611.10012#2 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 3 | Unfortunately, only a small subset of papers (e.g., R-FCN [6], SSD [26], YOLO [29]) discuss running time in any detail. Furthermore, these papers typically only state that they achieve some frame-rate, but do not give a full picture of the speed/accuracy trade-off, which depends on many other factors, such as which feature extractor is used, input image sizes, etc.
In this paper, we seek to explore the speed/accuracy trade-off of modern detection systems in an exhaustive and fair way. While this has been studied for full image classification (e.g., [3]), detection models tend to be significantly more complex. We primarily investigate single-model/single-pass detectors, by which we mean models that do not use ensembling, multi-crop methods, or other "tricks" such as horizontal flipping. In other words, we only pass a single image through a single network. For simplicity (and because it is more important for users of this technology), we focus only on test-time performance and not on how long these models take to train. | 1611.10012#3 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 4 | Though it is impractical to compare every recently proposed detection system, we are fortunate that many of the leading state of the art approaches have converged on a common methodology (at least at a high level). This has allowed us to implement and compare a large number of detection systems in a unified manner. In particular, we have created implementations of the Faster R-CNN, R-FCN and SSD meta-architectures, which at a high level consist of a single convolutional network, trained with a mixed regression and classification objective, and use sliding window style predictions.
To summarize, our main contributions are as follows:
⢠We provide a concise survey of modern convolutional
1
detection systems, and describe how the leading ones follow very similar designs. | 1611.10012#4 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 5 | To summarize, our main contributions are as follows:
⢠We provide a concise survey of modern convolutional
1
detection systems, and describe how the leading ones follow very similar designs.
• We describe our flexible and unified implementation of three meta-architectures (Faster R-CNN, R-FCN and SSD) in Tensorflow, which we use to do extensive experiments that trace the accuracy/speed trade-off curve for different detection systems, varying meta-architecture, feature extractor, image resolution, etc.
• Our findings show that using fewer proposals for Faster R-CNN can speed it up significantly without a big loss in accuracy, making it competitive with its faster cousins, SSD and R-FCN. We show that SSD's performance is less sensitive to the quality of the feature extractor than Faster R-CNN and R-FCN. And we identify sweet spots on the accuracy/speed trade-off curve where gains in accuracy are only possible by sacrificing speed (within the family of detectors presented here).
⢠Several of the meta-architecture and feature-extractor combinations that we report have never appeared be- fore in literature. We discuss how we used some of these novel combinations to train the winning entry of the 2016 COCO object detection challenge. | 1611.10012#5 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 6 | # 2. Meta-architectures
Neural nets have become the leading method for high quality object detection in recent years. In this section we survey some of the highlights of this literature. The R-CNN paper by Girshick et al. [11] was among the first modern incarnations of convolutional network based detection. Inspired by recent successes on image classification [20], the R-CNN method took the straightforward approach of cropping externally computed box proposals out of an input image and running a neural net classifier on these crops. This approach can be expensive however because many crops are necessary, leading to significant duplicated computation from overlapping crops. Fast R-CNN [10] alleviated this problem by pushing the entire image once through a feature extractor then cropping from an intermediate layer so that crops share the computation load of feature extraction. | 1611.10012#6 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 7 | While both R-CNN and Fast R-CNN relied on an external proposal generator, recent works have shown that it is possible to generate box proposals using neural networks as well [41, 40, 8, 31]. In these works, it is typical to have a collection of boxes overlaid on the image at different spatial locations, scales and aspect ratios that act as "anchors" (sometimes called "priors" or "default boxes"). A model is then trained to make two predictions for each anchor: (1) a discrete class prediction for each anchor, and (2) a continuous prediction of an offset by which the anchor needs to be shifted to fit the groundtruth bounding box.
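A minimal sketch of the argmax-style matching that typically backs these per-anchor targets; the threshold values here are illustrative, not any specific paper's settings:

```python
import numpy as np

def label_anchors(iou, pos_thresh=0.7, neg_thresh=0.3):
    # iou: (num_anchors, num_groundtruth) overlap matrix.
    best_gt = iou.argmax(axis=1)         # best-matching groundtruth per anchor
    best_iou = iou.max(axis=1)
    labels = np.full(iou.shape[0], -1)   # -1: ignored during training
    labels[best_iou >= pos_thresh] = 1   # positive anchors get class + offset targets
    labels[best_iou < neg_thresh] = 0    # negative anchors get class label 0
    return labels, best_gt
```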
Papers that follow this anchors methodology then
2 | 1611.10012#7 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 8 | Papers that follow this anchors methodology then
minimize a combined classification and regression loss that we now describe. For each anchor a, we first find the best matching groundtruth box b (if one exists). If such a match can be found, we call a a "positive anchor", and assign it (1) a class label y_a ∈ {1, ..., K} and (2) a vector encoding of box b with respect to anchor a (called the box encoding φ(b_a; a)). If no match is found, we call a a "negative anchor" and we set the class label to be y_a = 0. If for the anchor a we predict box encoding f_loc(I; a, θ) and corresponding class f_cls(I; a, θ), where I is the image and θ the model parameters, then the loss for a is measured as a weighted sum of a location-based loss and a classification loss:
$\mathcal{L}(a, \mathcal{I}; \theta) = \alpha \cdot \mathbb{1}[a\ \mathrm{is\ positive}] \cdot \ell_{\mathrm{loc}}(\phi(b_a; a) - f_{\mathrm{loc}}(\mathcal{I}; a, \theta)) + \beta \cdot \ell_{\mathrm{cls}}(y_a, f_{\mathrm{cls}}(\mathcal{I}; a, \theta)) \quad (1)$ | 1611.10012#8 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 9 | where α, β are weights balancing localization and classification losses. To train the model, Equation 1 is averaged over anchors and minimized with respect to parameters θ.
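A minimal sketch of the per-anchor loss in Equation 1, using smooth-L1 for the location term and softmax cross-entropy for the classification term (both choices appear in Table 1 later in the paper; the function and argument names here are ours):

```python
import numpy as np

def per_anchor_loss(is_positive, enc_target, enc_pred, y_true, cls_logits,
                    alpha=1.0, beta=1.0):
    diff = np.abs(enc_target - enc_pred)
    loc = np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5).sum()  # smooth L1
    log_p = cls_logits - np.log(np.exp(cls_logits).sum())          # log-softmax
    cls = -log_p[y_true]                                           # cross-entropy
    return alpha * float(is_positive) * loc + beta * cls
```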
The choice of anchors has significant implications both for accuracy and computation. In the (first) Multibox paper [8], these anchors (called "box priors" by the authors) were generated by clustering groundtruth boxes in the dataset. In more recent works, anchors are generated by tiling a collection of boxes at different scales and aspect ratios regularly across the image. The advantage of having a regular grid of anchors is that predictions for these boxes can be written as tiled predictors on the image with shared parameters (i.e., convolutions) and are reminiscent of traditional sliding window methods, e.g. [44]. The Faster R-CNN [31] paper and the (second) Multibox paper [40] (which called these tiled anchors "convolutional priors") were the first papers to take this new approach.
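A minimal sketch of tiling such a regular anchor grid; the stride, scales and aspect ratios are illustrative defaults rather than any paper's exact configuration:

```python
import itertools
import numpy as np

def tile_anchors(grid_h, grid_w, stride=16,
                 scales=(128, 256, 512), aspect_ratios=(0.5, 1.0, 2.0)):
    anchors = []
    for i, j in itertools.product(range(grid_h), range(grid_w)):
        yc, xc = (i + 0.5) * stride, (j + 0.5) * stride   # cell center in pixels
        for s, a in itertools.product(scales, aspect_ratios):
            w, h = s * np.sqrt(a), s / np.sqrt(a)         # constant area per scale
            anchors.append([xc, yc, w, h])
    return np.array(anchors)  # (grid_h * grid_w * 9, 4) anchors as [xc, yc, w, h]
```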
# 2.1. Meta-architectures | 1611.10012#9 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 10 | # 2.1. Meta-architectures
In our paper we focus primarily on three recent (meta)-architectures: SSD (Single Shot Multibox Detector [26]), Faster R-CNN [31] and R-FCN (Region-based Fully Convolutional Networks [6]). While these papers were originally presented with a particular feature extractor (e.g., VGG, Resnet, etc.), we now review these three methods, decoupling the choice of meta-architecture from feature extractor so that conceptually, any feature extractor can be used with SSD, Faster R-CNN or R-FCN.
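A minimal sketch of that decoupling; the names below are an illustrative subset, not the paper's full experimental grid:

```python
META_ARCHITECTURES = ("ssd", "faster_rcnn", "rfcn")
FEATURE_EXTRACTORS = ("vgg16", "resnet101", "inception_v3")

def build_detector(meta, extractor):
    # Conceptually, any (meta-architecture, feature extractor) pairing is valid;
    # each pair lands at a different point on the speed/accuracy curve.
    assert meta in META_ARCHITECTURES and extractor in FEATURE_EXTRACTORS
    return {"meta_architecture": meta, "feature_extractor": extractor}

print(build_detector("rfcn", "resnet101"))
```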
# 2.1.1 Single Shot Detector (SSD).
Though the SSD paper was published only recently (Liu et al., [26]), we use the term SSD to refer broadly to architectures that use a single feed-forward convolutional network to directly predict classes and anchor offsets without requiring a second stage per-proposal classification operation (Figure 1a). Under this definition, the SSD meta-architecture has been explored in a number of precursors to [26]. Both Multibox and the Region Proposal Network | 1611.10012#10 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 11 | Paper                    | Meta-architecture | Feature Extractor           | Matching   | Box Encoding φ(b_a, a)       | Location Loss
Szegedy et al. [40]      | SSD               | InceptionV3                 | Bipartite  | [x0, y0, x1, y1]             | L2
Redmon et al. [29]       | SSD               | Custom (GoogLeNet inspired) | Box Center | [xc, yc, √w, √h]             | L2
Ren et al. [31]          | Faster R-CNN      | VGG                         | Argmax     | [xc/wa, yc/ha, log w, log h] | SmoothL1
He et al. [13]           | Faster R-CNN      | ResNet-101                  | Argmax     | [xc/wa, yc/ha, log w, log h] | SmoothL1
Liu et al. [26] (v1)     | SSD               | InceptionV3                 | Argmax     | [x0, y0, x1, y1]             | L2
Liu et al. [26] (v2, v3) | SSD               | VGG                         | Argmax     | [xc/wa, yc/ha, log w, log h] | SmoothL1
Dai et al. [6]           | R-FCN             | ResNet-101                  | Argmax     | [xc/wa, yc/ha, log w, log h] | SmoothL1 | 1611.10012#11 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 12 | Table 1: Convolutional detection models that use one of the meta-architectures described in Section 2. Boxes are encoded with respect to a matching anchor a via a function φ (Equation 1), where [x0, y0, x1, y1] are min/max coordinates of a box, xc, yc are its center coordinates, and w, h its width and height. In some cases, wa, ha, the width and height of the matching anchor, are also used. Notes: (1) We include an early arXiv version of [26], which used a different configuration from that published at ECCV 2016; (2) [29] uses a fast feature extractor described as being inspired by GoogLeNet [39], which we do not compare to; (3) YOLO matches a groundtruth box to an anchor if its center falls inside the anchor (we refer to this as BoxCenter).
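A minimal sketch of the [xc/wa, yc/ha, log w, log h] encoding listed for several rows of the table, with boxes given as [x0, y0, x1, y1] (purely illustrative):

```python
import numpy as np

def encode_box(box, anchor):
    x0, y0, x1, y1 = box
    ax0, ay0, ax1, ay1 = anchor
    wa, ha = ax1 - ax0, ay1 - ay0              # matching anchor's width/height
    xc, yc = (x0 + x1) / 2.0, (y0 + y1) / 2.0  # box center
    w, h = x1 - x0, y1 - y0                    # box width/height
    return np.array([xc / wa, yc / ha, np.log(w), np.log(h)])
```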
(a) SSD. (b) Faster R-CNN. (c) R-FCN.
Figure 1: High level diagrams of the detection meta-architectures compared in this paper. | 1611.10012#12 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 13 |
Figure 1: High level diagrams of the detection meta-architectures compared in this paper.
(RPN) stage of Faster R-CNN [40, 31] use this approach to predict class-agnostic box proposals. [33, 29, 30, 9] use SSD-like architectures to predict final (1 of K) class labels. And Poirson et al. [28] extended this idea to predict boxes, classes and pose.
ticularly influential, and has led to a number of follow-up works [2, 35, 34, 46, 13, 5, 19, 45, 24, 47] (including SSD and R-FCN). Notably, half of the submissions to the COCO object detection server as of November 2016 are reported to be based on the Faster R-CNN system in some way.
# 2.1.2 Faster R-CNN. | 1611.10012#13 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 14 | # 2.1.2 Faster R-CNN.
In the Faster R-CNN setting, detection happens in two stages (Figure 1b). In the first stage, called the region proposal network (RPN), images are processed by a feature extractor (e.g., VGG-16), and features at some selected intermediate level (e.g., "conv5") are used to predict class-agnostic box proposals. The loss function for this first stage takes the form of Equation 1 using a grid of anchors tiled in space, scale and aspect ratio. | 1611.10012#14 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
In the second stage, these (typically 300) box proposals are used to crop features from the same intermediate feature map, which are subsequently fed to the remainder of the feature extractor (e.g., "fc6" followed by "fc7") in order to predict a class and class-specific box refinement for each proposal. The loss function for this second stage box classifier also takes the form of Equation 1, using the proposals generated by the RPN as anchors. Notably, one does not crop proposals directly from the image and re-run the crops through the feature extractor, which would duplicate computation. However, there is part of the computation that must be run once per region, and thus the running time depends on the number of regions proposed by the RPN.
# 2.2. R-FCN
While Faster R-CNN is an order of magnitude faster than Fast R-CNN, the fact that the region-specific component must be applied several hundred times per image led Dai et al. [6] to propose the R-FCN (Region-based Fully Convolutional Networks) method, which is like Faster R-CNN, but instead of cropping features from the same layer where region proposals are predicted, crops are taken from the last layer of features prior to prediction (Figure 1c). This approach of pushing cropping to the last layer minimizes the amount of per-region computation that must be done. Dai et al. argue that the object detection task needs localization representations that respect translation variance and thus propose a position-sensitive cropping mechanism that is used instead of the more standard ROI pooling operations used in [10, 31] and the differentiable crop mechanism of [5]. They show that the R-FCN model (using Resnet 101) could achieve comparable accuracy to Faster R-CNN, often at faster running times. Recently, the R-FCN model was also adapted to do instance segmentation in the recent TA-FCN model [22], which won the 2016 COCO instance segmentation challenge.
Since appearing in 2015, Faster R-CNN has been particularly influential.
# 3. Experimental setup
The introduction of standard benchmarks such as Imagenet [32] and COCO [25] has made it easier in recent years to compare detection methods with respect to accuracy. However, when it comes to speed and memory, apples-to-apples comparisons have been harder to come by. Prior works have relied on different deep learning frameworks (e.g., DistBelief [7], Caffe [18], Torch [4]) and different hardware. Some papers have optimized for accuracy; others for speed. And finally, in some cases, metrics are reported using slightly different training sets (e.g., COCO training set vs. combined training+validation sets).
In order to better perform apples-to-apples comparisons, we have created a detection platform in Tensorflow [1] and have recreated training pipelines for the SSD, Faster R-CNN and R-FCN meta-architectures on this platform. Having a unified framework has allowed us to easily swap feature extractor architectures and loss functions, and having it in Tensorflow allows for easy portability to diverse platforms for deployment. In the following we discuss ways to configure model architecture, loss function and input on our platform: knobs that can be used to trade speed and accuracy.
# 3.1. Architectural configuration
# 3.1.1 Feature extractors.
In all of the meta-architectures, we first apply a convolutional feature extractor to the input image to obtain high-level features. The choice of feature extractor is crucial, as the number of parameters and types of layers directly affect memory, speed, and performance of the detector. We have selected six representative feature extractors to compare in this paper; with the exception of MobileNet [14], all have open-source Tensorflow implementations and have had sizeable influence on the vision community.
In more detail, we consider the following six feature extractors. We use VGG-16 [37] and Resnet-101 [13], both of which have won many competitions such as ILSVRC and COCO 2015 (classification, detection and segmentation). We also use Inception v2 [16], which set the state of the art in the ILSVRC 2014 classification and detection challenges, as well as its successor Inception v3 [42]. Both of the Inception networks employed "Inception units" which made it possible to increase the depth and width of a network without increasing its computational budget. Recently, Szegedy et al. [38] proposed Inception Resnet (v2), which combines the optimization benefits conferred by residual connections with the computational efficiency of Inception units. Finally, we compare against the new MobileNet network [14], which has been shown to achieve VGG-16 level accuracy on Imagenet with only 1/30 of the computational cost and model size. MobileNet is designed for efficient inference in various mobile vision applications.
Its building blocks are depthwise separable convolutions, which factorize a standard convolution into a depthwise convolution and a 1 × 1 convolution, effectively reducing both computational cost and number of parameters.
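A back-of-the-envelope sketch of that saving, counting multiply-accumulates for a k x k convolution on an H x W feature map (shapes here are illustrative):

```python
def conv_cost(h, w, c_in, c_out, k):
    """Multiply-accumulates for a standard conv vs. its separable factorization."""
    standard = h * w * c_in * c_out * k * k
    depthwise = h * w * c_in * k * k      # one k x k filter per input channel
    pointwise = h * w * c_in * c_out      # 1x1 conv mixes channels
    return standard, depthwise + pointwise

std, sep = conv_cost(h=56, w=56, c_in=128, c_out=128, k=3)
print(f"standard: {std:,}  separable: {sep:,}  saving: {std / sep:.1f}x")
# The saving approaches k*k (9x for 3x3 kernels) as c_out grows.
```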
For each feature extractor, there are choices to be made in order to use it within a meta-architecture. For both Faster R-CNN and R-FCN, one must choose which layer to use for predicting region proposals. In our experiments, we use the choices laid out in the original papers when possible. For example, we use the "conv5" layer from VGG-16 [31] and the last layer of the conv4_x layers in Resnet-101 [13]. For other feature extractors, we have made analogous choices. See supplementary materials for more details.
Liu et al. [26] showed that in the SSD setting, using multiple feature maps to make location and confidence predictions at multiple scales is critical for good performance. For VGG feature extractors, they used conv4_3, fc7 (converted to a convolution layer), as well as a sequence of added layers. In our experiments, we follow their methodology closely, always selecting the topmost convolutional feature map and a higher-resolution feature map at a lower level, then adding a sequence of convolutional layers with spatial resolution decaying by a factor of 2, with each additional layer used for prediction. However, unlike [26], we use batch normalization in all additional layers.
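As a rough sketch of this construction (illustrative layer names and depths, not our exact configuration), extra prediction layers can be appended to a hypothetical topmost feature map, each halving spatial resolution and followed by batch normalization:

```python
import tensorflow as tf

def add_ssd_feature_layers(topmost, num_extra=4, depth=256):
    """Append stride-2 conv layers; every returned map feeds its own predictors."""
    maps, x = [topmost], topmost
    for i in range(num_extra):
        x = tf.keras.layers.Conv2D(depth, 3, strides=2, padding="same",
                                   use_bias=False, name=f"extra_{i}")(x)
        x = tf.keras.layers.BatchNormalization(name=f"extra_{i}_bn")(x)
        x = tf.keras.layers.ReLU()(x)
        maps.append(x)
    return maps

topmost = tf.keras.Input(shape=(19, 19, 1024))  # hypothetical topmost feature map
print([m.shape[1] for m in add_ssd_feature_layers(topmost)])  # [19, 10, 5, 3, 2]
```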
For comparison, feature extractors used in previous works are shown in Table 1. In this work, we evaluate all combinations of meta-architectures and feature extractors, most of which are novel. Notably, Inception networks have never been used in Faster R-CNN frameworks and until recently were not open sourced [36]. Inception Resnet (v2) and MobileNet have not appeared in the detection literature to date.
# 3.1.2 Number of proposals.
For Faster R-CNN and R-FCN, we can also choose the number of region proposals to be sent to the box classifier at test time. Typically, this number is 300 in both settings, but an easy way to save computation is to send fewer boxes, potentially at the risk of reducing recall. In our experiments, we vary this number of proposals between 10 and 300 in order to explore this trade-off.
# 3.1.3 Output stride settings for Resnet and Inception Resnet.
Our implementation of Resnet-101 is slightly modified from the original to have an effective output stride of 16 instead of 32; we achieve this by modifying the conv5_1 layer to have stride 1 instead of 2 (and compensating for the reduced stride by using atrous convolutions in further layers), as in [6]. For Faster R-CNN and R-FCN, in addition to the
default stride of 16, we also experiment with a (more expensive) stride 8 Resnet-101 in which the conv4_1 block is additionally modified to have stride 1. Likewise, we experiment with stride 16 and stride 8 versions of the Inception Resnet network. We find that using stride 8 instead of 16 improves the mAP by a relative 5% (i.e., (mAP_stride8 - mAP_stride16) / mAP_stride16 = 0.05), but increases running time by 63%.
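The general recipe, sketched below with illustrative shapes (not the actual Resnet-101 block structure): replace a stride-2 convolution with a stride-1 convolution whose dilation rate is 2, so the output stays at stride 8 while the receptive field is preserved.

```python
import tensorflow as tf

x = tf.keras.Input(shape=(75, 75, 512))  # a stride-8 feature map
# Original block: stride 2 halves resolution (output stride becomes 16).
y16 = tf.keras.layers.Conv2D(1024, 3, strides=2, padding="same")(x)
# Atrous variant: stride 1 with dilation 2 keeps resolution (output stride 8).
y8 = tf.keras.layers.Conv2D(1024, 3, strides=1, dilation_rate=2,
                            padding="same")(x)
print(y16.shape, y8.shape)  # (None, 38, 38, 1024) (None, 75, 75, 1024)
```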
# 3.2. Loss function configuration
Beyond selecting a feature extractor, there are choices in configuring the loss function (Equation 1) which can impact training stability and final performance. Here we describe the choices that we have made in our experiments, and Table 1 again compares how similar loss functions are configured in other works.
# 3.2.1 Matching.
Determining classification and regression targets for each anchor requires matching anchors to groundtruth instances. Common approaches include greedy bipartite matching (e.g., based on Jaccard overlap) or many-to-one matching strategies in which bipartite-ness is not required, but matchings are discarded if the Jaccard overlap between an anchor and groundtruth is too low. We refer to these strategies as Bipartite or Argmax, respectively. In our experiments we use Argmax matching throughout, with thresholds set as suggested in the original paper for each meta-architecture. After matching, there is typically a sampling procedure designed to bring the number of positive anchors and negative anchors to some desired ratio. In our experiments, we also fix these ratios to be those recommended by the paper for each meta-architecture.
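A minimal numpy sketch of Argmax matching; the 0.7/0.3 thresholds are the RPN defaults from [31], and each meta-architecture substitutes its own values:

```python
import numpy as np

def argmax_match(iou, matched_thresh=0.7, unmatched_thresh=0.3):
    """Many-to-one matching: each anchor takes its highest-IoU groundtruth.

    iou: [num_anchors, num_gt] Jaccard overlaps. Returns a per-anchor gt index,
    -1 for negatives and -2 for anchors ignored as in-between.
    """
    best_gt = iou.argmax(axis=1)
    best_iou = iou.max(axis=1)
    match = best_gt.copy()
    match[best_iou < matched_thresh] = -2    # in-between: discarded from the loss
    match[best_iou < unmatched_thresh] = -1  # clear negatives
    return match

iou = np.array([[0.8, 0.1], [0.4, 0.5], [0.05, 0.2]])
print(argmax_match(iou))  # [ 0 -2 -1]
```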
# 3.2.2 Box encoding.
To encode a groundtruth box with respect to its matching anchor, we use the box encoding function φ(b_a; a) = [10 · x_c/w_a, 10 · y_c/h_a, 5 · log w, 5 · log h] (also used by [11, 10, 31, 26]). Note that the scalar multipliers 10 and 5 are typically used in all of these prior works, even if not explicitly mentioned.
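Here we read x_c and y_c as the box center offsets relative to the matched anchor (normalized by the anchor width w_a and height h_a), and w, h as the width and height ratios; a minimal numpy sketch under that reading:

```python
import numpy as np

def encode_boxes(boxes, anchors):
    """phi(b_a; a): encode groundtruth boxes w.r.t. their matched anchors.

    Both arguments are [N, 4] arrays of (x_center, y_center, width, height);
    the scalar multipliers 10 and 5 follow [11, 10, 31, 26].
    """
    xa, ya, wa, ha = anchors.T
    x, y, w, h = boxes.T
    return np.stack([10.0 * (x - xa) / wa,
                     10.0 * (y - ya) / ha,
                     5.0 * np.log(w / wa),
                     5.0 * np.log(h / ha)], axis=1)

anchors = np.array([[50.0, 50.0, 100.0, 100.0]])
boxes = np.array([[55.0, 45.0, 120.0, 80.0]])
print(encode_boxes(boxes, anchors))  # approx [[0.5, -0.5, 0.91, -1.12]]
```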
# 3.2.3 Location loss (ℓ_loc).
Following [10, 31, 26], we use the Smooth L1 (or Huber [15]) loss function in all experiments.
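For reference, a minimal sketch of the Smooth L1 (Huber) loss, quadratic near zero and linear in the tails; delta = 1 here, though the exact parameterization varies across implementations:

```python
import numpy as np

def smooth_l1(x, delta=1.0):
    """Elementwise Smooth L1 / Huber loss on regression residuals x."""
    absx = np.abs(x)
    return np.where(absx < delta,
                    0.5 * absx ** 2 / delta,   # quadratic region
                    absx - 0.5 * delta)        # linear region, continuous at delta

print(smooth_l1(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
# [1.5   0.125 0.    0.125 1.5  ]
```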
# 3.3. Input size configuration.
In Faster R-CNN and R-FCN, models are trained on images scaled to M pixels on the shorter edge, whereas in SSD, images are always resized to a fixed shape M × M. We explore evaluating each model on downscaled images as a way to trade accuracy for speed. In particular, we have trained high and low-resolution versions of each model. In the "high-resolution" settings, we set M = 600, and in the "low-resolution" setting, we set M = 300. In both cases, this means that the SSD method processes fewer pixels on average than a Faster R-CNN or R-FCN model with all other variables held constant.
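A minimal sketch of the two resizing conventions, written with TF2-style image ops for illustration rather than taken from our training pipeline:

```python
import tensorflow as tf

def resize_keep_aspect(image, min_dim=600):
    """Faster R-CNN / R-FCN convention: shorter edge scaled to min_dim pixels."""
    shape = tf.cast(tf.shape(image)[:2], tf.float32)
    scale = min_dim / tf.reduce_min(shape)
    return tf.image.resize(image, tf.cast(tf.round(shape * scale), tf.int32))

def resize_fixed(image, dim=300):
    """SSD convention: fixed dim x dim shape, aspect ratio not preserved."""
    return tf.image.resize(image, [dim, dim])

img = tf.zeros([480, 640, 3])
print(resize_keep_aspect(img).shape, resize_fixed(img).shape)
# (600, 800, 3) (300, 300, 3)
```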
# 3.4. Training and hyperparameter tuning
We jointly train all models end-to-end using asynchronous gradient updates on a distributed cluster [7]. For Faster R-CNN and R-FCN, we use SGD with momentum with batch sizes of 1 (due to these models being trained using different image sizes), and for SSD, we use RMSProp [43] with batch sizes of 32 (in a few exceptions we reduced the batch size for memory reasons). Finally, we manually tune learning rate schedules individually for each feature extractor. For the model configurations that match works in the literature ([31, 6, 13, 26]), we have reproduced or surpassed the reported mAP results.²
Note that for Faster R-CNN and R-FCN, this end-to-end approach is slightly different from the 4-stage training procedure that is typically used. Additionally, instead of using the ROI Pooling layer and Position-sensitive ROI Pooling layers used by [31, 6], we use Tensorflow's "crop and resize" operation, which uses bilinear interpolation to resample part of an image onto a fixed-size grid. This is similar to the differentiable cropping mechanism of [5], the attention model of [12], as well as the Spatial Transformer Network [17]. However, we disable backpropagation with respect to bounding box coordinates, as we have found this to be unstable during training.
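A minimal sketch of extracting fixed-size per-proposal features with this operation (illustrative shapes; mirroring the above, gradients through the box coordinates are stopped):

```python
import tensorflow as tf

features = tf.random.normal([1, 38, 50, 512])    # [batch, H, W, C] feature map
boxes = tf.constant([[0.1, 0.2, 0.5, 0.6],       # normalized [y1, x1, y2, x2]
                     [0.0, 0.0, 1.0, 1.0]])
boxes = tf.stop_gradient(boxes)                  # no backprop into coordinates
box_indices = tf.constant([0, 0])                # batch element for each box
crops = tf.image.crop_and_resize(features, boxes, box_indices, crop_size=[14, 14])
print(crops.shape)  # (2, 14, 14, 512)
```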
Our networks are trained on the COCO dataset, using all training images as well as a subset of validation images, holding out 8000 examples for validation.³ Finally, at test time, we post-process detections with non-max suppression using an IOU threshold of 0.6 and clip all boxes to the image window. To evaluate our final detections, we use the official COCO API [23], which measures mAP averaged over IOU thresholds in [0.5 : 0.05 : 0.95], amongst other metrics.
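A minimal numpy sketch of the greedy non-max suppression applied at test time (box clipping to the image window omitted):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.6):
    """Greedy NMS; boxes are [N, 4] as (x1, y1, x2, y2)."""
    order = scores.argsort()[::-1]               # highest-scoring box first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of the kept box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]          # drop boxes overlapping too much
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # [0, 2]
```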
# 3.5. Benchmarking procedure
To time our models, we use a machine with 32GB RAM, an Intel Xeon E5-1650 v2 processor and an Nvidia GeForce GTX Titan X GPU card. Timings are reported on GPU for a batch size of one. The images used for timing are resized so that the smallest side is at least k and then cropped to k × k, where k is either 300 or 600 based on the model. We average the timings over 500 images.

² In the case of SSD with VGG, we have reproduced the number reported in the ECCV version of the paper, but the most recent version on arXiv uses an improved data augmentation scheme to obtain somewhat higher numbers, which we have not yet experimented with.

³ We remark that this dataset is similar but slightly smaller than the trainval35k set that has been used in several papers, e.g., [2, 26].
We include postprocessing in our timing (which includes non-max suppression and currently runs only on the CPU). Postprocessing can take up the bulk of the running time for the fastest models, at ~40 ms, and currently caps our maximum framerate at 25 frames per second. Among other things, this means that while our timing results are comparable amongst each other, they may not be directly comparable to other reported speeds in the literature. Other potential differences include hardware, software drivers, framework (Tensorflow in our case), and batch size (e.g., Liu et al. [26] report timings using batch sizes of 8). Finally, we use tfprof [27] to measure the total memory demand of the models during inference; this gives a more platform-independent measure of memory demand. We also average the memory measurements over three images.
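A minimal sketch of such a timing harness; `run_inference` is a placeholder standing in for a detector's forward pass plus postprocessing:

```python
import time
import numpy as np

def benchmark(run_inference, images, warmup=10):
    """Mean and std of per-image latency at batch size one, postprocessing included."""
    for img in images[:warmup]:
        run_inference(img)                 # warm up lazy initialization and caches
    times = []
    for img in images:
        start = time.time()
        run_inference(img)                 # forward pass + NMS postprocessing
        times.append(time.time() - start)
    return float(np.mean(times)), float(np.std(times))

# Placeholder "detector": any callable taking one image.
mean_s, _ = benchmark(lambda img: img.sum(), [np.zeros((600, 600, 3))] * 500)
print(f"{1000 * mean_s:.2f} ms/image")
```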
# 3.6. Model Details
Table 2 summarizes the feature extractors that we use. All models are pretrained on ImageNet-CLS. We give details on how we train the object detectors using these feature extractors below.

| Model | Number of parameters |
|---|---|
| VGG-16 | 14,714,688 |
| MobileNet | 3,191,072 |
| Inception V2 | 10,173,112 |
| ResNet-101 | 42,605,504 |
| Inception V3 | 21,802,784 |
| Inception Resnet V2 | 54,336,736 |

Table 2: Properties of the 6 feature extractors that we use. Top-1 accuracy is the classification accuracy on ImageNet.
# 3.6.1 Faster R-CNN
We follow the implementation of Faster R-CNN [31] closely, but use Tensorflow's "crop and resize" operation instead of standard ROI pooling. Except for VGG, all the feature extractors use batch normalization after convolutional layers. We freeze the batch normalization parameters to be those estimated during ImageNet pretraining. We train Faster R-CNN with asynchronous SGD with momentum of 0.9. The initial learning rates depend on which feature extractor we used, as explained below. We reduce the learning rate by 10x after 900K iterations and another 10x after 1.2M iterations. 9 GPU workers are used during asynchronous training. Each GPU worker takes a single image per iteration; the minibatch size for RPN training is 256, while the minibatch size for box classifier training is 64.
• VGG [37]: We extract features from the "conv5" layer whose stride size is 16 pixels. Similar to [5], we crop and resize feature maps to 14x14, then maxpool to 7x7. The initial learning rate is 5e-4.
• Resnet 101 [13]: We extract features from the last layer of the "conv4" block. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Feature maps are cropped and resized to 14x14, then maxpooled to 7x7. The initial learning rate is 3e-4.
• Inception V2 [16]: We extract features from the "Mixed_4e" layer whose stride size is 16 pixels. Feature maps are cropped and resized to 14x14. The initial learning rate is 2e-4.
• Inception V3 [42]: We extract features from the "Mixed_6e" layer whose stride size is 16 pixels. Feature maps are cropped and resized to 17x17. The initial learning rate is 3e-4.
• Inception Resnet [38]: We extract features from the "Mixed_6a" layer, including its associated residual layers. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Feature maps are cropped and resized to 17x17. The initial learning rate is 1e-3.

• MobileNet [14]: We extract features from the "Conv2d_11" layer whose stride size is 16 pixels. Feature maps are cropped and resized to 14x14. The initial learning rate is 3e-3.
# 3.6.2 R-FCN
We follow the implementation of R-FCN [6] closely, but use Tensorflow's "crop and resize" operation instead of ROI pooling to crop regions from the position-sensitive score maps. All feature extractors use batch normalization after convolutional layers. We freeze the batch normalization parameters to be those estimated during ImageNet pretraining. We train R-FCN with asynchronous SGD with momentum of 0.9. 9 GPU workers are used during asynchronous training. Each GPU worker takes a single image per iteration; the minibatch size for RPN training is 256. As of the time of this submission, we do not have R-FCN results for VGG or Inception V3 feature extractors.
• Resnet 101 [13]: We extract features from the "block3" layer. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 7x7 and resized to 21x21. We use online hard example mining to sample a minibatch of size 128 for training the box classifier. The initial learning rate is 3e-4. It is reduced by 10x after 1M steps and another 10x after 1.2M steps.
• Inception V2 [16]: We extract features from the "Mixed_4e" layer whose stride size is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 3x3 and resized to 12x12. We use online hard example mining to sample a minibatch of size 128 for training the box classifier. The initial learning rate is 2e-4. It is reduced by 10x after 1.8M steps and another 10x after 2M steps.
• Inception Resnet [38]: We extract features from the "Mixed 6a" layer including its associated residual layers. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 7x7 and resized to 21x21. We use all proposals from RPN for box classifier training. The initial learning rate is 7e-4. It is reduced by 10x after 1M steps and another 10x after 1.2M steps.
• MobileNet [14]: We extract features from the "Conv2d 11" layer whose stride size is 16 pixels. Position-sensitive score maps are cropped with spatial bins of size 3x3 and resized to 12x12. We use online hard example mining to sample a minibatch of size 128 for training the box classifier. The initial learning rate is 2e-3. Learning rate is reduced by 10x after 1.6M steps and another 10x after 1.8M steps.
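All four schedules above share the same piecewise-constant form; a minimal sketch, using the Resnet 101 values as defaults (the function name and defaults are ours, not from the released code):

```python
def rfcn_learning_rate(step, base_lr=3e-4, drops=(1_000_000, 1_200_000)):
    """Piecewise-constant schedule: the rate drops by 10x at each boundary.

    The other feature extractors swap in their own base rate and drop points.
    """
    lr = base_lr
    for boundary in drops:
        if step >= boundary:
            lr *= 0.1
    return lr

assert rfcn_learning_rate(500_000) == 3e-4
assert abs(rfcn_learning_rate(1_100_000) - 3e-5) < 1e-12
assert abs(rfcn_learning_rate(1_500_000) - 3e-6) < 1e-12
```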
# 3.6.3 SSD
As described in the main paper, we follow the methodology of [26] closely, generating anchors in the same way and selecting the topmost convolutional feature map and a higher resolution feature map at a lower level, then adding a sequence of convolutional layers with spatial resolution decaying by a factor of 2 with each additional layer used for prediction. The feature map selection for Resnet101 is slightly different, as described below.
Unlike [26], we use batch normalization in all additional layers, and initialize weights with a truncated normal distribution with a standard deviation of σ = 0.03. With the exception of VGG, we also do not perform "layer normalization" (as suggested in [26]) as we found it not to be necessary for the other feature extractors. Finally, we employ distributed training with asynchronous SGD using 11 worker machines. Below we discuss the specifics for each feature extractor that we have considered. As of the time of this submission, we do not have SSD results for the Inception V3 feature extractor and we only have results for high resolution SSD models using the Resnet 101 and Inception V2 feature extractors.
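A minimal sketch of these additional SSD feature layers, assuming toy shapes and the truncated-normal initializer described above (the depths shown match the Inception V2 item below; this is an illustration, not our released implementation):

```python
import tensorflow as tf

def ssd_extra_layers(base_map, depths=(512, 256, 256, 128)):
    """Appends stride-2 conv blocks so spatial resolution halves per block."""
    init = tf.keras.initializers.TruncatedNormal(stddev=0.03)
    maps = []
    x = base_map
    for depth in depths:
        x = tf.keras.layers.Conv2D(depth, 3, strides=2, padding='same',
                                   kernel_initializer=init, use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU()(x)
        maps.append(x)
    return maps

inp = tf.keras.Input(shape=(19, 19, 576))  # toy base feature map
for m in ssd_extra_layers(inp):
    print(m.shape)  # (None, 10, 10, 512), (None, 5, 5, 256), (None, 3, 3, 256), (None, 2, 2, 128)
```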
• VGG [37]: Following the paper, we use conv4_3 and fc7 layers, appending five additional convolutional layers with decaying spatial resolution with depths 512,
256, 256, 256, 256, respectively. We apply L2 normalization to the conv4_3 layer, scaling the feature norm at each location in the feature map to a learnable scale, s, which is initialized to 20.0; a sketch of this normalization appears after this list.
During training, we use a base learning rate of lr_base = 0.0003, but use a warm-up learning rate scheme in which we first train with a learning rate of 0.8^2 · lr_base for 10K iterations followed by 0.8 · lr_base for another 10K iterations.
• Resnet 101 [13]: We use the feature map from the last layer of the "conv4" block. When operating in atrous mode, the stride size is 8 pixels, otherwise it is 16 pixels. Five additional convolutional layers with decaying spatial resolution are appended, which have depths 512, 512, 256, 256, 128, respectively. We have experimented with including the feature map from the last layer of the "conv5" block. With "conv5" features, the mAP numbers are very similar, but the computational costs are higher. Therefore we choose to use the last layer of the "conv4" block. During training, a base learning rate of 3e-4 is used. We use a learning rate warm-up strategy similar to the VGG one.
• Inception V2 [16]: We use Mixed 4c and Mixed 5c, appending four additional convolutional layers with decaying resolution with depths 512, 256, 256, 128, respectively. We use ReLU6 as the non-linear activation function for each conv layer. During training, we use a base learning rate of 0.002, followed by learning rate decay of 0.95 every 800k steps.
• Inception Resnet [38]: We use Mixed 6a and Conv2d 7b, appending three additional convolutional layers with decaying resolution with depths 512, 256, 128, respectively. We use ReLU as the non-linear activation function for each conv layer. During training, we use a base learning rate of 0.0005, followed by learning rate decay of 0.95 every 800k steps.
• MobileNet [14]: We use conv 11 and conv 13, appending four additional convolutional layers with decaying resolution with depths 512, 256, 256, 128, respectively. The non-linear activation function we use is ReLU6 and both batch norm parameters β and γ are trained. During training, we use a base learning rate of 0.004, followed by learning rate decay of 0.95 every 800k steps.
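The L2 normalization with a learnable scale from the VGG item above can be sketched as follows; the shapes are illustrative and the scale is shown as a plain constant rather than a trained variable.

```python
import numpy as np

def l2_normalize_with_scale(feature_map, s=20.0, eps=1e-12):
    """L2-normalizes features across channels at each spatial location,
    then multiplies by a (normally learnable) scale s.
    feature_map has shape (H, W, C)."""
    norm = np.sqrt(np.sum(feature_map ** 2, axis=-1, keepdims=True)) + eps
    return s * feature_map / norm

fmap = np.random.randn(38, 38, 512).astype(np.float32)
out = l2_normalize_with_scale(fmap)
# every spatial location now has feature norm == s
print(np.allclose(np.linalg.norm(out, axis=-1), 20.0, atol=1e-3))  # True
```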
# 4. Results
In this section we analyze the data that we have collected by training and benchmarking detectors, sweeping over model configurations as described in Section 3. Each such model configuration includes a choice of meta-architecture, feature extractor, stride (for Resnet and Inception Resnet) as well as input resolution and number of proposals (for Faster R-CNN and R-FCN).
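Schematically, the sweep enumerates a grid like the one below; the lists are illustrative (the real sweep also varies stride and omits configurations we could not run).

```python
from itertools import product

meta_architectures = ['ssd', 'faster_rcnn', 'rfcn']
feature_extractors = ['vgg', 'mobilenet', 'inception_v2', 'inception_v3',
                      'resnet101', 'inception_resnet_v2']
resolutions = [300, 600]
proposal_counts = [10, 50, 100, 300]  # only meaningful for proposal-based models

configs = []
for meta, extractor, res in product(meta_architectures, feature_extractors, resolutions):
    if meta == 'ssd':
        configs.append((meta, extractor, res, None))
    else:
        for n in proposal_counts:
            configs.append((meta, extractor, res, n))
print(len(configs))  # 108 illustrative configurations
```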
[Figure 2 scatterplot: overall mAP vs GPU time per image (0 to 1000 ms); labeled points along the frontier include SSD w/MobileNet (Lo Res), SSD w/Inception V2 (Lo Res), Faster R-CNN w/ResNet (Hi Res, 50 and 100 proposals), and Faster R-CNN w/Inception Resnet (Hi Res, 300 proposals, stride 8).]
Figure 2: Accuracy vs time, with marker shapes indicating meta-architecture and colors indicating feature extractor. Each (meta-architecture, feature extractor) pair can correspond to multiple points on this plot due to changing input sizes, stride, etc.
minival mAP: 19.3 | 22 | 32 | 30.4 | 35.7
test-dev mAP: 18.8 | 21.6 | 31.9 | 30.3 | 35.6

# Table 3: Test-dev performance of the "critical" points along our optimality frontier.
For each such model configuration, we measure timings on GPU, memory demand, number of parameters and floating point operations as described below. We make the entire table of results available in the supplementary material, noting that as of the time of this submission, we have included 147 model configurations; models for a small subset of experimental configurations (namely some of the high resolution SSD models) have yet to converge, so we have for now omitted them from analysis.
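A crude sketch of the per-image latency measurement, with a dummy function standing in for a real detector:

```python
import time
import numpy as np

def benchmark(detector_fn, image, warmup=10, iters=100):
    """Run a few warmup passes, then average wallclock time over many calls."""
    for _ in range(warmup):
        detector_fn(image)
    start = time.perf_counter()
    for _ in range(iters):
        detector_fn(image)
    return (time.perf_counter() - start) / iters * 1000.0  # milliseconds

fake_image = np.zeros((600, 600, 3), dtype=np.uint8)
fake_detector = lambda img: img.mean()  # stand-in for a real model
print(f"{benchmark(fake_detector, fake_image):.3f} ms/image")
```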
# 4.1. Analyses
# 4.1.1 Accuracy vs time
Figure 2 is a scatterplot visualizing the mAP of each of our model configurations, with colors representing feature extractors, and marker shapes representing meta-architecture. Running time per image ranges from tens of milliseconds to almost 1 second. Generally we observe that R-FCN and SSD models are faster on average while Faster R-CNN tends to lead to slower but more accurate models, requiring at least 100 ms per image. However, as we discuss below, Faster R-CNN models can be just as fast if we limit the number of regions proposed. We have also overlaid an imaginary "optimality frontier" representing points at which better accuracy can only be attained within this family of detectors by sacrificing speed. In the following, we highlight some of the key points along the optimality frontier as the best detectors to use and discuss the effect of the various model configuration options in isolation.
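The frontier can be extracted from the scatter by keeping every model that no other model beats in both speed and accuracy; a minimal sketch with made-up (time, mAP) pairs:

```python
def optimality_frontier(points):
    """Points are (time_ms, mAP) pairs. Returns the subset not dominated by
    any faster-or-equal, more-accurate model, sorted by time."""
    frontier = []
    best_map = float('-inf')
    for t, m in sorted(points):
        if m > best_map:
            frontier.append((t, m))
            best_map = m
    return frontier

models = [(30, 19.3), (42, 22.0), (85, 28.0), (106, 32.0), (240, 30.4), (900, 35.7)]
print(optimality_frontier(models))
# [(30, 19.3), (42, 22.0), (85, 28.0), (106, 32.0), (900, 35.7)]
```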
# 4.1.2 Critical points on the optimality frontier.
(Fastest: SSD w/MobileNet): On the fastest end of this optimality frontier, we see that SSD models with Inception v2 and Mobilenet feature extractors are most accurate of the fastest models. Note that if we ignore postprocessing costs, Mobilenet seems to be roughly twice as fast as Inception v2 while being slightly worse in accuracy.
[Figure 3 scatterplot: detector accuracy (mAP on COCO) vs feature extractor top-1 accuracy, by meta-architecture and feature extractor.]
Figure 3: Accuracy of detector (mAP on COCO) vs accuracy of feature extractor (as measured by top-1 accuracy on ImageNet-CLS). To avoid crowding the plot, we show only the low resolution models.
Figure 4: Accuracy stratified by object size, meta-architecture and feature extractor. We fix the image resolution to 300.
(Sweet Spot: R-FCN w/Resnet or Faster R-CNN w/Resnet and only 50 proposals): There is an "elbow" in the middle of the optimality frontier occupied by R-FCN models using Residual Network feature extractors which seem to strike the best balance between speed and accuracy among our model configurations. As we discuss below, Faster R-CNN w/Resnet models can attain similar speeds if we limit the number of proposals to 50. (Most Accurate: Faster R-CNN w/Inception Resnet at stride 8): Finally Faster R-CNN with dense output Inception Resnet models attain the best possible accuracy on our optimality frontier, achieving, to our knowledge, the state-of-the-art single model performance. However these models are slow, requiring nearly a second of processing time. The overall mAP numbers for these 5 models are shown in Table 3.
# 4.1.3 The effect of the feature extractor. | 1611.10012#43 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 44 | # 4.1.3 The effect of the feature extractor.
Intuitively, stronger performance on classiï¬cation should be positively correlated with stronger performance on COCO detection. To verify this, we investigate the relationship be- tween overall mAP of different models and the Top-1 Ima- genet classiï¬cation accuracy attained by the pretrained fea9
[Figure 5 scatterplot: overall mAP vs GPU time, with points colored by input resolution (300 vs 600).]
Figure 5: Effect of image resolution.
# 4.1.4 The effect of object size.

Figure 4 shows performance for different models on different sizes of objects. Not surprisingly, all methods do much better on large objects. We also see that even though SSD models typically have (very) poor performance on small objects, they are competitive with Faster RCNN and R-FCN on large objects, even outperforming these meta-architectures for the faster and more lightweight feature extractors.
# 4.1.5 The effect of image size.
It has been observed by other authors that input resolution can significantly impact detection accuracy. From our experiments, we observe that decreasing resolution by a factor of two in both dimensions consistently lowers accuracy (by 15.88% on average) but also reduces inference time by a relative factor of 27.4% on average.
One reason for this effect is that high resolution inputs allow for small objects to be resolved. Figure 5 compares detector performance on large objects against that on small objects, confirming that high resolution models lead to significantly better mAP results on small objects (by a factor of 2 in many cases) and somewhat better mAP results on large objects as well. We also see that strong performance on small objects implies strong performance on large objects in our models (but not vice-versa, as SSD models do well on large objects but not small).
# 4.1.6 The effect of the number of proposals.
For Faster R-CNN and R-FCN, we can adjust the number of proposals computed by the region proposal network. The authors in both papers use 300 boxes, however, our experiments suggest that this number can be significantly reduced without harming mAP (by much). In some feature extractors where the "box classifier" portion of Faster R-CNN is expensive, this can lead to significant computational savings. Figure 6a visualizes this trade-off curve for Faster R-CNN models with high resolution inputs for different feature extractors. We see that Inception Resnet, which has 35.4% mAP with 300 proposals, can still have surprisingly high accuracy (29% mAP) with only 10 proposals. The sweet spot is probably at 50 proposals, where we are able to obtain 96% of the accuracy of using 300 proposals while reducing running time by a factor of 3. While the computational savings are most pronounced for Inception Resnet, we see that similar tradeoffs hold for all feature extractors.
(a) FRCNN (b) RFCN
Figure 6: Effect of proposing increasing number of regions on mAP accuracy (solid lines) and GPU inference time (dotted). Surprisingly, for Faster R-CNN with Inception Resnet, we obtain 96% of the accuracy of using 300 proposals by using only 50 proposals, which reduces running time by a factor of 3.
Figure 7: GPU time (milliseconds) for each model, for image resolution of 300.
Figure 6b visualizes the same trade-off curves for R-FCN models and shows that the computational savings from using fewer proposals in the R-FCN setting are minimal; this is not surprising as the box classifier (the expensive part) is only run once per image. We see in fact that at 100 proposals, the speed and accuracy for Faster R-CNN models with ResNet becomes roughly comparable to that of equivalent R-FCN models which use 300 proposals in both mAP and GPU speed.
# 4.1.7 FLOPs analysis.

Figure 7 plots the GPU time for each model combination. However, this is very platform dependent. Counting FLOPs (multiply-adds) gives us a platform-independent measure of computation, which may or may not be linear with respect to actual running times due to a number of issues such as caching, I/O, hardware optimization, etc.
Figures 8a and 8b plot the FLOP count against observed wallclock times on the GPU and CPU respectively. Interestingly, we observe in the GPU plot (Figure 8a) that each model has a different average ratio of FLOPs to observed running time in milliseconds. For denser block models such as Resnet 101, FLOPs/GPU time is typically greater than 1, perhaps due to efficiency in caching. For Inception and Mobilenet models, this ratio is typically less than 1; we conjecture that this could be that factorization reduces FLOPs, but adds more overhead in memory I/O, or potentially that current GPU instructions (cuDNN) are more optimized for dense convolution.
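For reference, the multiply-add count of a single dense convolution, under the convention used here:

```python
def conv2d_multiply_adds(h_out, w_out, c_in, c_out, k=3):
    """Multiply-adds for a dense k x k convolution producing an
    h_out x w_out x c_out output map."""
    return h_out * w_out * c_out * (k * k * c_in)

# e.g. one 3x3, 512->512 conv on a 38x38 map:
print(conv2d_multiply_adds(38, 38, 512, 512))  # 3406823424, ~3.4B multiply-adds
```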
Figure 8: FLOPs vs time. (a) GPU. (b) CPU.
[Figure 9 bar chart: memory (MB) per model at resolution 300.]
Figure 9: Memory (Mb) usage for each model. Note that we measure total memory usage rather than peak memory usage. Moreover, we include all data points corresponding to the low-resolution models here. The error bars reflect variance in memory usage by using different numbers of proposals for the Faster R-CNN and R-FCN models (which leads to the seemingly considerable variance in the Faster-RCNN with Inception Resnet bar).
# 4.1.8 Memory analysis.
For memory benchmarking, we measure total usage rather than peak usage. Figures 10a and 10b plot memory usage against GPU and CPU wallclock times. Overall, we observe high correlation with running time, with larger and more powerful feature extractors requiring much more memory.

Figure 9 plots some of the same information in more detail, drilling down by meta-architecture and feature extractor selection. As with speed, Mobilenet is again the cheapest, requiring less than 1Gb (total) memory in almost all settings.
# 4.1.9 Good localization at .75 IOU means good localization at all IOU thresholds.

While slicing the data by object size leads to interesting insights, it is also worth noting that slicing data by IOU threshold does not give much additional information. Figure 11 shows in fact that both [email protected] and [email protected] performances are almost perfectly linearly correlated with mAP@[.5:.95]. Thus detectors that have poor performance at the higher IOU thresholds always also show poor performance at the lower IOU thresholds. This being said, we also observe that [email protected] is slightly more tightly correlated with mAP@[.5:.95] (with R^2 > .99), so if we were to replace the standard COCO metric with mAP at a single IOU threshold, we would likely choose IOU=.75.
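The correlation itself is computed between per-model mAP vectors in the standard way; a sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical per-model mAP values at the two metrics (not measured data).
map_5095 = np.array([15.0, 19.3, 22.0, 28.0, 32.0, 35.7])
map_75 = np.array([14.1, 19.0, 22.5, 29.4, 34.0, 38.0])
r = np.corrcoef(map_5095, map_75)[0, 1]
print(f"R^2 = {r**2:.4f}")
```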
Figure 10: Memory (Mb) vs time. (a) GPU. (b) CPU.
# 4.2. State-of-the-art detection on COCO
Finally, we briefly describe how we ensembled some of our models to achieve the current state-of-the-art performance on the 2016 COCO object detection challenge. Our model attains 41.3% mAP@[.5, .95] on the COCO test set and is an ensemble of five Faster R-CNN models based on Resnet and Inception Resnet feature extractors. This outperforms the previous best result (37.1% mAP@[.5, .95]) by MSRA, which used an ensemble of three Resnet-101 models [13]. Table 4 summarizes the performance of our model and highlights how it has improved on the state of the art across all COCO metrics. Most notably, our model achieves a relative improvement of nearly 60% on small-object recall over the previous best result. Even though this ensemble with state-of-the-art numbers could be viewed as an extreme point on the speed/accuracy trade-off curves (it requires ~50 end-to-end network evaluations per image), we have chosen to present this model in isolation, since it is not comparable to the "single model" results that we focused on in the rest of the paper. | 1611.10012#53
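The text above does not spell out how the five models' outputs are combined at inference time. One common approach — shown here purely as an assumed sketch, not the authors' confirmed method — is to pool all models' boxes and scores for a class and remove duplicates with greedy non-maximum suppression:

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list:
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]  # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        if rest.size == 0:
            break
        # IOU of box i against all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop near-duplicates of box i
    return keep

def ensemble_detections(per_model_boxes, per_model_scores):
    """Pool one class's detections from several models, then deduplicate."""
    boxes = np.concatenate(per_model_boxes)
    scores = np.concatenate(per_model_scores)
    keep = nms(boxes, scores)
    return boxes[keep], scores[keep]
```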
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |