doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable ⌀) | journal_ref (string, 8–194 chars, nullable ⌀) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1603.01025 | 39 | Farabet, Clément, Martini, Berin, Akselrod, Polina, Talay, Selçuk, LeCun, Yann, and Culurciello, Eugenio. Hardware accelerated convolutional neural networks for synthetic vision systems. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 257–260. IEEE, 2010.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Pereira, F., Burges, C.J.C., Bottou, L., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 25, pp. 1097–1105, 2012.
Lin, Zhouhan, Courbariaux, Matthieu, Memisevic, Roland, and Bengio, Yoshua. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015. | 1603.01025#39 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 40 | Neelakantan, Arvind, Vilnis, Luke, Le, Quoc V., Sutskever, Ilya, Kaiser, Lukasz, Kurach, Karol, and Martens, James. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.
Novikov, Alexander, Podoprikhin, Dmitry, Osokin, Anton, and Vetrov, Dmitry. Tensorizing neural networks. In Advances in Neural Information Processing Systems 28 (NIPS 2015), pp. 442–450, 2015.
Gautschi, Michael, Schaffner, Michael, Gurkaynak, Frank K., and Benini, Luca. A 65nm CMOS 6.4-to-29.2pJ/FLOP at 0.8V shared logarithmic floating point unit for acceleration of nonlinear function kernels in a tightly coupled processor cluster. In Proceedings of
Shin, Sungho, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point performance analysis of recurrent neural networks. In Proceedings of The 41st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016). IEEE, 2016.
| 1603.01025#40 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 41 |
Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Sung, Wonyong, Shin, Sungho, and Hwang, Kyuyeon. Resiliency of deep neural networks under quantization. arXiv preprint arXiv:1511.06488, 2015.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. In CVPR 2015, 2015.
Tokui, Seiya, Oono, Kenta, Hido, Shohei, and Clayton, Justin. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015. | 1603.01025#41 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 42 | Vanhoucke, Vincent, Senior, Andrew, and Mao, Mark Z. Improving the speed of neural networks on CPUs. In Proceedings of Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.
Zhang, Chen, Li, Peng, Sun, Guangyu, Guan, Yijin, Xiao, Bingjun, and Cong, Jason. Optimizing FPGA-based accelerator design for deep convolutional neural networks. In Proceedings of 23rd International Symposium on Field-Programmable Gate Arrays (FPGA2015), 2015. | 1603.01025#42 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1602.07868 | 0 | arXiv:1602.07868v3 [cs.LG] 4 Jun 2016
# Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks
Tim Salimans, OpenAI ([email protected]) and Diederik P. Kingma, OpenAI ([email protected])
# Abstract
We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.
# Introduction | 1602.07868#0 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 1 | # Introduction
Recent successes in deep learning have shown that neural networks trained by first-order gradient based optimization are capable of achieving amazing results in diverse domains like computer vision, speech recognition, and language modelling [5]. However, it is also well known that the practical success of first-order gradient based optimization is highly dependent on the curvature of the objective that is optimized. If the Hessian matrix of the objective at the optimum is ill-conditioned (has a high condition number), the problem is said to exhibit pathological curvature, and first-order gradient descent will have trouble making progress [18, 28]. The amount of curvature, and thus the success of our optimization, is not invariant to reparameterization [1]: there may be multiple equivalent ways of parameterizing the same model, some of which are much easier to optimize than others. Finding good ways of parameterizing neural networks is thus an important problem in deep learning. | 1602.07868#1 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 2 | While the architectures of neural networks differ widely across applications, they are typically mostly composed of conceptually simple computational building blocks sometimes called neurons: each such neuron computes a weighted sum over its inputs and adds a bias term, followed by the application of an elementwise nonlinear transformation. Improving the general optimizability of deep networks is a challenging task [4], but since many neural architectures share these basic building blocks, improving these building blocks improves the performance of a very wide range of model architectures and could thus be very useful.
Several authors have recently developed methods to improve the conditioning of the cost gradient for general neural network architectures. One approach is to explicitly left multiply the cost gradient with an approximate inverse of the Fisher information matrix, thereby obtaining an approximately whitened natural gradient. Such an approximate inverse can for example be obtained by using a Kronecker factored approximation to the Fisher matrix and inverting it (KFAC, [19]), by using an
approximate Cholesky factorization of the inverse Fisher matrix (FANG, [8]), or by whitening the input of each layer in the neural network (PRONG, [3]). | 1602.07868#2 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 3 | approximate Cholesky factorization of the inverse Fisher matrix (FANG, [8]), or by whitening the input of each layer in the neural network (PRONG, [3]).
Alternatively, we can use standard first order gradient descent without preconditioning, but change the parameterization of our model to give gradients that are more like the whitened natural gradients of these methods. For example, Raiko et al. [23] propose to transform the outputs of each neuron to have zero output and zero slope on average. They show that this transformation approximately diagonalizes the Fisher information matrix, thereby whitening the gradient, and that this leads to improved optimization performance. Another approach in this direction is batch normalization [11], a method where the output of each neuron (before application of the nonlinearity) is normalized by the mean and standard deviation of the outputs calculated over the examples in the minibatch. This reduces covariate shift of the neuron outputs and the authors suggest it also brings the Fisher matrix closer to the identity matrix. | 1602.07868#3 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 4 | Following this second approach to approximate natural gradient optimization, we propose a simple but general method, called weight normalization, for improving the optimizability of the weights of neural network models. The method is inspired by batch normalization, but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. In addition, the overhead imposed by our method is lower: no additional memory is required and the additional computation is negligible. The method shows encouraging results on a wide range of deep learning applications.
# 2 Weight Normalization
We consider standard artificial neural networks where the computation of each neuron consists in taking a weighted sum of input features, followed by an elementwise nonlinearity:
y = φ(w · x + b),    (1)
where w is a k-dimensional weight vector, b is a scalar bias term, x is a k-dimensional vector of input features, φ(·) denotes an elementwise nonlinearity such as the rectifier max(·, 0), and y denotes the scalar output of the neuron. | 1602.07868#4 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 5 | After associating a loss function to one or more neuron outputs, such a neural network is commonly trained by stochastic gradient descent in the parameters w, b of each neuron. In an effort to speed up the convergence of this optimization procedure, we propose to reparameterize each weight vector w in terms of a parameter vector v and a scalar parameter g and to perform stochastic gradient descent with respect to those parameters instead. We do so by expressing the weight vectors in terms of the new parameters using
w = (g / ||v||) v    (2)
where v is a k-dimensional vector, g is a scalar, and ||v|| denotes the Euclidean norm of v. This reparameterization has the effect of fixing the Euclidean norm of the weight vector w: we now have ||w|| = g, independent of the parameters v. We therefore call this reparameterization weight normalization (a minimal code sketch follows this entry). | 1602.07868#5 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
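As an aside on equation (2): the reparameterization is easy to state in code. The following is a minimal NumPy sketch of a single weight-normalized neuron, written for this document rather than taken from the paper's Theano reference implementation; the layer size, the ReLU nonlinearity, and the initialization scale are arbitrary example choices.

```python
import numpy as np

def weight_norm_forward(v, g, b, x):
    """Weight-normalized neuron: y = phi(w . x + b) with w = (g / ||v||) v, equation (2)."""
    w = (g / np.linalg.norm(v)) * v      # direction taken from v, norm fixed to g
    t = np.dot(w, x) + b                 # pre-activation
    return np.maximum(t, 0.0)            # phi = ReLU, an example nonlinearity

# toy example with k = 4 input features
rng = np.random.default_rng(0)
v = rng.normal(scale=0.05, size=4)       # direction parameters
g, b = 1.0, 0.0                          # scale and bias
x = rng.normal(size=4)                   # one input vector
print(weight_norm_forward(v, g, b, x))
```

During training, gradients are taken with respect to v and g rather than w, which is what distinguishes this from simply renormalizing w after each update.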
1602.07868 | 6 | The idea of normalizing the weight vector has been proposed before (e.g. [27]) but earlier work typically still performed optimization in the w-parameterization, only applying the normalization after each step of stochastic gradient descent. This is fundamentally different from our approach: we propose to explicitly reparameterize the model and to perform stochastic gradient descent in the new parameters v, g directly. Doing so improves the conditioning of the gradient and leads to improved convergence of the optimization procedure: By decoupling the norm of the weight vector (g) from the direction of the weight vector (v/||v||), we speed up convergence of our stochastic gradient descent optimization, as we show experimentally in section 5.
Instead of working with g directly, we may also use an exponential parameterization for the scale, i.e. g = e^s, where s is a log-scale parameter to learn by stochastic gradient descent. Parameterizing the g parameter in the log-scale is more intuitive and more easily allows g to span a wide range of different magnitudes. Empirically, however, we did not find this to be an advantage. In our experiments, the eventual test-set performance was not significantly better or worse than the results with directly learning g in its original parameterization, and optimization was slightly slower.
# 2.1 Gradients | 1602.07868#6 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 7 | # 2.1 Gradients
Training a neural network in the new parameterization is done using standard stochastic gradient descent methods. Here we differentiate through (2) to obtain the gradient of a loss function L with respect to the new parameters v, g. Doing so gives
∇_g L = (∇_w L · v) / ||v|| ,    ∇_v L = (g / ||v||) ∇_w L − (g ∇_g L / ||v||²) v ,    (3)
where ∇_w L is the gradient with respect to the weights w as used normally.
Backpropagation using weight normalization thus only requires a minor modification to the usual backpropagation equations, and is easily implemented using standard neural network software. We provide reference implementations for Theano at https://github.com/TimSalimans/weight_norm. Unlike with batch normalization, the expressions above are independent of the minibatch size and thus cause only minimal computational overhead. (A code sketch of these gradient expressions follows this entry.)
An alternative way to write the gradient is
∇_v L = (g / ||v||) M_w ∇_w L ,    with    M_w = I − w wᵀ / ||w||² ,    (4) | 1602.07868#7 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
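To make equations (3) and (4) concrete, the sketch below maps a gradient with respect to w back to gradients with respect to v and g, and checks that the two stated forms agree. It is an illustrative NumPy reimplementation of the formulas above, not the authors' code.

```python
import numpy as np

def weight_norm_grads(grad_w, v, g):
    """Map the gradient wrt w to gradients wrt (v, g), following equation (3)."""
    v_norm = np.linalg.norm(v)
    grad_g = np.dot(grad_w, v) / v_norm
    grad_v = (g / v_norm) * grad_w - (g * grad_g / v_norm**2) * v
    return grad_v, grad_g

rng = np.random.default_rng(1)
v, grad_w, g = rng.normal(size=5), rng.normal(size=5), 2.0
grad_v, grad_g = weight_norm_grads(grad_w, v, g)

# equation (4): grad_v = (g / ||v||) M_w grad_w, with M_w = I - w w^T / ||w||^2
w = (g / np.linalg.norm(v)) * v
M_w = np.eye(5) - np.outer(w, w) / np.dot(w, w)
assert np.allclose(grad_v, (g / np.linalg.norm(v)) * (M_w @ grad_w))
print(grad_g, grad_v)
```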
1602.07868 | 8 | An alternative way to write the gradient is
∇_v L = (g / ||v||) M_w ∇_w L ,    with    M_w = I − w wᵀ / ||w||² ,    (4)
where M_w is a projection matrix that projects onto the complement of the w vector. This shows that weight normalization accomplishes two things: it scales the weight gradient by g/||v||, and it projects the gradient away from the current weight vector. Both effects help to bring the covariance matrix of the gradient closer to identity and benefit optimization, as we explain below. | 1602.07868#8 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 9 | Due to projecting away from w, the norm of v grows monotonically with the number of weight updates when learning a neural network with weight normalization using standard gradient descent without momentum: Let v′ = v + Δv denote our parameter update, with Δv ∝ ∇_v L (steepest ascent/descent); then Δv is necessarily orthogonal to the current weight vector w, since we project away from it when calculating ∇_v L (equation 4). Since v is proportional to w, the update is thus also orthogonal to v and increases its norm by the Pythagorean theorem. Specifically, if ||Δv||/||v|| = c, the new weight vector will have norm ||v′|| = √(||v||² + c²||v||²) = √(1 + c²) ||v|| ≥ ||v||. The rate of increase will depend on the variance of the weight gradient. If our gradients are noisy, c will be high and the norm of v will quickly increase, which in turn will decrease the scaling factor g/||v||. If the norm of the gradients is small, we get √(1 + c²) ≈ 1, and the norm of v will stop increasing (a small numerical check of this identity follows this entry). Using this mechanism, the scaled gradient self-stabilizes its norm. This property | 1602.07868#9 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
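The norm-growth argument in the chunk above rests on the identity that an update orthogonal to v with relative size c scales ||v|| by √(1 + c²). The snippet below is a small self-contained numerical check of that identity, added for illustration and not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.normal(size=6)

# build an update orthogonal to v with ||dv|| / ||v|| = c
c = 0.3
r = rng.normal(size=6)
dv = r - (np.dot(r, v) / np.dot(v, v)) * v          # project r away from v
dv *= c * np.linalg.norm(v) / np.linalg.norm(dv)    # rescale to relative size c

v_new = v + dv
print(np.linalg.norm(v_new) / np.linalg.norm(v))    # ~ sqrt(1 + c**2) = 1.044...
```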
1602.07868 | 11 | Empirically, we find that the ability to grow the norm ||v|| makes optimization of neural networks with weight normalization very robust to the value of the learning rate: If the learning rate is too large, the norm of the unnormalized weights grows quickly until an appropriate effective learning rate is reached. Once the norm of the weights has grown large with respect to the norm of the updates, the effective learning rate stabilizes. Neural networks with weight normalization therefore work well with a much wider range of learning rates than when using the normal parameterization. It has been observed that neural networks with batch normalization also have this property [11], which can also be explained by this analysis.
By projecting the gradient away from the weight vector w, we also eliminate the noise in that direction. If the covariance matrix of the gradient with respect to w is given by C, the covariance matrix of the gradient in v is given by D = (g²/||v||²) M_w C M_w. Empirically, we find that w is often (close to) a dominant eigenvector of the covariance matrix C: removing that eigenvector then gives a new covariance matrix D that is closer to the identity matrix, which may further speed up learning.
# 2.2 Relation to batch normalization | 1602.07868#11 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 12 | # 2.2 Relation to batch normalization
An important source of inspiration for this reparameterization is batch normalization [11], which normalizes the statistics of the pre-activation t for each minibatch as
t′ = (t − µ[t]) / σ[t] ,
with µ[t], σ[t] the mean and standard deviation of the pre-activations t = v · x. For the special case where our network only has a single layer, and the input features x for that layer are whitened (independently distributed with zero mean and unit variance), these statistics are given by µ[t] = 0 and σ[t] = ||v||. In that case, normalizing the pre-activations using batch normalization is equivalent to normalizing the weights using weight normalization (a small numerical check follows this entry). | 1602.07868#12 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
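For the single-layer, whitened-input case discussed in the chunk above, the claimed statistics µ[t] = 0 and σ[t] = ||v|| are easy to verify empirically. The following sketch draws whitened inputs and compares the minibatch statistics of t = v · x with ||v||; it illustrates the argument and is not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 8, 100_000                     # feature dimension, minibatch size
v = rng.normal(size=k)
x = rng.normal(size=(n, k))           # whitened inputs: zero mean, unit variance, independent

t = x @ v                             # pre-activations over the minibatch
print(t.mean())                       # close to 0, i.e. mu[t]
print(t.std(), np.linalg.norm(v))     # sigma[t] is close to ||v||
```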
1602.07868 | 13 | Convolutional neural networks usually have much fewer weights than pre-activations, so normalizing the weights is often much cheaper computationally. In addition, the norm of v is non-stochastic, while the minibatch mean µ[t] and variance σ²[t] can in general have high variance for small minibatch size. Weight normalization can thus be viewed as a cheaper and less noisy approximation to batch normalization. Although exact equivalence does not usually hold for deeper architectures, we still find that our weight normalization method provides much of the speed-up of full batch normalization. In addition, its deterministic nature and independence of the minibatch input also mean that our method can be applied more easily to models like RNNs and LSTMs, as well as noise-sensitive applications like reinforcement learning.
# 3 Data-Dependent Initialization of Parameters | 1602.07868#13 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 14 | # 3 Data-Dependent Initialization of Parameters
Besides a reparameterization effect, batch normalization also has the benefit of fixing the scale of the features generated by each layer of the neural network. This makes the optimization robust against parameter initializations for which these scales vary across layers. Since weight normalization lacks this property, we find it is important to properly initialize our parameters. We propose to sample the elements of v from a simple distribution with a fixed scale, which is in our experiments a normal distribution with mean zero and standard deviation 0.05. Before starting training, we then initialize the b and g parameters to fix the minibatch statistics of all pre-activations in our network, just like in batch normalization, but only for a single minibatch of data and only during initialization. This can be done efficiently by performing an initial feedforward pass through our network for a single minibatch of data X, using the following computation at each neuron:
t = (v · x) / ||v|| ,    and    y = φ( (t − µ[t]) / σ[t] ) ,    (5)
where µ[t] and σ[t] are the mean and standard deviation of the pre-activation t over the examples in the minibatch. We can then initialize the neuron's bias b and scale g as | 1602.07868#14 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 15 | g ← 1 / σ[t] ,    b ← −µ[t] / σ[t] ,    (6)
so that y = φ(w · x + b). Like batch normalization, this method ensures that all features initially have zero mean and unit variance before application of the nonlinearity. With our method this only holds for the minibatch we use for initialization, and subsequent minibatches may have slightly different statistics, but experimentally we find this initialization method to work well (a sketch of the procedure follows this entry). The method can also be applied to networks without weight normalization, simply by doing stochastic gradient optimization on the parameters w directly, after initialization in terms of v and g: this is what we compare to in section 5. Independently from our work, this type of initialization was recently proposed by different authors [20, 14] who found such data-based initialization to work well for use with the standard parameterization in terms of w.
The downside of this initialization method is that it can only be applied in cases similar to those where batch normalization is applicable. For models with recursion, such as RNNs and LSTMs, we will have to resort to standard initialization methods.
# 4 Mean-only Batch Normalization | 1602.07868#15 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
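Equations (5) and (6) describe a concrete procedure: run a single minibatch through the network, measure the mean and standard deviation of each pre-activation, and set g and b so the outputs are standardized. The sketch below applies this to one weight-normalized layer in NumPy, with v drawn from N(0, 0.05²) as stated above; it is a simplified, single-layer illustration rather than the paper's reference implementation.

```python
import numpy as np

def data_dependent_init(X, k_out, rng):
    """Initialize (v, g, b) of one weight-normalized layer from a minibatch X (eqs. 5-6)."""
    v = rng.normal(scale=0.05, size=(k_out, X.shape[1]))          # fixed-scale init of v
    t = X @ (v / np.linalg.norm(v, axis=1, keepdims=True)).T      # t = v . x / ||v||
    mu, sigma = t.mean(axis=0), t.std(axis=0)                     # minibatch statistics
    return v, 1.0 / sigma, -mu / sigma                            # g and b from equation (6)

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 32))                                    # one minibatch of 100 examples
v, g, b = data_dependent_init(X, k_out=16, rng=rng)

# after initialization, the pre-activations are standardized on this minibatch
w = g[:, None] * v / np.linalg.norm(v, axis=1, keepdims=True)
t = X @ w.T + b
print(t.mean(axis=0).round(6), t.std(axis=0).round(6))            # ~0 and ~1 per unit
```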
1602.07868 | 16 | # 4 Mean-only Batch Normalization
Weight normalization, as introduced in section 2, makes the scale of neuron activations approximately independent of the parameters v. Unlike with batch normalization, however, the means of the neuron activations still depend on v. We therefore also explore the idea of combining weight normalization with a special version of batch normalization, which we call mean-only batch normalization: With this normalization method, we subtract out the minibatch means like with full batch normalization,
but we do not divide by the minibatch standard deviations. That is, we compute neuron activations using
t̂ = t − µ[t] + b ,    (7)    where w is the weight vector, parameterized using weight normalization, and µ[t] is the minibatch mean of the pre-activation t. During training, we keep a running average of the minibatch mean which we substitute in for µ[t] at test time (a minimal sketch follows this entry).
The gradient of the loss with respect to the pre-activation t is calculated as | 1602.07868#16 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
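Mean-only batch normalization as in equation (7) subtracts the minibatch mean of the pre-activation and keeps a running average of that mean for use at test time. The sketch below is a minimal NumPy rendering of this forward pass for one layer; the momentum value of 0.9 for the running average is an assumed example and is not specified in this chunk.

```python
import numpy as np

class MeanOnlyBatchNorm:
    """t_hat = t - mu[t] + b, with a running mean substituted at test time (eq. 7)."""

    def __init__(self, num_units, momentum=0.9):
        self.b = np.zeros(num_units)
        self.running_mean = np.zeros(num_units)
        self.momentum = momentum

    def __call__(self, t, training=True):
        if training:
            mu = t.mean(axis=0)                         # minibatch mean of the pre-activation
            self.running_mean = (self.momentum * self.running_mean
                                 + (1.0 - self.momentum) * mu)
        else:
            mu = self.running_mean                      # running average at test time
        return t - mu + self.b

rng = np.random.default_rng(5)
layer = MeanOnlyBatchNorm(num_units=3)
t = rng.normal(loc=2.0, size=(100, 3))                  # pre-activations for one minibatch
print(layer(t, training=True).mean(axis=0))             # ~ b = 0 after centering
```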
1602.07868 | 17 | The gradient of the loss with respect to the pre-activation t is calculated as
∇_t L = ∇_t̂ L − µ[∇_t̂ L] , where µ[·] denotes once again the operation of taking the minibatch mean. Mean-only batch normalization thus has the effect of centering the gradients that are backpropagated. This is a comparatively cheap operation, and the computational overhead of mean-only batch normalization is thus lower than for full batch normalization. In addition, this method causes less noise during training, and the noise that is caused is more gentle as the law of large numbers ensures that µ[t] and µ[∇_t̂ L] are approximately normally distributed. Thus, the added noise has much lighter tails than the highly kurtotic noise caused by the minibatch estimate of the variance used in full batch normalization. As we show in section 5.1, this leads to improved accuracy at test time.
# 5 Experiments
We experimentally validate the usefulness of our method using four different models for varied applications in supervised image recognition, generative modelling, and deep reinforcement learning.
# 5.1 Supervised Classification: CIFAR-10 | 1602.07868#17 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 18 | # 5.1 Supervised Classification: CIFAR-10
To test our reparameterization method for the application of supervised classification, we consider the CIFAR-10 data set of natural images [15]. The model we are using is based on the ConvPool-CNN-C architecture of [26], with some small modifications: we replace the first dropout layer by a layer that adds Gaussian noise, we expand the last hidden layer from 10 units to 192 units, and we use 2 × 2 max-pooling, rather than 3 × 3. The only hyperparameter that we actively optimized (the standard deviation of the Gaussian noise) was chosen to maximize the performance of the network on a holdout set of 10000 examples, using the standard parameterization (no weight normalization or batch normalization). A full description of the resulting architecture is given in table A in the supplementary material. | 1602.07868#18 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 19 | We train our network for CIFAR-10 using Adam [12] for 200 epochs, with a fixed learning rate and momentum of 0.9 for the first 100 epochs. For the last 100 epochs we set the momentum to 0.5 and linearly decay the learning rate to zero. We use a minibatch size of 100. We evaluate 5 different parameterizations of the network: 1) the standard parameterization, 2) using batch normalization, 3) using weight normalization, 4) using weight normalization combined with mean-only batch normalization, 5) using mean-only batch normalization with the normal parameterization. The network parameters are initialized using the scheme of section 3 such that all cases have identical parameters starting out. For each case we pick the optimal learning rate in {0.0003, 0.001, 0.003, 0.01} (a schematic of this schedule follows this entry). The resulting error curves during training can be found in figure 1: both weight normalization and batch normalization provide a significant speed-up over the standard parameterization. Batch normalization makes slightly more progress per epoch than weight normalization early on, although this is partly offset by the higher computational cost: with our implementation, training with batch | 1602.07868#19 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
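The training protocol stated in the chunk above (a fixed learning rate with momentum 0.9 for the first 100 epochs, then momentum 0.5 with the learning rate decayed linearly to zero) can be written as a small schedule function. This is our illustrative reading of the text, not code from the paper.

```python
def schedule(epoch, base_lr, total_epochs=200):
    """Per-epoch learning rate and momentum, following the protocol in the text:
    first half: fixed lr and momentum 0.9; second half: momentum 0.5 and the
    learning rate decayed linearly to zero."""
    half = total_epochs // 2
    if epoch < half:
        return base_lr, 0.9
    # remaining epochs: linear decay from base_lr down to zero
    frac = (total_epochs - epoch) / (total_epochs - half)
    return base_lr * frac, 0.5

for epoch in (0, 99, 100, 150, 199):
    print(epoch, schedule(epoch, base_lr=0.003))
```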
1602.07868 | 20 | normalization makes slightly more progress per epoch than weight normalization early on, although this is partly offset by the higher computational cost: with our implementation, training with batch normalization was about 16% slower compared to the standard parameterization. In contrast, weight normalization was not noticeably slower. During the later stage of training, weight normalization and batch normalization seem to optimize at about the same speed, with the normal parameterization (with or without mean-only batch normalization) still lagging behind. | 1602.07868#20 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 21 | After optimizing the network for 200 epochs using the different parameterizations, we evaluate their performance on the CIFAR-10 test set. The results are summarized in table 2: weight normalization, the normal parameterization, and mean-only batch normalization have similar test accuracy (≈ 8.5% error). Batch normalization does significantly better at 8.05% error. Mean-only batch normalization combined with weight normalization has the best performance at 7.31% test error, and interestingly does much better than mean-only batch normalization combined with the normal parameterization: This suggests that the noise added by batch normalization can be useful for regularizing the network,
[Figure 1 plot area: training error versus training epochs for the normal param., weight norm., WN + mean-only BN, and mean-only BN runs.] | 1602.07868#21 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
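The chunk above singles out mean-only batch normalization, with and without weight normalization. A minimal numpy sketch of that variant: each feature is centered by its minibatch mean and a learned bias is added, but, unlike full batch normalization, there is no division by the minibatch standard deviation. Names are ours, and the use of a running mean at test time is our assumption rather than something stated in this chunk.

```python
import numpy as np

def mean_only_batchnorm(t, beta):
    """Mean-only batch normalization over a minibatch of pre-activations.

    t    : pre-activations, shape (batch, features)
    beta : learned bias, shape (features,)
    """
    mu = t.mean(axis=0, keepdims=True)   # minibatch mean per feature
    # At test time a running average of mu would typically replace the
    # minibatch statistic (our assumption for this sketch).
    return t - mu + beta

rng = np.random.default_rng(0)
t = rng.normal(loc=3.0, size=(100, 4))
out = mean_only_batchnorm(t, beta=np.zeros(4))
print(out.mean(axis=0))                  # ~0: each feature is centered
```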
1602.07868 | 22 | [Figure 1 plot area: training error versus training epochs for the normal param., weight norm., WN + mean-only BN, and mean-only BN runs.]
Model and test error:
Maxout [6]: 11.68%
Network in Network [17]: 10.41%
Deeply Supervised [16]: 9.6%
ConvPool-CNN-C [26]: 9.31%
ALL-CNN-C [26]: 9.08%
our CNN, mean-only B.N.: 8.52%
our CNN, weight norm.: 8.46%
our CNN, normal param.: 8.43%
our CNN, batch norm.: 8.05%
ours, W.N. + mean-only B.N.: 7.31%
Figure 1: Training error for CIFAR-10 using different network parameterizations. For weight normalization, batch normalization, and mean-only batch normalization we show results using Adam with a learning rate of 0.003. For the normal parameterization we instead use 0.0003 which works best in this case. For the last 100 epochs the learning rate is linearly decayed to zero.
Figure 2: Classification results on CIFAR-10 without data augmentation. | 1602.07868#22 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 23 | Figure 2: Classification results on CIFAR-10 without data augmentation.
but that the reparameterization provided by weight normalization or full batch normalization is also needed for optimal results. We hypothesize that the substantial improvement by mean-only B.N. with weight normalization over regular batch normalization is due to the distribution of the noise caused by the normalization method during training: for mean-only batch normalization the minibatch mean has a distribution that is approximately Gaussian, while the noise added by full batch normalization during training has much higher kurtosis. As far as we are aware, the result with mean-only batch normalization combined with weight normalization represents the state-of-the-art for CIFAR-10 among methods that do not use data augmentation.
# 5.2 Generative Modelling: Convolutional VAE
Next, we test the effect of weight normalization applied to deep convolutional variational auto-encoders (CVAEs) [13, 24, 25], trained on the MNIST data set of images of handwritten digits and the CIFAR-10 data set of small natural images. | 1602.07868#23 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 24 | Variational auto-encoders are generative models that explain the data vector x as arising from a set of latent variables z, through a joint distribution of the form p(z, x) = p(z)p(x|z), where the decoder p(x|z) is specified using a neural network. A lower bound on the log marginal likelihood log p(x) can be obtained by approximately inferring the latent variables z from the observed data x using an encoder distribution q(z|x) that is also specified as a neural network. This lower bound is then optimized to fit the model to the data. | 1602.07868#24 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
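The chunk above describes the variational lower bound in words. Written out in its standard form (our notation), the bound that is optimized is:

```latex
\log p(x) \;\ge\; \mathcal{L}(x)
  \;=\; \mathbb{E}_{q(z\mid x)}\!\left[\log p(x\mid z)\right]
  \;-\; D_{\mathrm{KL}}\!\left(q(z\mid x)\,\|\,p(z)\right)
```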
1602.07868 | 25 | We follow a similar implementation of the CVAE as in [25] with some modifications, mainly that the encoder and decoder are parameterized with ResNet [9] blocks, and that the diagonal posterior is replaced with auto-regressive variational inference1. For MNIST, the encoder consists of 3 sequences of two ResNet blocks each, the first sequence acting on 16 feature maps, the others on 32 feature maps. The first two sequences are followed by a 2-times subsampling operation implemented using 2 × 2 stride, while the third sequence is followed by a fully connected layer with 450 units. The decoder has a similar architecture, but with reversed direction. For CIFAR-10, we used a neural architecture with ResNet units and multiple intermediate stochastic layers1. We used Adamax [12] with α = 0.002 for optimization, in combination with Polyak averaging [22] in the form of an exponential moving average that averages parameters over approximately 10 epochs.
In figure 3, we plot the test-set lower bound as a function of number of training epochs, including error bars based on multiple different random seeds for initializing parameters. As can be seen, the parameterization with weight normalization has lower variance and converges to a better optimum. We observe similar results across different hyper-parameter settings. | 1602.07868#25 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
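The chunk above mentions Polyak averaging in the form of an exponential moving average of the parameters over roughly 10 epochs. A minimal sketch of such an averager follows; the class name and the decay value are ours, chosen only for illustration.

```python
import numpy as np

class EMA:
    """Exponential moving average of parameters (Polyak averaging).

    With decay d the average has an effective horizon of roughly 1/(1-d)
    updates; matching the text's ~10 epochs would mean picking d from the
    number of updates per epoch (the value below is illustrative).
    """
    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.avg = {k: v.copy() for k, v in params.items()}

    def update(self, params):
        for k, v in params.items():
            self.avg[k] = self.decay * self.avg[k] + (1.0 - self.decay) * v

params = {"w": np.zeros(3)}
ema = EMA(params)
for step in range(1000):
    params["w"] += 0.01           # stand-in for an optimizer step
    ema.update(params)
print(params["w"], ema.avg["w"])  # the averaged parameters lag the raw ones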
1602.07868 | 26 | # 1Manuscript in preparation
[Figure 3 plot area: two panels, "Convolutional VAE on MNIST" and "Convolutional VAE on CIFAR-10", plotting the test-set bound on the marginal log likelihood against training epochs for the normal parameterization and for weight normalization.]
Figure 3: Marginal log likelihood lower bound on the MNIST (top) and CIFAR-10 (bottom) test sets for a convolutional VAE during training, for both the standard implementation as well as our modiï¬cation with weight normalization. For MNIST, we provide standard error bars to indicate variance based on different initial random seeds.
# 5.3 Generative Modelling: DRAW | 1602.07868#26 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 27 | # 5.3 Generative Modelling: DRAW
Next, we consider DRAW, a recurrent generative model by [7]. DRAW is a variational auto-encoder with generative model p(z)p(x|z) and encoder q(z|x), similar to the model in section 5.2, but with both the encoder and decoder consisting of a recurrent neural network comprised of Long Short-Term Memory (LSTM) [10] units. LSTM units consist of a memory cell with additive dynamics, combined with input, forget, and output gates that determine which information flows in and out of the memory. The additive dynamics enables learning of long-range dependencies in the data.
At each time step of the model, DRAW uses the same set of weight vectors to update the cell states of the LSTM units in its encoder and decoder. Because of the recurrent nature of this process it is not clear how batch normalization could be applied to this model: Normalizing the cell states diminishes their ability to pass through information. Fortunately, weight normalization can be applied trivially to the weight vectors of each LSTM unit, and we find this to work well empirically. | 1602.07868#27 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
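The chunk above notes that weight normalization is applied per weight vector of each LSTM unit. A minimal numpy sketch of that idea: normalize each row of a stacked input-to-gate weight matrix independently, so every unit/gate keeps its own length parameter. The helper and shapes are ours, not the paper's implementation.

```python
import numpy as np

def row_weight_norm(V, g):
    """Weight normalization applied independently to each weight vector (row):
    W[i] = g[i] * V[i] / ||V[i]||, as one would do for the input-to-gate
    matrices of an LSTM."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    return (g[:, None] / norms) * V

rng = np.random.default_rng(0)
hidden, inputs = 4, 6
# Stacked gate matrices (input, forget, output, candidate): 4*hidden rows.
V = rng.normal(size=(4 * hidden, inputs))
g = np.ones(4 * hidden)
W = row_weight_norm(V, g)
print(np.linalg.norm(W, axis=1))   # every row has norm g[i] = 1
```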
1602.07868 | 28 | We take the Theano implementation of DRAW provided at https://github.com/jbornschein/draw and use it to model the MNIST data set of handwritten digits. We then make a single modification to the model: we apply weight normalization to all weight vectors. As can be seen in figure 4, this significantly speeds up convergence of the optimization procedure, even without modifying the initialization method and learning rate that were tuned for use with the normal parameterization.
[Figure 4 plot area: bound on the marginal log likelihood versus training epochs (1-100) for the normal parameterization and for weight normalization.]
Figure 4: Marginal log likelihood lower bound on the MNIST test set for DRAW during training, for both the standard implementation as well as our modification with weight normalization. 100 epochs is not sufficient for convergence for this model, but the implementation using weight normalization clearly makes progress much more quickly than with the standard parameterization.
# 5.4 Reinforcement Learning: DQN | 1602.07868#28 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 29 | Next we apply weight normalization to the problem of Reinforcement Learning for playing games on the Atari Learning Environment [2]. The approach we use is the Deep Q-Network (DQN) proposed by [21]. This is an application for which batch normalization is not well suited: the noise introduced by estimating the minibatch statistics destabilizes the learning process. We were not able to get batch normalization to work for DQN without using an impractically large minibatch size. In contrast, weight normalization is easy to apply in this context, as is the initialization method of section 3. Stochastic gradient learning is performed using Adamax [12] with momentum of 0.5. We search for optimal learning rates in {0.0001, 0.0003, 0.001, 0.003}, generally finding 0.0003 to work well with weight normalization and 0.0001 to work well for the normal parameterization. We also use a larger minibatch size (64) which we found to be more efficient on our hardware (Amazon Elastic Compute Cloud g2.2xlarge GPU instance). Apart from these changes we follow [21] as closely as possible in terms of parameter settings and evaluation methods. However, we use a | 1602.07868#29 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
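The DQN chunk above fixes the method only at the level of hyperparameters. For convenience, those stated settings can be collected into a single configuration; the key names below are ours and anything not stated in the text is omitted.

```python
# Hyperparameters stated in the text for the DQN experiments (key names ours).
dqn_config = {
    "optimizer": "Adamax",
    "momentum": 0.5,
    "learning_rates_searched": [0.0001, 0.0003, 0.001, 0.003],
    "best_lr_weight_norm": 0.0003,
    "best_lr_normal_param": 0.0001,
    "minibatch_size": 64,
}
print(dqn_config)
```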
1602.07868 | 31 | Figure 5 shows the training curves obtained using DQN with the standard parameterization and with weight normalization on Space Invaders. Using weight normalization the algorithm progresses more quickly and reaches a better final result. Table 6 shows the final evaluation scores obtained by DQN with weight normalization for four games: on average weight normalization improves the performance of DQN.
[Figure 5 plot area: evaluation score versus training epochs for DQN on Space Invaders, for the normal parameterization and for weight normalization.]
Figure 5: Evaluation scores for Space Invaders obtained by DQN after each epoch of training, for both the standard parameterization and using weight normalization. Learning rates for both cases were selected to maximize the highest achieved test score.
Game scores (normal / weightnorm / Mnih):
Breakout: 410 / 403 / 401
Enduro: 1,250 / 1,448 / 302
Seaquest: 7,188 / 7,375 / 5,286
Space Invaders: 1,779 / 2,179 / 1,975 | 1602.07868#31 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 32 | Figure 6: Maximum evaluation scores obtained by DQN, using either the normal parameterization or using weight normalization. The scores indicated by Mnih et al. are those reported by [21]: Our normal parameterization is approximately equivalent to their method. Differences in scores may be caused by small differences in our implementation. Specifically, the difference in our score on Enduro and that reported by [21] might be due to us not using a play-time limit during evaluation.
# 6 Conclusion
We have presented weight normalization, a simple reparameterization of the weight vectors in a neural network that accelerates the convergence of stochastic gradient descent optimization. Weight normalization was applied to four different models in supervised image recognition, generative modelling, and deep reinforcement learning, showing a consistent advantage across applications. The reparameterization method is easy to apply, has low computational overhead, and does not introduce dependencies between the examples in a minibatch, making it our default choice in the development of new deep learning architectures.
# Acknowledgments
We thank John Schulman for helpful comments on an earlier draft of this paper.
# References
[1] S. Amari. Neural learning in structured parameter spaces - natural Riemannian gradient. In Advances in Neural Information Processing Systems, pages 127-133. MIT Press, 1997. | 1602.07868#32 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 33 | [2] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, 06 2013.
[3] G. Desjardins, K. Simonyan, R. Pascanu, et al. Natural neural networks. In Advances in Neural Information Processing Systems, pages 2062-2070, 2015.
[4] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics, pages 249-256, 2010.
[5] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. Book in preparation for MIT Press, 2016.
[6] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, 2013.
[7] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015. | 1602.07868#33 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 34 | [8] R. Grosse and R. Salakhudinov. Scaling up natural gradient by sparsely factorizing the inverse Fisher matrix. In ICML, pages 2304-2313, 2015.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[10] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
[11] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[12] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[13] D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. Proceedings of the 2nd International Conference on Learning Representations, 2013. | 1602.07868#34 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 35 | [13] D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. Proceedings of the 2nd International Conference on Learning Representations, 2013.
[14] P. Krähenbühl, C. Doersch, J. Donahue, and T. Darrell. Data-dependent initializations of convolutional neural networks. arXiv preprint arXiv:1511.06856, 2015.
[15] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images, 2009.
[16] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In Deep Learning and Representation Learning Workshop, NIPS, 2014.
[17] M. Lin, C. Qiang, and S. Yan. Network in network. In ICLR: Conference Track, 2014.
[18] J. Martens. Deep learning via hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 735-742, 2010.
[19] J. Martens and R. Grosse. Optimizing neural networks with kronecker-factored approximate curvature. arXiv preprint arXiv:1503.05671, 2015. | 1602.07868#35 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 36 | [20] D. Mishkin and J. Matas. All you need is a good init. arXiv preprint arXiv:1511.06422, 2015.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[22] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.
[23] T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons. In International Conference on Artificial Intelligence and Statistics, pages 924-932, 2012.
[24] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278-1286, 2014. | 1602.07868#36 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 37 | [25] T. Salimans, D. P. Kingma, and M. Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In ICML, 2015.
[26] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR Workshop Track, 2015.
[27] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545-560, 2005.
[28] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, pages 1139-1147, 2013.
# A Neural network architecture for CIFAR-10 experiments | 1602.07868#37 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07868 | 38 | # A Neural network architecture for CIFAR-10 experiments
Layer type (# channels, x-y dimension):
raw RGB input (3, 32)
ZCA whitening (3, 32)
Gaussian noise σ = 0.15 (3, 32)
3 × 3 conv leaky ReLU (96, 32)
3 × 3 conv leaky ReLU (96, 32)
3 × 3 conv leaky ReLU (96, 32)
2 × 2 max pool, str. 2 (96, 16)
dropout with p = 0.5 (96, 16)
3 × 3 conv leaky ReLU (192, 16)
3 × 3 conv leaky ReLU (192, 16)
3 × 3 conv leaky ReLU (192, 16)
2 × 2 max pool, str. 2 (192, 8)
dropout with p = 0.5 (192, 8)
3 × 3 conv leaky ReLU (192, 6)
1 × 1 conv leaky ReLU (192, 6)
1 × 1 conv leaky ReLU (192, 6)
global average pool (192, 1)
softmax output (10, 1)
Table 1: Neural network architecture for CIFAR-10.
| 1602.07868#38 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | [
{
"id": "1512.03385"
},
{
"id": "1502.04623"
},
{
"id": "1503.05671"
},
{
"id": "1511.06856"
},
{
"id": "1511.06422"
}
] |
1602.07360 | 0 | arXiv:1602.07360v4 [cs.CV] 4 Nov 2016
# Under review as a conference paper at ICLR 2017
SQUEEZENET: ALEXNET-LEVEL ACCURACY WITH 50X FEWER PARAMETERS AND <0.5MB MODEL SIZE
Forrest N. Iandola1, Song Han2, Matthew W. Moskewicz1, Khalid Ashraf1, William J. Dally2, Kurt Keutzer1 1DeepScale∗ & UC Berkeley {forresti, moskewcz, kashraf, keutzer}@eecs.berkeley.edu {songhan, dally}@stanford.edu
# ABSTRACT | 1602.07360#0 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
accuracy. For a given accuracy level, it is typically possible to identify
multiple DNN architectures that achieve that accuracy level. With equivalent
accuracy, smaller DNN architectures offer at least three advantages: (1)
Smaller DNNs require less communication across servers during distributed
training. (2) Smaller DNNs require less bandwidth to export a new model from
the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on
FPGAs and other hardware with limited memory. To provide all of these
advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet
achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
Additionally, with model compression techniques we are able to compress
SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here:
https://github.com/DeepScale/SqueezeNet | http://arxiv.org/pdf/1602.07360 | Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer | cs.CV, cs.AI | In ICLR Format | null | cs.CV | 20160224 | 20161104 | [
{
"id": "1512.00567"
},
{
"id": "1606.02228"
},
{
"id": "1602.07261"
},
{
"id": "1512.01274"
},
{
"id": "1511.00561"
},
{
"id": "1602.06709"
},
{
"id": "1607.04381"
},
{
"id": "1510.02131"
},
{
"id": "1512.03385"
},
{
"id": "1605.06402"
},
{
"id": "1606.01561"
},
{
"id": "1510.00149"
}
] |
1602.07360 | 1 | # ABSTRACT
Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet
# 1 INTRODUCTION AND MOTIVATION
Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accuracy, a CNN architecture with fewer parameters has several advantages: | 1602.07360#1 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
accuracy. For a given accuracy level, it is typically possible to identify
multiple DNN architectures that achieve that accuracy level. With equivalent
accuracy, smaller DNN architectures offer at least three advantages: (1)
Smaller DNNs require less communication across servers during distributed
training. (2) Smaller DNNs require less bandwidth to export a new model from
the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on
FPGAs and other hardware with limited memory. To provide all of these
advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet
achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
Additionally, with model compression techniques we are able to compress
SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here:
https://github.com/DeepScale/SqueezeNet | http://arxiv.org/pdf/1602.07360 | Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer | cs.CV, cs.AI | In ICLR Format | null | cs.CV | 20160224 | 20161104 | [
{
"id": "1512.00567"
},
{
"id": "1606.02228"
},
{
"id": "1602.07261"
},
{
"id": "1512.01274"
},
{
"id": "1511.00561"
},
{
"id": "1602.06709"
},
{
"id": "1607.04381"
},
{
"id": "1510.02131"
},
{
"id": "1512.03385"
},
{
"id": "1605.06402"
},
{
"id": "1606.01561"
},
{
"id": "1510.00149"
}
] |
1602.07360 | 2 | More efficient distributed training. Communication among servers is the limiting factor to the scalability of distributed CNN training. For distributed data-parallel training, communication overhead is directly proportional to the number of parameters in the model (Iandola et al., 2016). In short, small models train faster due to requiring less communication.
• Less overhead when exporting new models to clients. For autonomous driving, companies such as Tesla periodically copy new models from their servers to customers' cars. This practice is often referred to as an over-the-air update. Consumer Reports has found that the safety of Tesla's Autopilot semi-autonomous driving functionality has incrementally improved with recent over-the-air updates (Consumer Reports, 2016). However, over-the-air updates of today's typical CNN/DNN models can require large data transfers. With AlexNet, this would require 240MB of communication from the server to the car. Smaller models require less communication, making frequent updates more feasible. | 1602.07360#2 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
accuracy. For a given accuracy level, it is typically possible to identify
multiple DNN architectures that achieve that accuracy level. With equivalent
accuracy, smaller DNN architectures offer at least three advantages: (1)
Smaller DNNs require less communication across servers during distributed
training. (2) Smaller DNNs require less bandwidth to export a new model from
the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on
FPGAs and other hardware with limited memory. To provide all of these
advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet
achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
Additionally, with model compression techniques we are able to compress
SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here:
https://github.com/DeepScale/SqueezeNet | http://arxiv.org/pdf/1602.07360 | Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer | cs.CV, cs.AI | In ICLR Format | null | cs.CV | 20160224 | 20161104 | [
{
"id": "1512.00567"
},
{
"id": "1606.02228"
},
{
"id": "1602.07261"
},
{
"id": "1512.01274"
},
{
"id": "1511.00561"
},
{
"id": "1602.06709"
},
{
"id": "1607.04381"
},
{
"id": "1510.02131"
},
{
"id": "1512.03385"
},
{
"id": "1605.06402"
},
{
"id": "1606.01561"
},
{
"id": "1510.00149"
}
] |
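The chunk above cites a figure of roughly 240MB of communication per over-the-air update for an AlexNet-sized model. That number is consistent with about 60 million parameters stored as 32-bit floats; both of those quantities are our assumptions for illustration, since the text only states the final figure.

```python
# Rough size of an over-the-air model update, assuming ~60M parameters
# stored as 32-bit floats (assumption; the text only states the 240MB figure).
params = 60_000_000
bytes_per_param = 4                     # float32
size_mb = params * bytes_per_param / 1e6
print(f"{size_mb:.0f} MB per update")   # ~240 MB, matching the text
```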
1602.07360 | 3 | • Feasible FPGA and embedded deployment. FPGAs often have less than 10MB1 of on-chip memory and no off-chip memory or storage. For inference, a sufficiently small model could be stored directly on the FPGA instead of being bottlenecked by memory bandwidth (Qiu et al., 2016), while video frames stream through the FPGA in real time. Further, when deploying CNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model could be stored directly on-chip, and smaller models may enable the ASIC to fit on a smaller die.
∗http://deepscale.ai
1For example, the Xilinx Vertex-7 FPGA has a maximum of 8.5 MBytes (i.e. 68 Mbits) of on-chip memory and does not provide off-chip memory.
As you can see, there are several advantages of smaller CNN architectures. With this in mind, we focus directly on the problem of identifying a CNN architecture with fewer parameters but equivalent accuracy compared to a well-known model. We have discovered such an architecture, which we call SqueezeNet. In addition, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures. | 1602.07360#3 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
accuracy. For a given accuracy level, it is typically possible to identify
multiple DNN architectures that achieve that accuracy level. With equivalent
accuracy, smaller DNN architectures offer at least three advantages: (1)
Smaller DNNs require less communication across servers during distributed
training. (2) Smaller DNNs require less bandwidth to export a new model from
the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on
FPGAs and other hardware with limited memory. To provide all of these
advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet
achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
Additionally, with model compression techniques we are able to compress
SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here:
https://github.com/DeepScale/SqueezeNet | http://arxiv.org/pdf/1602.07360 | Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer | cs.CV, cs.AI | In ICLR Format | null | cs.CV | 20160224 | 20161104 | [
{
"id": "1512.00567"
},
{
"id": "1606.02228"
},
{
"id": "1602.07261"
},
{
"id": "1512.01274"
},
{
"id": "1511.00561"
},
{
"id": "1602.06709"
},
{
"id": "1607.04381"
},
{
"id": "1510.02131"
},
{
"id": "1512.03385"
},
{
"id": "1605.06402"
},
{
"id": "1606.01561"
},
{
"id": "1510.00149"
}
] |
1602.07360 | 4 | The rest of the paper is organized as follows. In Section 2 we review the related work. Then, in Sections 3 and 4 we describe and evaluate the SqueezeNet architecture. After that, we turn our attention to understanding how CNN architectural design choices impact model size and accuracy. We gain this understanding by exploring the design space of SqueezeNet-like architectures. In Section 5, we do design space exploration on the CNN microarchitecture, which we define as the organization and dimensionality of individual layers and modules. In Section 6, we do design space exploration on the CNN macroarchitecture, which we define as high-level organization of layers in a CNN. Finally, we conclude in Section 7. In short, Sections 3 and 4 are useful for CNN researchers as well as practitioners who simply want to apply SqueezeNet to a new application. The remaining sections are aimed at advanced researchers who intend to design their own CNN architectures. | 1602.07360#4 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
2 RELATED WORK
2.1 MODEL COMPRESSION
The overarching goal of our work is to identify a model that has very few parameters while preserving accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Denton et al. is to apply singular value decomposition (SVD) to a pretrained CNN model (Denton et al., 2014). Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN (Han et al., 2015b). Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and Huffman encoding to create an approach called Deep Compression (Han et al., 2015a), and further designed a hardware accelerator called EIE (Han et al., 2016a) that operates directly on the compressed model, achieving substantial speedups and energy savings.
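As a concrete illustration of the magnitude-thresholding step in Network Pruning, the sketch below zeroes out small weights in a NumPy array. It is a minimal illustration of the idea described above, not Han et al.'s implementation; the threshold value and the matrix are made up for the example.

```python
import numpy as np

def prune_by_magnitude(weights, threshold):
    """Zero out weights whose magnitude falls below the threshold,
    returning the sparse weights and the fraction of surviving entries."""
    mask = np.abs(weights) >= threshold
    pruned = weights * mask
    return pruned, mask.mean()

# Toy example: a 4x4 "layer" with a hand-picked threshold.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 4))
w_sparse, density = prune_by_magnitude(w, threshold=0.05)
print(f"{density:.0%} of weights kept")  # the sparse CNN is then retrained
```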
2.2 CNN MICROARCHITECTURE
Convolutions have been used in artificial neural networks for at least 25 years; LeCun et al. helped to popularize CNNs for digit recognition applications in the late 1980s (LeCun et al., 1989). In neural networks, convolution filters are typically 3D, with height, width, and channels as the key dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i.e. RGB), and in each subsequent layer Li the filters have the same number of channels as Li-1 has filters. The early work by LeCun et al. (LeCun et al., 1989) uses 5x5xChannels2 filters, and the recent VGG (Simonyan & Zisserman, 2014) architectures extensively use 3x3 filters. Models such as Network-in-Network (Lin et al., 2013) and the GoogLeNet family of architectures (Szegedy et al., 2014; Ioffe & Szegedy, 2015; Szegedy et al., 2015; 2016) use 1x1 filters in some layers.
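As a quick illustration of how filter shape drives parameter count, the snippet below tallies the weights of a single convolution layer for a few filter sizes; the layer sizes are hypothetical examples rather than values from any particular network.

```python
def conv_params(num_filters, filter_h, filter_w, in_channels, bias=True):
    """Weights in a conv layer: each filter spans filter_h x filter_w x in_channels."""
    per_filter = filter_h * filter_w * in_channels
    return num_filters * (per_filter + (1 if bias else 0))

# Example: 64 filters applied to a 96-channel input.
for k in (5, 3, 1):
    print(f"{k}x{k}: {conv_params(64, k, k, 96, bias=False):,} weights")
# 5x5: 153,600   3x3: 55,296   1x1: 6,144 -- a 1x1 filter has 9x fewer
# parameters than a 3x3 filter over the same number of input channels.
```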
With the trend of designing very deep CNNs, it becomes cumbersome to manually select filter dimensions for each layer. To address this, various higher level building blocks, or modules, comprised of multiple convolution layers with a specific fixed organization have been proposed. For example, the GoogLeNet papers propose Inception modules, which are comprised of a number of different dimensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 (Szegedy et al., 2014) and sometimes 1x3 and 3x1 (Szegedy et al., 2015). Many such modules are then combined, perhaps with additional ad-hoc layers, to form a complete network. We use the term CNN microarchitecture to refer to the particular organization and dimensions of the individual modules.
2.3 CNN MACROARCHITECTURE
While the CNN microarchitecture refers to individual layers and modules, we define the CNN macroarchitecture as the system-level organization of multiple modules into an end-to-end CNN architecture.
2From now on, we will simply abbreviate HxWxChannels to HxW.
Perhaps the most widely studied CNN macroarchitecture topic in the recent literature is the impact of depth (i.e. number of layers) in networks. Simonyan and Zisserman proposed the VGG (Simonyan & Zisserman, 2014) family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the ImageNet-1k dataset (Deng et al., 2009). K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher ImageNet accuracy (He et al., 2015a).
The choice of connections across multiple layers or modules is an emerging area of CNN macroarchitectural research. Residual Networks (ResNet) (He et al., 2015b) and Highway Networks (Srivastava et al., 2015) each propose the use of connections that skip over multiple layers, for example additively connecting the activations from layer 3 to the activations from layer 6. We refer to these connections as bypass connections. The authors of ResNet provide an A/B comparison of a 34-layer CNN with and without bypass connections; adding bypass connections delivers a 2 percentage-point improvement on Top-5 ImageNet accuracy.
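The additive skip described above can be sketched in a few lines. The block below is a generic illustration with a stand-in elementwise transform, not ResNet's exact block structure.

```python
import numpy as np

def bypass_block(x, transform):
    """Apply a transform to x and add the input back in (a 'bypass' connection).
    Requires the transform to preserve the activation shape."""
    return transform(x) + x

# Toy activations and a placeholder for a stack of conv layers.
activations = np.random.rand(1, 8, 8, 16)           # N x H x W x C
out = bypass_block(activations, lambda a: 0.5 * a)  # stand-in transform
assert out.shape == activations.shape
```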
2.4 NEURAL NETWORK DESIGN SPACE EXPLORATION
Neural networks (including deep and convolutional NNs) have a large design space, with numerous options for microarchitectures, macroarchitectures, solvers, and other hyperparameters. It seems natural that the community would want to gain intuition about how these factors impact a NN's accuracy (i.e. the shape of the design space). Much of the work on design space exploration (DSE) of NNs has focused on developing automated approaches for finding NN architectures that deliver higher accuracy. These automated DSE approaches include Bayesian optimization (Snoek et al., 2012), simulated annealing (Ludermir et al., 2006), randomized search (Bergstra & Bengio, 2012), and genetic algorithms (Stanley & Miikkulainen, 2002). To their credit, each of these papers provides a case in which the proposed DSE approach produces a NN architecture that achieves higher accuracy compared to a representative baseline. However, these papers make no attempt to provide intuition about the shape of the NN design space. Later in this paper, we eschew automated approaches; instead, we refactor CNNs in such a way that we can do principled A/B comparisons to investigate how CNN architectural decisions influence model size and accuracy.
In the following sections, we first propose and evaluate the SqueezeNet architecture with and without model compression. Then, we explore the impact of design choices in microarchitecture and macroarchitecture for SqueezeNet-like CNN architectures.
3 SQUEEZENET: PRESERVING ACCURACY WITH FEW PARAMETERS
In this section, we begin by outlining our design strategies for CNN architectures with few parameters. Then, we introduce the Fire module, our new building block out of which to build CNN architectures. Finally, we use our design strategies to construct SqueezeNet, which is comprised mainly of Fire modules.
3.1 ARCHITECTURAL DESIGN STRATEGIES
Our overarching objective in this paper is to identify CNN architectures that have few parameters while maintaining competitive accuracy. To achieve this, we employ three main strategies when designing CNN architectures:
Strategy 1. Replace 3x3 filters with 1x1 filters. Given a budget of a certain number of convolution filters, we will choose to make the majority of these filters 1x1, since a 1x1 filter has 9X fewer parameters than a 3x3 filter.
Strategy 2. Decrease the number of input channels to 3x3 filters. Consider a convolution layer that is comprised entirely of 3x3 filters. The total quantity of parameters in this layer is (number of input channels) * (number of filters) * (3*3). So, to maintain a small total number of parameters in a CNN, it is important not only to decrease the number of 3x3 filters (see Strategy 1 above), but also to decrease the number of input channels to the 3x3 filters. We decrease the number of input channels to 3x3 filters using squeeze layers, which we describe in the next section.
Strategy 3. Downsample late in the network so that convolution layers have large activation maps. In a convolutional network, each convolution layer produces an output activation map with a spatial resolution that is at least 1x1 and often much larger than 1x1. The height and width of these activation maps are controlled by: (1) the size of the input data (e.g. 256x256 images) and (2)
Figure 1: Microarchitectural view: Organization of convolution filters in the Fire module. In this example, s1x1 = 3, e1x1 = 4, and e3x3 = 4. We illustrate the convolution filters but not the activations.
the choice of layers in which to downsample in the CNN architecture. Most commonly, downsampling is engineered into CNN architectures by setting the (stride > 1) in some of the convolution or pooling layers (e.g. (Szegedy et al., 2014; Simonyan & Zisserman, 2014; Krizhevsky et al., 2012)). If early3 layers in the network have large strides, then most layers will have small activation maps. Conversely, if most layers in the network have a stride of 1, and the strides greater than 1 are concentrated toward the end4 of the network, then many layers in the network will have large activation maps. Our intuition is that large activation maps (due to delayed downsampling) can lead to higher classification accuracy, with all else held equal. Indeed, K. He and H. Sun applied delayed downsampling to four different CNN architectures, and in each case delayed downsampling led to higher classification accuracy (He & Sun, 2015).
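To make Strategy 3 concrete, the sketch below traces the spatial size of the activation maps through a toy stack of layers, comparing an early placement of stride-2 layers with a late one; the layer count and strides are illustrative and are not SqueezeNet's.

```python
def activation_sizes(input_hw, strides):
    """Spatial height/width after each layer, assuming 'same' padding so that
    only the stride shrinks the activation map."""
    sizes, hw = [], input_hw
    for s in strides:
        hw = (hw + s - 1) // s  # ceiling division
        sizes.append(hw)
    return sizes

early = activation_sizes(224, [2, 2, 2, 1, 1, 1])  # downsample early
late  = activation_sizes(224, [1, 1, 1, 2, 2, 2])  # downsample late
print(early)  # [112, 56, 28, 28, 28, 28]
print(late)   # [224, 224, 224, 112, 56, 28]
# Same final size, but the "late" schedule keeps large activation maps
# for more of the network's layers.
```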
Strategies 1 and 2 are about judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 is about maximizing accuracy on a limited budget of parameters. Next, we describe the Fire module, which is our building block for CNN architectures that enables us to successfully employ Strategies 1, 2, and 3.
3.2 THE FIRE MODULE
We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure 1. The liberal use of 1x1 filters in Fire modules is an application of Strategy 1 from Section 3.1. We expose three tunable dimensions (hyperparameters) in a Fire module: s1x1, e1x1, and e3x3. In a Fire module, s1x1 is the number of filters in the squeeze layer (all 1x1), e1x1 is the number of 1x1 filters in the expand layer, and e3x3 is the number of 3x3 filters in the expand layer. When we use Fire modules we set s1x1 to be less than (e1x1 + e3x3), so the squeeze layer helps to limit the number of input channels to the 3x3 filters, as per Strategy 2 from Section 3.1.
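The parameter count of a Fire module follows directly from these three hyperparameters. The sketch below counts the weights and biases of fire2 (s1x1 = 16, e1x1 = 64, e3x3 = 64 on a 96-channel input, per Table 1); the helper function is ours, but the result reproduces the 11,920 parameters listed in Table 1.

```python
def fire_params(in_channels, s1x1, e1x1, e3x3):
    """Weights + biases of a Fire module: a 1x1 squeeze layer followed by
    parallel 1x1 and 3x3 expand layers."""
    squeeze = in_channels * s1x1 + s1x1
    expand_1x1 = s1x1 * e1x1 + e1x1
    expand_3x3 = 9 * s1x1 * e3x3 + e3x3
    return squeeze + expand_1x1 + expand_3x3

print(fire_params(96, 16, 64, 64))  # 11920, matching fire2 in Table 1
# The squeeze layer limits the 3x3 filters to 16 input channels instead of 96:
print(9 * 16 * 64 + 64)   # 9280 parameters in the 3x3 expand path
print(9 * 96 * 64 + 64)   # 55360 if the 3x3 filters saw all 96 channels
```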
3.3 THE SQUEEZENET ARCHITECTURE
We now describe the SqueezeNet CNN architecture. We illustrate in Figure 2 that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10; these relatively late placements of pooling are per Strategy 3 from Section 3.1. We present the full SqueezeNet architecture in Table 1.
3In our terminology, an "early" layer is close to the input data. 4In our terminology, the "end" of the network is the classifier.
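For readers who want the macroarchitecture at a glance, the snippet below lists the Fire module hyperparameters from Table 1 and totals the parameters. The data structure is our own summary of Table 1, and the conv10 term (513,000) is derived from its 1x1x512x1000 filters plus biases.

```python
# (in_channels, s1x1, e1x1, e3x3) for fire2..fire9, following Table 1.
fire_modules = [
    (96, 16, 64, 64), (128, 16, 64, 64), (128, 32, 128, 128), (256, 32, 128, 128),
    (256, 48, 192, 192), (384, 48, 192, 192), (384, 64, 256, 256), (512, 64, 256, 256),
]

def fire_params(c, s, e1, e3):
    return (c * s + s) + (s * e1 + e1) + (9 * s * e3 + e3)

conv1 = 3 * 7 * 7 * 96 + 96      # 7x7 filters over the RGB input (14,208, per Table 1)
conv10 = 512 * 1000 + 1000       # 1x1 filters producing the 1000 class maps
total = conv1 + sum(fire_params(*m) for m in fire_modules) + conv10
print(f"{total:,}")              # ~1.25 million parameters, ~50x fewer than AlexNet
```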
Figure 2: Macroarchitectural view of our SqueezeNet architecture. Left: SqueezeNet (Section 3.3); Middle: SqueezeNet with simple bypass (Section 6); Right: SqueezeNet with complex bypass (Section 6).
3.3.1 OTHER SQUEEZENET DETAILS
For brevity, we have omitted a number of details and design choices about SqueezeNet from Table 1 and Figure 2. We provide these design choices in the following. The intuition behind these choices may be found in the papers cited below.
• So that the output activations from 1x1 and 3x3 filters have the same height and width, we add a 1-pixel border of zero-padding in the input data to 3x3 filters of expand modules.
• ReLU (Nair & Hinton, 2010) is applied to activations from squeeze and expand layers.
• Dropout (Srivastava et al., 2014) with a ratio of 50% is applied after the fire9 module.
• Note the lack of fully-connected layers in SqueezeNet; this design choice was inspired by the NiN (Lin et al., 2013) architecture.
⢠When training SqueezeNet, we begin with a learning rate of 0.04, and we lin- early decrease the learning rate throughout training, as described in (Mishkin et al., 2016). For details on the training protocol (e.g. batch size, learning rate, parame- ter initialization), please refer to our Caffe-compatible conï¬guration ï¬les located here: https://github.com/DeepScale/SqueezeNet.
⢠The Caffe framework does not natively support a convolution layer that contains multiple ï¬lter resolutions (e.g. 1x1 and 3x3) (Jia et al., 2014). To get around this, we implement our expand layer with two separate convolution layers: a layer with 1x1 ï¬lters, and a layer with 3x3 ï¬lters. Then, we concatenate the outputs of these layers together in the channel dimension. This is numerically equivalent to implementing one layer that contains both 1x1 and 3x3 ï¬lters. | 1602.07360#18 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
We released the SqueezeNet configuration files in the format defined by the Caffe CNN framework. However, in addition to Caffe, several other CNN frameworks have emerged, including MXNet (Chen et al., 2015a), Chainer (Tokui et al., 2015), Keras (Chollet, 2016), and Torch (Collobert et al., 2011). Each of these has its own native format for representing a CNN architecture. That said, most of these libraries use the same underlying computational back-ends such as cuDNN (Chetlur et al., 2014) and MKL-DNN (Das et al., 2016). The research community has
ported the SqueezeNet CNN architecture for compatibility with a number of other CNN software frameworks:
• MXNet (Chen et al., 2015a) port of SqueezeNet: (Haria, 2016)
• Chainer (Tokui et al., 2015) port of SqueezeNet: (Bell, 2016)
• Keras (Chollet, 2016) port of SqueezeNet: (DT42, 2016)
• Torch (Collobert et al., 2011) port of SqueezeNet's Fire Modules: (Waghmare, 2016)
4 EVALUATION OF SQUEEZENET
We now turn our attention to evaluating SqueezeNet. In each of the CNN model compression papers reviewed in Section 2.1, the goal was to compress an AlexNet (Krizhevsky et al., 2012) model that was trained to classify images using the ImageNet (Deng et al., 2009) (ILSVRC 2012) dataset. Therefore, we use AlexNet5 and the associated model compression results as a basis for comparison when evaluating SqueezeNet.
Table 1: SqueezeNet architectural dimensions. (The formatting of this table was inspired by the Inception2 paper (Ioffe & Szegedy, 2015).)
layer | output size | filter size / stride | depth | s1x1 | e1x1 | e3x3 | s1x1 sparsity | e1x1 sparsity | e3x3 sparsity | # bits | # parameters before pruning | # parameters after pruning
input image | 224x224x3 | - | - | - | - | - | - | - | - | - | - | -
conv1 | 111x111x96 | 7x7/2 (x96) | 1 | - | - | - | 100% (7x7) | - | - | 6 bit | 14,208 | 14,208
maxpool1 | 55x55x96 | 3x3/2 | 0 | - | - | - | - | - | - | - | - | -
fire2 | 55x55x128 | - | 2 | 16 | 64 | 64 | 100% | 100% | 33% | 6 bit | 11,920 | 5,746
fire3 | 55x55x128 | - | 2 | 16 | 64 | 64 | 100% | 100% | 33% | 6 bit | 12,432 | 6,258
fire4 | 55x55x256 | - | 2 | 32 | 128 | 128 | 100% | 100% | 33% | 6 bit | 45,344 | 20,646
maxpool4 | 27x27x256 | 3x3/2 | 0 | - | - | - | - | - | - | - | - | -
fire5 | 27x27x256 | - | 2 | 32 | 128 | 128 | 100% | 100% | 33% | 6 bit | 49,440 | 24,742
fire6 | 27x27x384 | - | 2 | 48 | 192 | 192 | 100% | 50% | 33% | 6 bit | 104,880 | 44,700
fire7 | 27x27x384 | - | 2 | 48 | 192 | 192 | 50% | 100% | 33% | 6 bit | 111,024 | 46,236
fire8 | 27x27x512 | - | 2 | 64 | 256 | 256 | 100% | 50% | 33% | 6 bit | 188,992 | 77,581
maxpool8 | 13x13x512 | 3x3/2 | 0 | - | - | - | - | - | - | - | - | -
fire9 | 13x13x512 | - | 2 | 64 | 256 | 256 | 50% | 100% | 30% | 6 bit | 197,184 | 77,581
conv10 | 13x13x1000 | 1x1/1 (x1000) | 1 | - | - | - | 20% | - | - | - | - | -
1602.07360 | 24 | In Table 2, we review SqueezeNet in the context of recent model compression results. The SVD- based approach is able to compress a pretrained AlexNet model by a factor of 5x, while diminishing top-1 accuracy to 56.0% (Denton et al., 2014). Network Pruning achieves a 9x reduction in model size while maintaining the baseline of 57.2% top-1 and 80.3% top-5 accuracy on ImageNet (Han et al., 2015b). Deep Compression achieves a 35x reduction in model size while still maintaining the baseline accuracy level (Han et al., 2015a). Now, with SqueezeNet, we achieve a 50X reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet. We summarize all of the aforementioned results in Table 2. | 1602.07360#24 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
1602.07360 | 25 | It appears that we have surpassed the state-of-the-art results from the model compression commu- nity: even when using uncompressed 32-bit values to represent the model, SqueezeNet has a 1.4à smaller model size than the best efforts from the model compression community while maintain- ing or exceeding the baseline accuracy. Until now, an open question has been: are small models amenable to compression, or do small models âneedâ all of the representational power afforded by dense ï¬oating-point values? To ï¬nd out, we applied Deep Compression (Han et al., 2015a)
5Our baseline is bvlc alexnet from the Caffe codebase (Jia et al., 2014).
Table 2: Comparing SqueezeNet to model compression approaches. By model size, we mean the number of bytes required to store all of the parameters in the trained model.
1602.07360 | 26 | Top-1 ImageNet Accuracy 57.2% 56.0% CNN architecture Original â Compressed Model Size 240MB 240MB â 48MB Data Type Compression Approach None (baseline) SVD (Denton et al., 2014) Network Pruning (Han et al., 2015b) Deep Compression (Han et al., 2015a) None Deep Compression Deep Compression 32 bit 32 bit AlexNet AlexNet 32 bit 240MB â 27MB 9x 57.2% AlexNet 240MB â 6.9MB 5-8 bit 57.2% AlexNet 35x 32 bit 8 bit 6 bit 4.8MB 4.8MB â 0.66MB 4.8MB â 0.47MB 50x 363x 510x SqueezeNet (ours) SqueezeNet (ours) SqueezeNet (ours) 57.5% 57.5% 57.5% Top-5 ImageNet Accuracy 80.3% 79.4% 80.3% 80.3% 80.3% 80.3% 80.3% | 1602.07360#26 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
1602.07360 | 27 | to SqueezeNet, using 33% sparsity6 and 8-bit quantization. This yields a 0.66 MB model (363Ã smaller than 32-bit AlexNet) with equivalent accuracy to AlexNet. Further, applying Deep Compres- sion with 6-bit quantization and 33% sparsity on SqueezeNet, we produce a 0.47MB model (510Ã smaller than 32-bit AlexNet) with equivalent accuracy. Our small model is indeed amenable to compression.
In addition, these results demonstrate that Deep Compression (Han et al., 2015a) not only works well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able to compress the already compact, fully convolutional SqueezeNet architecture. Deep Compression compressed SqueezeNet by 10× while preserving the baseline accuracy. In summary: by combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510× reduction in model size with no decrease in accuracy compared to the baseline.
1602.07360 | 28 | Finally, note that Deep Compression (Han et al., 2015b) uses a codebook as part of its scheme for quantizing CNN parameters to 6- or 8-bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 32 6 = 5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware â Efï¬cient Inference Engine (EIE) â that can compute codebook-quantized CNNs more efï¬ciently (Han et al., 2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits (Gysel, 2016). Speciï¬cally, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types.
So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5 and 6, we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and configurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers).
In this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposed in Section 3.1. Note that our goal here is not to maximize accuracy in every experiment, but rather to understand the impact of CNN architectural choices on model size and accuracy.
Footnote 6: Due to the storage overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3× decrease in model size.
Figure 3: (a) Exploring the impact of the squeeze ratio (SR) on model size and accuracy. (b) Exploring the impact of the ratio of 3x3 filters in expand layers (pct3x3) on model size and accuracy.
# 5.1 CNN MICROARCHITECTURE METAPARAMETERS
In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Section 3.2: s1x1, e1x1, and e3x3. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher-level metaparameters which control the dimensions of all Fire modules in a CNN. We define basee as the number of expand filters in the first Fire module in a CNN. After every freq Fire modules, we increase the number of expand filters by incre. In other words, for Fire module i, the number of expand filters is ei = basee + (incre × floor(i/freq)). In the expand layer of a Fire module, some filters are 1x1 and some are 3x3;
we define ei = ei,1x1 + ei,3x3, with pct3x3 (in the range [0, 1], shared over all Fire modules) as the percentage of expand filters that are 3x3. In other words, ei,3x3 = ei × pct3x3, and ei,1x1 = ei × (1 - pct3x3). Finally, we define the number of filters in the squeeze layer of a Fire module using a metaparameter called the squeeze ratio (SR) (again, in the range [0, 1], shared by all Fire modules): si,1x1 = SR × ei (or equivalently si,1x1 = SR × (ei,1x1 + ei,3x3)). SqueezeNet (Table 1) is an example architecture that we generated with the aforementioned set of metaparameters. Specifically, SqueezeNet has the following metaparameters: basee = 128, incre = 128, pct3x3 = 0.5, freq = 2, and SR = 0.125.
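As a concrete illustration of these formulas (our sketch, not code released with the paper), the snippet below evaluates them for all 8 Fire modules; with the SqueezeNet settings above it reproduces the squeeze/expand widths listed for fire2 through fire9 in Table 1, from (16, 64, 64) up to (64, 256, 256).

```python
def fire_dimensions(num_modules=8, base_e=128, incr_e=128, freq=2,
                    pct_3x3=0.5, sr=0.125):
    """Evaluate the microarchitecture metaparameters: for Fire module i,
    e_i = base_e + incr_e * floor(i / freq); the expand filters are split
    into 1x1 and 3x3 by pct_3x3, and the squeeze layer gets SR * e_i filters."""
    dims = []
    for i in range(num_modules):
        e_i = base_e + incr_e * (i // freq)
        e_3x3 = int(e_i * pct_3x3)
        e_1x1 = e_i - e_3x3
        s_1x1 = int(sr * e_i)
        dims.append((s_1x1, e_1x1, e_3x3))
    return dims

# SqueezeNet's own metaparameters reproduce the fire2..fire9 widths of Table 1.
for module_index, (s, e1, e3) in enumerate(fire_dimensions(), start=2):
    print(f"fire{module_index}: squeeze1x1={s}, expand1x1={e1}, expand3x3={e3}")
```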
# 5.2 SQUEEZE RATIO
In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy.
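Before the experiment, it may help to see mechanically why a small SR shrinks the model. The sketch below is our own back-of-the-envelope count, with biases and the surrounding layers ignored: a Fire module's squeeze layer costs in_channels × s1x1 weights and its expand layer costs s1x1 × e1x1 + 9 × s1x1 × e3x3 weights, so every term scales with the squeeze width chosen by SR.

```python
def fire_module_params(in_channels, e_total, sr, pct_3x3=0.5):
    """Weight count of one Fire module (biases ignored) as a function of the
    squeeze ratio: a 1x1 squeeze layer followed by 1x1 and 3x3 expand layers."""
    s_1x1 = int(sr * e_total)                 # squeeze filters
    e_3x3 = int(e_total * pct_3x3)            # 3x3 expand filters
    e_1x1 = e_total - e_3x3                   # 1x1 expand filters
    squeeze_w = in_channels * s_1x1           # 1x1 conv: in_channels -> s_1x1
    expand_w = s_1x1 * e_1x1 + 9 * s_1x1 * e_3x3
    return squeeze_w + expand_w

# Example: a Fire module with 128 expand filters fed by 128 input channels.
for sr in (0.125, 0.25, 0.5, 0.75, 1.0):
    print(f"SR={sr:<5} -> {fire_module_params(128, 128, sr):,} weights")
```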
In these experiments, we use SqueezeNet (Figure 2) as a starting point. As in SqueezeNet, these experiments use the following metaparameters: basee = 128, incre = 128, pct3x3 = 0.5, and freq = 2. We train multiple models, where each model has a different squeeze ratio (SR; see footnote 7) in the range [0.125, 1.0]. In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR = 0.125 point in this figure (see footnote 8). From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR = 0.75 (a 19MB model), and setting SR = 1.0 further increases model size without improving accuracy.
Footnote 7: For a given model, all Fire layers share the same squeeze ratio.
Footnote 8: We named it SqueezeNet because it has a low squeeze ratio (SR); that is, the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers.
# 5.3 TRADING OFF 1X1 AND 3X3 FILTERS
In Section 3.1, we proposed decreasing the number of parameters in a CNN by replacing some 3x3 filters with 1x1 filters. An open question is: how important is spatial resolution in CNN filters?
The VGG (Simonyan & Zisserman, 2014) architectures have 3x3 spatial resolution in most layers' filters; GoogLeNet (Szegedy et al., 2014) and Network-in-Network (NiN) (Lin et al., 2013) have 1x1 filters in some layers. In GoogLeNet and NiN, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis (see footnote 9). Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy.
Footnote 9: To be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3.
We use the following metaparameters in this experiment: basee = incre = 128, freq = 2, SR = 0.500, and we vary pct3x3 from 1% to 99%. In other words, each Fire module's expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from "mostly 1x1" to "mostly 3x3". As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: SR = 0.500 and pct3x3 = 50%. We see in Figure 3(b) that the top-5 accuracy plateaus at 85.6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on ImageNet.
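For intuition about why the model grows as pct3x3 increases, here is another small counting sketch of ours (biases ignored): each 1x1 expand filter costs s1x1 weights while each 3x3 expand filter costs 9 × s1x1 weights, so the expand layer's weight count grows roughly linearly in pct3x3 even though, per Figure 3(b), accuracy stops improving past 50%.

```python
def expand_layer_params(s_1x1, e_total, pct_3x3):
    """Weight count of a Fire module's expand layer for a given 1x1/3x3 split
    (biases ignored): each 1x1 filter costs s_1x1 weights, each 3x3 filter
    costs 9 * s_1x1 weights."""
    e_3x3 = int(e_total * pct_3x3)
    e_1x1 = e_total - e_3x3
    return s_1x1 * e_1x1 + 9 * s_1x1 * e_3x3

# Sweep the 1x1 / 3x3 split for one expand layer with 128 filters and SR = 0.5
# (so 64 squeeze filters), mirroring the experiment's settings.
for pct in (0.01, 0.25, 0.50, 0.75, 0.99):
    print(f"pct3x3={pct:4.2f} -> {expand_layer_params(64, 128, pct):,} expand weights")
```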
# 6 CNN MACROARCHITECTURE DESIGN SPACE EXPLORATION
So far we have explored the design space at the microarchitecture level, i.e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by ResNet (He et al., 2015b), we explored three different architectures:
• Vanilla SqueezeNet (as per the prior sections).
• SqueezeNet with simple bypass connections between some Fire modules (inspired by (Srivastava et al., 2015; He et al., 2015b)).
• SqueezeNet with complex bypass connections between the remaining Fire modules.
We illustrate these three variants of SqueezeNet in Figure 2.
Our simple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in ResNet, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per ResNet, can improve the final accuracy and/or ability to train the full model.
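The wiring is straightforward to express in code. Below is a minimal NumPy sketch of the data flow (our illustration, not the authors' Caffe model definition): fire is only a shape-level placeholder for a Fire module's forward pass, and the simple bypass is just an elementwise addition of two activation tensors with identical shapes; the channel counts roughly follow Table 1.

```python
import numpy as np

def fire(x, out_channels):
    """Shape-level placeholder for a Fire module's forward pass: it preserves
    the spatial size and sets the channel count, nothing more."""
    n, _, h, w = x.shape
    return np.random.randn(n, out_channels, h, w).astype(x.dtype)

x = np.random.randn(1, 96, 55, 55).astype(np.float32)  # activations entering fire2

fire2_out = fire(x, 128)
fire3_out = fire(fire2_out, 128)   # same channel count as fire2's output
fire4_in = fire2_out + fire3_out   # simple bypass around fire3: elementwise add
fire4_out = fire(fire4_in, 256)
print(fire4_in.shape, fire4_out.shape)   # (1, 128, 55, 55) (1, 256, 55, 55)
```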
One limitation is that, in the straightforward case, the number of input channels and the number of output channels have to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Figure 2. When the "same number of channels" requirement can't be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is "just a wire," we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not.
In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers. In SqueezeNet, the squeeze ratio (SR) is 0.125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers.
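A complex bypass can be sketched in the same way; the only addition is a learned 1x1 convolution that maps the bypassed activations to the channel count needed on the other side. In NumPy a 1x1 convolution is just per-pixel channel mixing, as in this illustrative snippet of ours (the shapes are hypothetical, but the channel mismatch mirrors the modules that need a complex bypass):

```python
import numpy as np

def conv1x1(x, weights):
    """1x1 convolution as per-pixel channel mixing; weights has shape
    (out_channels, in_channels)."""
    return np.einsum('oc,nchw->nohw', weights, x)

# Illustrative shapes for a bypass around fire6: its input (fire5's output,
# 256 channels) and its output (384 channels) differ, so a plain elementwise
# add is impossible and the bypass itself must include a learned 1x1 conv.
fire5_out = np.random.randn(1, 256, 27, 27).astype(np.float32)
fire6_out = np.random.randn(1, 384, 27, 27).astype(np.float32)

w_bypass = (np.random.randn(384, 256) * 0.01).astype(np.float32)
fire7_in = fire6_out + conv1x1(fire5_out, w_bypass)
print(fire7_in.shape)   # (1, 384, 27, 27)
```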
We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a higher accuracy improvement than the complex bypass. Adding the simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size.

Table 3: SqueezeNet accuracy and model size using different macroarchitecture configurations

| Architecture | Top-1 Accuracy | Top-5 Accuracy | Model Size |
|---|---|---|---|
| Vanilla SqueezeNet | 57.5% | 80.3% | 4.8MB |
| SqueezeNet + Simple Bypass | 60.4% | 82.5% | 4.8MB |
| SqueezeNet + Complex Bypass | 58.8% | 82.0% | 7.7MB |
# 7 CONCLUSIONS
In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50× fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510× smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression. Using a new approach called Dense-Sparse-Dense (DSD) (Han et al., 2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2.
We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA (Gschwend, 2016). As we anticipated, Gschwend was able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters.
In the context of this paper, we focused on ImageNet as a target dataset. However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition (Zhang et al., 2013; Donahue et al., 2013), logo identification in images (Iandola et al., 2015), and generating sentences about images (Fang et al., 2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images (Iandola et al., 2014; Girshick et al., 2015; Ashraf et al., 2016) and videos (Chen et al., 2015b), as well as segmenting the shape of the road (Badrinarayanan et al., 2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance.
SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner.
# REFERENCES
Khalid Ashraf, Bichen Wu, Forrest N. Iandola, Matthew W. Moskewicz, and Kurt Keutzer. Shallow networks for high-accuracy road object-detection. arXiv:1606.01561, 2016.
Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv:1511.00561, 2015.
Eddie Bell. An implementation of SqueezeNet in Chainer. https://github.com/ejlb/squeezenet-chainer, 2016.
J. Bergstra and Y. Bengio. An optimization methodology for neural network weights and architectures. JMLR, 2012.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, 2015a.
Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Andrew G Berneshawi, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3d object proposals for accurate object class detection. In NIPS, 2015b.
Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv:1410.0759, 2014.
Francois Chollet. Keras: Deep learning library for Theano and TensorFlow. https://keras.io, 2016.
Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In NIPS BigLearn Workshop, 2011.
Consumer Reports. Tesla's new autopilot: Better but still needs improvement. http://www.consumerreports.org/tesla/tesla-new-autopilot-better-but-needs-improvement, 2016.
Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidyanathan, Srinivas Sridharan, Dhiraj D. Kalamkar, Bharat Kaul, and Pradeep Dubey. Distributed deep learning using synchronous stochastic gradient descent. arXiv:1602.06709, 2016.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. arXiv:1310.1531, 2013.
DT42. SqueezeNet Keras implementation. https://github.com/DT42/squeezenet_demo, 2016.
Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. From captions to visual concepts and back. In CVPR, 2015.
Ross B. Girshick, Forrest N. Iandola, Trevor Darrell, and Jitendra Malik. Deformable part models are convolutional neural networks. In CVPR, 2015.
David Gschwend. ZynqNet: An FPGA-accelerated embedded convolutional neural network. Master's thesis, Swiss Federal Institute of Technology Zurich (ETH-Zurich), 2016.
Philipp Gysel. Ristretto: Hardware-oriented approximation of convolutional neural networks. arXiv:1605.06402, 2016.
S. Han, H. Mao, and W. Dally. Deep compression: Compressing DNNs with pruning, trained quantization and Huffman coding. arXiv:1510.00149v3, 2015a.
S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural networks. In NIPS, 2015b.
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient inference engine on compressed deep neural network. International Symposium on Computer Architecture (ISCA), 2016a.
Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Shijian Tang, Erich Elsen, Bryan Catanzaro, John Tran, and William J. Dally. DSD: Regularizing deep neural networks with dense-sparse-dense training flow. arXiv:1607.04381, 2016b.
Guo Haria. Convert SqueezeNet to MXNet. https://github.com/haria/SqueezeNet/commit/0cf57539375fd5429275af36fc94c774503427c3, 2016.
K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015a.
Kaiming He and Jian Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015b.
Forrest N. Iandola, Matthew W. Moskewicz, Sergey Karayev, Ross B. Girshick, Trevor Darrell, and Kurt Keutzer. DenseNet: Implementing efficient convnet descriptor pyramids. arXiv:1404.1869, 2014.
Forrest N. Iandola, Anting Shen, Peter Gao, and Kurt Keutzer. DeepLogo: Hitting logo recognition with the deep neural network hammer. arXiv:1510.02131, 2015.
Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. FireCaffe: Near-linear acceleration of deep neural network training on compute clusters. In CVPR, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. JMLR, 2015.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv:1312.4400, 2013.
T.B. Ludermir, A. Yamazaki, and C. Zanchettin. An optimization methodology for neural network weights and architectures. IEEE Trans. Neural Networks, 2006.
Dmytro Mishkin, Nikolay Sergievskiy, and Jiri Matas. Systematic evaluation of cnn advances on the imagenet. arXiv:1606.02228, 2016.
Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
Jiantao Qiu, Jie Wang, Song Yao, Kaiyuan Guo, Boxun Li, Erjin Zhou, Jincheng Yu, Tianqi Tang, Ningyi Xu, Sen Song, Yu Wang, and Huazhong Yang. Going deeper with embedded fpga platform for convolutional neural network. In ACM International Symposium on FPGA, 2016.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, 2012.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.
R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. In ICML Deep Learning Workshop, 2015.
K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Neurocomputing, 2002.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv:1409.4842, 2014.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv:1512.00567, 2015. | 1602.07360#52 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
accuracy. For a given accuracy level, it is typically possible to identify
multiple DNN architectures that achieve that accuracy level. With equivalent
accuracy, smaller DNN architectures offer at least three advantages: (1)
Smaller DNNs require less communication across servers during distributed
training. (2) Smaller DNNs require less bandwidth to export a new model from
the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on
FPGAs and other hardware with limited memory. To provide all of these
advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet
achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
Additionally, with model compression techniques we are able to compress
SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here:
https://github.com/DeepScale/SqueezeNet | http://arxiv.org/pdf/1602.07360 | Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer | cs.CV, cs.AI | In ICLR Format | null | cs.CV | 20160224 | 20161104 | [
{
"id": "1512.00567"
},
{
"id": "1606.02228"
},
{
"id": "1602.07261"
},
{
"id": "1512.01274"
},
{
"id": "1511.00561"
},
{
"id": "1602.06709"
},
{
"id": "1607.04381"
},
{
"id": "1510.02131"
},
{
"id": "1512.03385"
},
{
"id": "1605.06402"
},
{
"id": "1606.01561"
},
{
"id": "1510.00149"
}
] |
1602.07360 | 53 | Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv:1602.07261, 2016.
S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a next-generation open source framework for deep learning. In NIPS Workshop on Machine Learning Systems (LearningSys), 2015.
Sagar M Waghmare. FireModule.lua. https://github.com/Element-Research/dpnn/blob/master/FireModule.lua, 2016.
Ning Zhang, Ryan Farrell, Forrest Iandola, and Trevor Darrell. Deformable part descriptors for fine-grained recognition and attribute prediction. In ICCV, 2013. | 1602.07360#53 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
accuracy. For a given accuracy level, it is typically possible to identify
multiple DNN architectures that achieve that accuracy level. With equivalent
accuracy, smaller DNN architectures offer at least three advantages: (1)
Smaller DNNs require less communication across servers during distributed
training. (2) Smaller DNNs require less bandwidth to export a new model from
the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on
FPGAs and other hardware with limited memory. To provide all of these
advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet
achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
Additionally, with model compression techniques we are able to compress
SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here:
https://github.com/DeepScale/SqueezeNet | http://arxiv.org/pdf/1602.07360 | Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer | cs.CV, cs.AI | In ICLR Format | null | cs.CV | 20160224 | 20161104 | [
{
"id": "1512.00567"
},
{
"id": "1606.02228"
},
{
"id": "1602.07261"
},
{
"id": "1512.01274"
},
{
"id": "1511.00561"
},
{
"id": "1602.06709"
},
{
"id": "1607.04381"
},
{
"id": "1510.02131"
},
{
"id": "1512.03385"
},
{
"id": "1605.06402"
},
{
"id": "1606.01561"
},
{
"id": "1510.00149"
}
] |
1602.07261 | 1 | # Abstract
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge. | 1602.07261#1 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
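The abstract in the record above (1602.07261#1) mentions that proper activation scaling stabilizes the training of very wide residual Inception networks. Below is a minimal, hedged PyTorch sketch of that idea — scaling the residual branch by a small constant before the addition. The wrapper class, the stand-in `branch` module, and the concrete factor 0.2 are illustrative assumptions (the paper reports small factors, roughly 0.1-0.3), not the authors' exact code.

```python
import torch
import torch.nn as nn


class ScaledResidual(nn.Module):
    """Wrap an arbitrary branch so its output is scaled down before the
    residual addition; down-scaling residual activations is the stabilization
    trick the abstract refers to (sketch, factor is a tunable assumption)."""

    def __init__(self, branch: nn.Module, scale: float = 0.2):
        super().__init__()
        self.branch = branch
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity path plus a damped residual path.
        return x + self.scale * self.branch(x)


if __name__ == "__main__":
    branch = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
    layer = ScaledResidual(branch, scale=0.2)
    print(layer(torch.randn(2, 16, 8, 8)).shape)  # torch.Size([2, 16, 8, 8])
```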
1602.07261 | 2 | tion [7], object tracking [18], and superresolution [3]. These examples are but a few of all the applications to which deep convolutional networks have been very successfully applied ever since.
In this work we study the combination of the two most recent ideas: residual connections introduced by He et al. in [5] and the latest revised version of the Inception architecture [15]. In [5], it is argued that residual connections are of inherent importance for training very deep architectures. Since Inception networks tend to be very deep, it is natural to replace the filter concatenation stage of the Inception architecture with residual connections. This would allow Inception to reap all the benefits of the residual approach while retaining its computational efficiency. | 1602.07261#2 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
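The chunk in the record above (1602.07261#2) describes replacing the filter-concatenation stage of an Inception module with an additive residual connection. The following is a minimal, hypothetical PyTorch sketch of that combination — parallel branches are concatenated Inception-style, projected back to the input width, and added to the input. It is not the paper's actual Inception-ResNet block; the branch layout, layer widths, and the 1x1 projection are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TinyInceptionResidualBlock(nn.Module):
    """Concatenate parallel branches (Inception style), project with a 1x1
    conv so widths match, then merge additively with the input (residual
    style) instead of passing the concatenation on directly."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, 32, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=1),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
        )
        # 1x1 projection so the concatenated branches match the input width.
        self.project = nn.Conv2d(32 + 32, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mixed = torch.cat([self.branch1(x), self.branch3(x)], dim=1)
        return self.relu(x + self.project(mixed))  # additive residual merge


if __name__ == "__main__":
    block = TinyInceptionResidualBlock(64)
    out = block(torch.randn(1, 64, 35, 35))
    print(out.shape)  # torch.Size([1, 64, 35, 35])
```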
1602.07261 | 3 | Besides a straightforward integration, we have also studied whether Inception itself can be made more efficient by making it deeper and wider. For that purpose, we designed a new version named Inception-v4 which has a more uniform simplified architecture and more inception modules than Inception-v3. Historically, Inception-v3 had inherited a lot of the baggage of the earlier incarnations. The technical constraints chiefly came from the need for partitioning the model for distributed training using DistBelief [2]. Now, after migrating our training setup to TensorFlow [1] these constraints have been lifted, which allowed us to simplify the architecture significantly. The details of that simplified architecture are described in Section 3.
# 1. Introduction | 1602.07261#3 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
1602.07261 | 4 | # 1. Introduction
Since the 2012 ImageNet competition [11] winning entry by Krizhevsky et al. [8], their network "AlexNet" has been successfully applied to a larger variety of computer vision tasks, for example to object detection [4], segmentation [10], human pose estimation [17], and video classification [7]. In this report, we will compare the two pure Inception variants, Inception-v3 and v4, with similarly expensive hybrid Inception-ResNet versions. Admittedly, those models were picked in a somewhat ad hoc manner, with the main constraint being that the parameters and computational complexity of the models should be somewhat similar to the cost of the non-residual models. In fact we have tested bigger and wider Inception-ResNet variants and they performed very similarly on the ImageNet classification chal- | 1602.07261#4 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
1602.07261 | 5 | lenge [11] dataset.
The last experiment reported here is an evaluation of an ensemble of all the best performing models presented here. As it was apparent that both Inception-v4 and Inception-ResNet-v2 performed similarly well, exceeding state-of-the-art single-frame performance on the ImageNet validation dataset, we wanted to see how a combination of those pushes the state of the art on this well studied dataset. Surprisingly, we found that gains on the single-frame performance do not translate into similarly large gains on ensembled performance. Nonetheless, it still allows us to report 3.1% top-5 error on the validation set with four models ensembled, setting a new state of the art, to our best knowledge.
In the last section, we study some of the classification failures and conclude that the ensemble still has not reached the label noise of the annotations on this dataset and there is still room for improvement for the predictions.
# 2. Related Work | 1602.07261#5 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
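The chunk in the record above (1602.07261#5) reports top-5 error for an ensemble of four models. The sketch below shows the generic evaluation idea only — averaging per-class probabilities over several trained models before taking the top-5 predictions. The `models` list, class count, and dummy classifiers are stand-in assumptions; the authors' exact ensembling and multi-crop evaluation pipeline is not specified in this chunk.

```python
import torch
import torch.nn as nn


def ensemble_top5(models, images: torch.Tensor) -> torch.Tensor:
    """Average softmax outputs over the ensemble, then return top-5 class ids."""
    probs = torch.zeros(images.size(0), 1000)
    with torch.no_grad():
        for model in models:
            model.eval()
            probs += torch.softmax(model(images), dim=1)
    probs /= len(models)
    return probs.topk(5, dim=1).indices


if __name__ == "__main__":
    # Stand-in "models": tiny random classifiers over 1000 classes.
    dummy = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 1000)) for _ in range(4)]
    print(ensemble_top5(dummy, torch.randn(2, 3, 8, 8)).shape)  # torch.Size([2, 5])
```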
1602.07261 | 7 | Residual connections were introduced by He et al. in [5], in which they give convincing theoretical and practical evidence for the advantages of utilizing additive merging of signals both for image recognition, and especially for object detection. The authors argue that residual connections are inherently necessary for training very deep convolutional models. Our findings do not seem to support this view, at least for image recognition. However, it might require more measurement points with deeper architectures to understand the true extent of beneficial aspects offered by residual connections. In the experimental section we demonstrate that it is not very difficult to train competitive very deep networks without utilizing residual connections. However, the use of residual connections seems to improve the training speed greatly, which is alone a great argument for their use. The Inception deep convolutional architecture was introduced in [14] and was called GoogLeNet or Inception-v1 in our exposition. Later the Inception architecture was refined in various ways, first by the introduction of batch normalization [6] (Inception-v2) by Ioffe et al. Later the architecture was improved by additional factorization ideas in the third iteration [15], which will be referred to as Inception-v3 in this report.
(Figure residue: residual-connection schematic with labels "Relu activation", "Conv", "Conv", "+", "Relu activation".) | 1602.07261#7 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
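The record above (1602.07261#7) discusses additive merging of signals and preserves the labels of the paper's residual-connection schematic (Relu activation, Conv, Conv, +). Below is a minimal sketch of such a unit, under the assumption of equal input and output channel counts and 3x3 convolutions — the schematic itself does not fix layer sizes, so those choices are illustrative.

```python
import torch
import torch.nn as nn


class SimpleResidualUnit(nn.Module):
    """Two convolutions on the residual path, merged additively with the
    input, mirroring the Relu -> Conv -> Conv -> (+) -> Relu schematic."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.path = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.out_relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out_relu(x + self.path(x))  # additive merging of signals


if __name__ == "__main__":
    unit = SimpleResidualUnit(32)
    print(unit(torch.randn(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 16, 16])
```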
1602.07261 | 9 | Our older Inception models used to be trained in a partitioned manner, where each replica was partitioned into multiple sub-networks in order to be able to fit the whole model in memory. However, the Inception architecture is highly tunable, meaning that there are a lot of possible changes to the number of filters in the various layers that do not affect the quality of the fully trained network. In order to optimize the training speed, we used to tune the layer sizes carefully in order to balance the computation between the various model sub-networks. In contrast, with the introduction of TensorFlow our most recent models can be trained without partitioning the replicas. This is enabled in part by recent optimizations of memory used by backpropagation, achieved by carefully considering what tensors are needed for gradient computation and structuring the computation to reduce the number of such tensors. Historically, we have been relatively conservative about changing the architectural choices and restricted our experiments to varying isolated network components while keeping the rest of the network stable. Not simplifying earlier choices resulted in networks that looked more complicated than they needed to be. In our newer | 1602.07261#9 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 | [
{
"id": "1512.00567"
},
{
"id": "1512.03385"
}
] |
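The chunk in the record above (1602.07261#9) attributes the ability to train without partitioning replicas partly to reducing the number of tensors kept around for gradient computation. The sketch below shows one standard technique in that spirit — gradient checkpointing, which recomputes activations during the backward pass instead of storing them. This is an illustrative analogue using PyTorch's `torch.utils.checkpoint`, not necessarily the optimization the authors implemented in TensorFlow; the stack width and depth are arbitrary assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class CheckpointedStack(nn.Module):
    """Run each stage under checkpointing so intermediate activations are
    recomputed during backprop rather than stored, trading compute for memory."""

    def __init__(self, width: int = 64, depth: int = 4):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
            for _ in range(depth)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for stage in self.stages:
            # Only the stage inputs are saved; activations inside each stage
            # are recomputed when gradients are needed.
            x = checkpoint(stage, x, use_reentrant=False)
        return x


if __name__ == "__main__":
    net = CheckpointedStack()
    x = torch.randn(1, 64, 32, 32, requires_grad=True)
    net(x).sum().backward()
    print(x.grad.shape)  # torch.Size([1, 64, 32, 32])
```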