doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable ⌀) | journal_ref (string, 8–194 chars, nullable ⌀) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1609.02200 | 118 | Next, we further restrict the neural network defining the distribution over the observed variables x given the smoothing variables ζ to consist of a linear transformation followed by a pointwise logistic nonlinearity, analogous to a sigmoid belief network (SBN; Spiegelhalter & Lauritzen, 1990; Neal, 1992). This decreases the negative log-likelihood to 88.8 with 200 RBM units.
We then remove the lateral connections in the RBM, reducing it to a set of independent binary random variables. The resulting network is a noisy sigmoid belief network. That is, samples are produced by drawing samples from the independent binary random variables, multiplying by an independent noise source, and then sampling from the observed variables as in a standard SBN. With this SBN-like architecture, the discrete variational autoencoder achieves a log-likelihood of
Finally, we replace the hierarchical approximating posterior of Figure 3a with the factorial approximating posterior of Figure 1a. This simplification of the approximating posterior, in addition to the prior, reduces the log-likelihood to
Footnote 21: In all cases, we report the negative log-likelihood on statically binarized MNIST (Salakhutdinov & Murray, 2008), estimated with 10^4 importance weighted samples (Burda et al., 2016).
| 1609.02200#118 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
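Footnote 21 in the chunk above reports negative log-likelihoods estimated with 10^4 importance-weighted samples (Burda et al., 2016). As a reminder of what that estimator computes, here is a minimal NumPy sketch; the log-probabilities below are random placeholders (assumptions for illustration), not the paper's actual encoder and decoder.

```python
import numpy as np

def iw_log_likelihood(log_p_xz, log_q_z):
    """Importance-weighted estimate of log p(x) (Burda et al., 2016).

    log_p_xz: shape (K,), log p(x, z_k) for K samples z_k ~ q(z | x)
    log_q_z:  shape (K,), log q(z_k | x) for the same samples
    Returns log( (1/K) * sum_k exp(log_p_xz[k] - log_q_z[k]) ), computed stably.
    """
    log_w = log_p_xz - log_q_z            # log importance weights
    m = np.max(log_w)                     # log-sum-exp trick for numerical stability
    return m + np.log(np.mean(np.exp(log_w - m)))

# toy usage with made-up log-probabilities for K = 10^4 samples of one test image
rng = np.random.default_rng(0)
K = 10_000
log_p_xz = -90.0 + rng.normal(scale=2.0, size=K)   # placeholder values, not the paper's model
log_q_z = rng.normal(scale=1.0, size=K)            # placeholder values
print(iw_log_likelihood(log_p_xz, log_q_z))
```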
1609.02200 | 119 |
Figure 12: Evolution of samples from a discrete VAE trained on Omniglot, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM.
Figure 13: Evolution of samples from a discrete VAE trained on Caltech-101 Silhouettes, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the silhouette shape remains similar demonstrate that the RBM has distinct modes, each of which corresponds to a single silhouette type, despite being trained in a wholly unsupervised manner.
| 1609.02200#119 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
1609.02200 | 120 |
Figures 11, 12, and 13 repeat the analysis of Figure 5 for statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes. Specifically, they show the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the object class over many successive rows in Figures 11, 12, and 13 indicates that the RBM prior has well-separated modes. | 1609.02200#120 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
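The chunks above describe evolving the persistent RBM chains with 100 iterations of block Gibbs sampling between successive sample rows. A minimal NumPy sketch of block Gibbs sampling for a binary RBM is given below; the weights, biases, and sizes are random placeholders (assumptions), not the trained prior from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def block_gibbs(W, b_v, b_h, v, n_steps=100, rng=None):
    """Run block Gibbs sampling on a binary RBM.

    W: (n_visible, n_hidden) weights; b_v, b_h: biases; v: initial visible state (n_visible,).
    All hidden units are resampled jointly given v, then all visible units given h.
    """
    rng = rng or np.random.default_rng()
    for _ in range(n_steps):
        p_h = sigmoid(v @ W + b_h)                  # P(h_j = 1 | v)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        p_v = sigmoid(h @ W.T + b_v)                # P(v_i = 1 | h)
        v = (rng.random(p_v.shape) < p_v).astype(float)
    return v

# toy usage: one chain state advanced 100 steps, as between successive rows of Figures 11-13
rng = np.random.default_rng(0)
n_vis, n_hid = 64, 64                               # placeholder sizes
W = 0.1 * rng.normal(size=(n_vis, n_hid))
v0 = (rng.random(n_vis) < 0.5).astype(float)
v100 = block_gibbs(W, np.zeros(n_vis), np.zeros(n_hid), v0, n_steps=100, rng=rng)
```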
1609.02200 | 121 | On statically binarized MNIST, the RBM still learns distinct, separated modes corresponding to most of the different digit types. However, these modes are not as well separated as in dynamically binarized MNIST, as is evident from the more rapid switching between digit types in Figure 11. There are no obvious modes for Omniglot in Figure 12; it is plausible that an RBM with 128 units could not represent enough well-separated modes to capture the large number of distinct character types in the Omniglot dataset. On Caltech-101 Silhouettes, there may be a mode corresponding to large, roughly convex blobs.
| 1609.02200#121 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
1608.08710 | 0 | arXiv:1608.08710v3 [cs.CV] 10 Mar 2017
Published as a conference paper at ICLR 2017
# PRUNING FILTERS FOR EFFICIENT CONVNETS
Hao Li∗ University of Maryland [email protected]
Asim Kadav NEC Labs America [email protected]
Igor Durdanovic NEC Labs America [email protected]
Hanan Samet† University of Maryland [email protected]
Hans Peter Graf NEC Labs America [email protected]
# ABSTRACT | 1608.08710#0 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 1 | Hans Peter Graf NEC Labs America [email protected]
# ABSTRACT
The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks. | 1608.08710#1 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 2 | # INTRODUCTION
The ImageNet challenge has led to significant advancements in exploring various architectural choices in CNNs (Russakovsky et al. (2015); Krizhevsky et al. (2012); Simonyan & Zisserman (2015); Szegedy et al. (2015a); He et al. (2016)). The general trend over the past few years has been that networks have grown deeper, with an overall increase in the number of parameters and convolution operations. These high-capacity networks have significant inference costs, especially when used with embedded sensors or mobile devices where computational and power resources may be limited. For these applications, in addition to accuracy, computational efficiency and small network sizes are crucial enabling factors (Szegedy et al. (2015b)). In addition, web services that provide image search and image classification APIs, which operate on a time budget and often serve hundreds of thousands of images per second, benefit significantly from lower inference times. | 1608.08710#2 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
There has been a significant amount of work on reducing the storage and computation costs by model compression (Le Cun et al. (1989); Hassibi & Stork (1993); Srinivas & Babu (2015); Han et al. (2015); Mariet & Sra (2016)). Recently Han et al. (2015; 2016b) report impressive compression rates on AlexNet (Krizhevsky et al. (2012)) and VGGNet (Simonyan & Zisserman (2015)) by pruning weights with small magnitudes and then retraining without hurting the overall accuracy. However, pruning parameters does not necessarily reduce the computation time, since the majority of the parameters removed are from the fully connected layers where the computation cost is low; e.g., the fully connected layers of VGG-16 occupy 90% of the total parameters but contribute less than 1% of the overall floating point operations (FLOP). They also demonstrate that the convolutional layers can be compressed and accelerated (Iandola et al. (2016)), but additionally require sparse
∗Work done at NEC Labs. †Supported in part by the NSF under Grant IIS-13-2079.
| 1608.08710#3 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 4 | ∗Work done at NEC Labs. †Supported in part by the NSF under Grant IIS-13-2079.
BLAS libraries or even specialized hardware (Han et al. (2016a)). Modern libraries that provide speedup using sparse operations over CNNs are often limited (Szegedy et al. (2015a); Liu et al. (2015)), and maintaining sparse data structures also creates an additional storage overhead which can be significant for low-precision weights.
Recent work on CNNs has yielded deep architectures with more efficient design (Szegedy et al. (2015a;b); He & Sun (2015); He et al. (2016)), in which the fully connected layers are replaced with average pooling layers (Lin et al. (2013); He et al. (2016)), which reduces the number of parameters significantly. The computation cost is also reduced by downsampling the image at an early stage to reduce the size of feature maps (He & Sun (2015)). Nevertheless, as the networks continue to become deeper, the computation costs of convolutional layers continue to dominate. | 1608.08710#4 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 5 | CNNs with large capacity usually have significant redundancy among different filters and feature channels. In this work, we focus on reducing the computation cost of well-trained CNNs by pruning filters. Compared to pruning weights across the network, filter pruning is a naturally structured way of pruning without introducing sparsity and therefore does not require using sparse libraries or any specialized hardware. The number of pruned filters correlates directly with acceleration by reducing the number of matrix multiplications, which is easy to tune for a target speedup. In addition, instead of layer-wise iterative fine-tuning (retraining), we adopt a one-shot pruning and retraining strategy to save retraining time for pruning filters across multiple layers, which is critical for pruning very deep networks. Finally, we observe that even ResNets, which have significantly fewer parameters and inference costs than AlexNet or VGGNet, still have about 30% of FLOP reduction without sacrificing too much accuracy. We conduct sensitivity analysis for convolutional layers in ResNets that improves the understanding of ResNets.
# 2 RELATED WORK | 1608.08710#5 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 6 | # 2 RELATED WORK
The early work by Le Cun et al. (1989) introduces Optimal Brain Damage, which prunes weights with a theoretically justified saliency measure. Later, Hassibi & Stork (1993) propose Optimal Brain Surgeon to remove unimportant weights determined by second-order derivative information. Mariet & Sra (2016) reduce the network redundancy by identifying a subset of diverse neurons that does not require retraining. However, this method only operates on the fully-connected layers and introduces sparse connections. | 1608.08710#6 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 7 | To reduce the computation costs of the convolutional layers, past work has proposed to approximate convolutional operations by representing the weight matrix as a low rank product of two smaller matrices without changing the original number of filters (Denil et al. (2013); Jaderberg et al. (2014); Zhang et al. (2015b;a); Tai et al. (2016); Ioannou et al. (2016)). Other approaches to reduce the convolutional overheads include using FFT-based convolutions (Mathieu et al. (2013)) and fast convolution using the Winograd algorithm (Lavin & Gray (2016)). Additionally, quantization (Han et al. (2016b)) and binarization (Rastegari et al. (2016); Courbariaux & Bengio (2016)) can be used to reduce the model size and lower the computation overheads. Our method can be used in addition to these techniques to reduce computation costs without incurring additional overheads. | 1608.08710#7 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 8 | Several works have studied removing redundant feature maps from a well-trained network (Anwar et al. (2015); Polyak & Wolf (2015)). Anwar et al. (2015) introduce a three-level pruning of the weights and locate the pruning candidates using particle filtering, which selects the best combination from a number of randomly generated masks. Polyak & Wolf (2015) detect the less frequently activated feature maps with sample input data for face detection applications. We choose to analyze the filter weights and prune filters with their corresponding feature maps using a simple magnitude-based measure, without examining possible combinations. We also introduce network-wide holistic approaches to prune filters for simple and complex convolutional network architectures. | 1608.08710#8 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 9 | Concurrently with our work, there is a growing interest in training compact CNNs with sparse constraints (Lebedev & Lempitsky (2016); Zhou et al. (2016); Wen et al. (2016)). Lebedev & Lempitsky (2016) leverage group-sparsity on the convolutional filters to achieve structured brain damage, i.e., prune the entries of the convolution kernel in a group-wise fashion. Zhou et al. (2016) add group-sparse regularization on neurons during training to learn compact CNNs with reduced filters. Wen et al. (2016) add a structured sparsity regularizer on each layer to reduce trivial filters, channels or even layers. In filter-level pruning, all of the above works use the $\ell_{2,1}$-norm as a regularizer.
Similar to the above work, we use the $\ell_1$-norm to select unimportant filters and physically prune them. Our fine-tuning process is the same as the conventional training procedure, without introducing additional regularization. Our approach does not introduce extra layer-wise meta-parameters for the regularizer except for the percentage of filters to be pruned, which is directly related to the desired speedup. By employing stage-wise pruning, we can set a single pruning rate for all layers in one stage. | 1608.08710#9 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
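To make the $\ell_{2,1}$ (group-lasso) regularizer mentioned in chunk 1608.08710#9 concrete, here is a hedged NumPy sketch of the penalty computed over the output filters of one convolutional layer; the weight tensor is a random placeholder and the strength `lam` is an assumed hyperparameter, not a value from any of the cited papers.

```python
import numpy as np

def group_sparse_penalty(weights, lam=1e-4):
    """Group-lasso (l2,1) penalty over the output filters of one conv layer.

    weights: kernel tensor of shape (n_out, n_in, k, k).
    Each output filter F_{i,j} = weights[j] forms one group; the penalty is
    lam * sum_j ||F_{i,j}||_2, which drives whole filters toward zero during training.
    """
    per_filter_l2 = np.sqrt((weights ** 2).sum(axis=(1, 2, 3)))   # ||F_{i,j}||_2 for each j
    return lam * per_filter_l2.sum()

# toy usage on a layer with 64 filters of shape 32x3x3
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32, 3, 3))
print(group_sparse_penalty(W))   # value that would be added to the training loss
```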
1608.08710 | 11 | Let $n_i$ denote the number of input channels for the $i$-th convolutional layer and $h_i/w_i$ be the height/width of the input feature maps. The convolutional layer transforms the input feature maps $x_i \in \mathbb{R}^{n_i \times h_i \times w_i}$ into the output feature maps $x_{i+1} \in \mathbb{R}^{n_{i+1} \times h_{i+1} \times w_{i+1}}$, which are used as input feature maps for the next convolutional layer. This is achieved by applying $n_{i+1}$ 3D filters $\mathcal{F}_{i,j} \in \mathbb{R}^{n_i \times k \times k}$ on the $n_i$ input channels, in which one filter generates one feature map. Each filter is composed of $n_i$ 2D kernels $\mathcal{K} \in \mathbb{R}^{k \times k}$ (e.g., $3 \times 3$). All the filters, together, constitute the kernel matrix $\mathcal{F}_i \in \mathbb{R}^{n_i \times n_{i+1} \times k \times k}$. The number of operations of the convolutional layer is $n_{i+1} n_i k^2 h_{i+1} w_{i+1}$. As shown in Figure 1, when a filter $\mathcal{F}_{i,j}$ is pruned, its corresponding feature map $x_{i+1,j}$ is removed, which reduces $n_i k^2 h_{i+1} w_{i+1}$ operations. The kernels that apply on the removed feature maps from the filters of the next convolutional | 1608.08710#11 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
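As a rough illustration of the operation counts in chunk 1608.08710#11 ($n_{i+1} n_i k^2 h_{i+1} w_{i+1}$ multiply-accumulates per layer, with savings both in the pruned layer and in the one that follows), here is a small sketch with made-up layer sizes; the channel counts and feature-map sizes are assumptions for illustration, not the paper's configurations.

```python
def conv_flops(n_in, n_out, k, h_out, w_out):
    """Multiply-accumulate count of a k x k convolution producing n_out maps of size h_out x w_out."""
    return n_out * n_in * k * k * h_out * w_out

# placeholder sizes loosely inspired by an early VGG-16 stage on 32x32 inputs
n_i, n_i1, n_i2 = 64, 128, 128          # channels of layers i, i+1, i+2
k, h, w = 3, 32, 32                     # kernel size and output feature-map size

base = conv_flops(n_i, n_i1, k, h, w) + conv_flops(n_i1, n_i2, k, h, w)

m = 32                                   # prune m filters from the layer producing n_i1 maps
pruned = conv_flops(n_i, n_i1 - m, k, h, w) + conv_flops(n_i1 - m, n_i2, k, h, w)

print(f"FLOP reduction: {1 - pruned / base:.1%}")   # savings in that layer and the next one
```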
1608.08710 | 14 | 3.1 DETERMINING WHICH FILTERS TO PRUNE WITHIN A SINGLE LAYER
Our method prunes the less useful filters from a well-trained model for computational efficiency while minimizing the accuracy drop. We measure the relative importance of a filter in each layer by calculating the sum of its absolute weights $\sum |\mathcal{F}_{i,j}|$, i.e., its $\ell_1$-norm $\|\mathcal{F}_{i,j}\|_1$. Since the number of input channels, $n_i$, is the same across filters, $\sum |\mathcal{F}_{i,j}|$ also represents the average magnitude of its kernel weights. This value gives an expectation of the magnitude of the output feature map. Filters with smaller kernel weights tend to produce feature maps with weak activations as compared to the other filters in that layer. Figure 2(a) illustrates the distribution of filters' absolute weights sum for each convolutional layer in a VGG-16 network trained on the CIFAR-10 dataset, where the distribution varies significantly across layers. We find that pruning the smallest filters works better in comparison with pruning the same number of random or largest filters (Section 4.4). Compared to other criteria for activation-based feature map pruning (Section 4.5), we find the $\ell_1$-norm is a good criterion for data-free filter selection. | 1608.08710#14 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
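Chunk 1608.08710#14 compares pruning the smallest-$\ell_1$ filters against pruning random or largest filters. A minimal NumPy sketch of those three selection rules is below; the weight tensor and prune count are placeholders (assumptions), and only the "smallest" rule is the criterion the paper adopts.

```python
import numpy as np

def select_prune_indices(weights, m, strategy="smallest", rng=None):
    """Pick m filters to prune by l1 norm: 'smallest' (the adopted criterion), 'largest', or 'random'."""
    s = np.abs(weights).sum(axis=(1, 2, 3))      # s_j = sum |F_{i,j}| per output filter
    if strategy == "smallest":
        return np.argsort(s)[:m]
    if strategy == "largest":
        return np.argsort(s)[-m:]
    rng = rng or np.random.default_rng()
    return rng.choice(len(s), size=m, replace=False)

# toy usage: score the 64 filters of a 3x3 conv layer with 32 input channels
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32, 3, 3))              # placeholder conv kernels
print(select_prune_indices(W, m=8, strategy="smallest"))
```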
1608.08710 | 15 | The procedure of pruning $m$ filters from the $i$-th convolutional layer is as follows:
1. For each filter $\mathcal{F}_{i,j}$, calculate the sum of its absolute kernel weights $s_j = \sum_{l=1}^{n_i} \sum |\mathcal{K}_l|$.
2. Sort the filters by $s_j$.
3. Prune $m$ filters with the smallest sum values and their corresponding feature maps. The kernels in the next convolutional layer corresponding to the pruned feature maps are also removed.
4. A new kernel matrix is created for both the $i$-th and $(i+1)$-th layers, and the remaining kernel weights are copied to the new model.
(a) Filters are ranked by $s_j$ (b) Prune the smallest filters (c) Prune and retrain | 1608.08710#15 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
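A hedged sketch of the four-step procedure in chunk 1608.08710#15, written with NumPy arrays (layer shapes and the prune count are assumptions): it scores the filters of layer i, drops the m smallest, and builds the new, narrower kernel matrices for layers i and i+1.

```python
import numpy as np

def prune_layer_pair(W_i, W_next, m):
    """Prune the m smallest-l1 filters of layer i and the matching kernels of layer i+1.

    W_i:    (n_out_i, n_in_i, k, k) kernels of layer i.
    W_next: (n_out_next, n_out_i, k, k) kernels of layer i+1 (its input channels are layer i's outputs).
    Returns the two new, smaller kernel matrices.
    """
    s = np.abs(W_i).sum(axis=(1, 2, 3))        # step 1: s_j for each filter
    keep = np.sort(np.argsort(s)[m:])          # steps 2-3: drop the m smallest, keep the rest in order
    W_i_new = W_i[keep]                        # step 3: remove filters (and thus their feature maps)
    W_next_new = W_next[:, keep]               # step 3: remove the corresponding downstream kernels
    return W_i_new, W_next_new                 # step 4: copy into the new, narrower model

# toy usage: prune 16 of 64 filters
rng = np.random.default_rng(0)
W_i = rng.normal(size=(64, 32, 3, 3))
W_next = rng.normal(size=(128, 64, 3, 3))
A, B = prune_layer_pair(W_i, W_next, m=16)
print(A.shape, B.shape)                        # (48, 32, 3, 3) (128, 48, 3, 3)
```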
1608.08710 | 16 |
Figure 2: (a) Sorting filters by absolute weights sum for each layer of VGG-16 on CIFAR-10. The x-axis is the filter index divided by the total number of filters. The y-axis is the filter weight sum divided by the max sum value among filters in that layer. (b) Pruning filters with the lowest absolute weights sum and their corresponding test accuracies on CIFAR-10. (c) Prune and retrain for each single layer of VGG-16 on CIFAR-10. Some layers are sensitive and it can be harder to recover accuracy after pruning them. | 1608.08710#16 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 17 | Relationship to pruning weights. Pruning filters with low absolute weights sum is similar to pruning low-magnitude weights (Han et al. (2015)). Magnitude-based weight pruning may prune away whole filters when all the kernel weights of a filter are lower than a given threshold. However, it requires a careful tuning of the threshold and it is difficult to predict the exact number of filters that will eventually be pruned. Furthermore, it generates sparse convolutional kernels which can be hard to accelerate given the lack of efficient sparse libraries, especially for the case of low sparsity. | 1608.08710#17 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
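To contrast the two regimes discussed in chunk 1608.08710#17, the sketch below applies a magnitude threshold to individual weights (yielding a sparse kernel of unchanged shape) versus dropping whole filters (yielding a smaller but dense kernel); the threshold and shapes are arbitrary placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32, 3, 3))               # placeholder conv kernels

# magnitude-based weight pruning: zero out small entries; shape is unchanged and the result is sparse
threshold = 0.5                                   # arbitrary placeholder threshold
W_sparse = np.where(np.abs(W) < threshold, 0.0, W)
print(W_sparse.shape, f"sparsity={np.mean(W_sparse == 0):.1%}")

# filter pruning: drop the 16 filters with the smallest l1 norm; shape shrinks but stays dense
s = np.abs(W).sum(axis=(1, 2, 3))
W_dense_small = W[np.sort(np.argsort(s)[16:])]
print(W_dense_small.shape)                        # (48, 32, 3, 3), usable with dense BLAS kernels
```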
1608.08710 | 18 | Relationship to group-sparse regularization on filters. Recent work (Wen et al. (2016)) applies group-sparse regularization ($\sum_{j=1}^{n_{i+1}} \|\mathcal{F}_{i,j}\|_2$, i.e. the $\ell_{2,1}$-norm) on convolutional filters, which also favors zeroing out filters with small $\ell_2$-norms, i.e. $\mathcal{F}_{i,j} = 0$. In practice, we do not observe a noticeable difference between the $\ell_2$-norm and the $\ell_1$-norm for filter selection, as the important filters tend to have large values for both measures (Appendix). Zeroing out weights of multiple filters during training has a similar effect to pruning filters with the strategy of iterative pruning and retraining as introduced in Section 3.4.
3.2 DETERMINING SINGLE LAYER'S SENSITIVITY TO PRUNING | 1608.08710#18 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 19 | 3.2 DETERMINING SINGLE LAYER'S SENSITIVITY TO PRUNING
To understand the sensitivity of each layer, we prune each layer independently and evaluate the resulting pruned network's accuracy on the validation set. Figure 2(b) shows that layers that maintain their accuracy as filters are pruned away correspond to layers with larger slopes in Figure 2(a). On the contrary, layers with relatively flat slopes are more sensitive to pruning. We empirically determine the number of filters to prune for each layer based on their sensitivity to pruning. For deep networks such as VGG-16 or ResNets, we observe that layers in the same stage (with the same feature map size) have a similar sensitivity to pruning. To avoid introducing layer-wise meta-parameters, we use the same pruning ratio for all layers in the same stage. For layers that are sensitive to pruning, we prune a smaller percentage of these layers or completely skip pruning them.
# 3.3 PRUNING FILTERS ACROSS MULTIPLE LAYERS | 1608.08710#19 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
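The sensitivity analysis in chunk 1608.08710#19 amounts to a simple scan: prune each layer in isolation at several ratios and check validation accuracy without retraining. A hedged, runnable skeleton is below; the "model" is a toy dict of random kernels and `evaluate` is a dummy stand-in (assumptions), since the real scan depends on whatever training framework is in use.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in "model": a dict of conv kernels with placeholder shapes (assumption for illustration)
model = {f"conv_{i}": rng.normal(size=(64, 64, 3, 3)) for i in range(1, 4)}

def prune_single_layer(weights, ratio):
    """Drop the smallest-l1 fraction of filters from one layer's kernel tensor."""
    s = np.abs(weights).sum(axis=(1, 2, 3))
    m = int(ratio * len(s))
    keep = np.sort(np.argsort(s)[m:])
    return weights[keep]

def evaluate(pruned_model):
    """Placeholder: a real scan would measure validation accuracy of the pruned network here."""
    return float(sum(w.size for w in pruned_model.values()))   # dummy proxy, NOT an accuracy

sensitivity = {}
for name in model:
    for ratio in (0.1, 0.5, 0.9):
        candidate = dict(model)                       # keep every other layer's trained weights
        candidate[name] = prune_single_layer(model[name], ratio)
        sensitivity[(name, ratio)] = evaluate(candidate)
```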
1608.08710 | 20 | # 3.3 PRUNING FILTERS ACROSS MULTIPLE LAYERS
We now discuss how to prune filters across the network. Previous work prunes the weights on a layer-by-layer basis, followed by iteratively retraining and compensating for any loss of accuracy (Han et al. (2015)). However, understanding how to prune filters of multiple layers at once can be useful: 1) For deep networks, pruning and retraining on a layer-by-layer basis can be extremely time-consuming. 2) Pruning layers across the network gives a holistic view of the robustness of the network, resulting in a smaller network. 3) For complex networks, a holistic approach may be necessary. For example, for the ResNet, pruning the identity feature maps or the second layer of each residual block results in additional pruning of other layers.
To prune filters across multiple layers, we consider two strategies for layer-wise filter selection:
⢠Independent pruning determines which ï¬lters should be pruned at each layer independent of other layers.
⢠Greedy pruning accounts for the ï¬lters that have been removed in the previous layers. This strategy does not consider the kernels for the previously pruned feature maps while calculating the sum of absolute weights. | 1608.08710#20 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
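Chunk 1608.08710#20 distinguishes independent from greedy filter selection across consecutive layers; the sketch below (NumPy, placeholder shapes and prune counts) shows the one concrete difference: whether the kernels belonging to already-pruned input maps are counted in the next layer's $\ell_1$ sums.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 32, 3, 3))      # layer i   (its output maps feed layer i+1)
W2 = rng.normal(size=(128, 64, 3, 3))     # layer i+1 (input channels = layer i's filters)
m1, m2 = 16, 32

# layer i: both strategies score its filters the same way
prune1 = np.argsort(np.abs(W1).sum(axis=(1, 2, 3)))[:m1]

# independent: layer i+1 scores ignore what was pruned upstream
s_independent = np.abs(W2).sum(axis=(1, 2, 3))

# greedy: exclude the kernels that act on the feature maps already removed from layer i
keep1 = np.setdiff1d(np.arange(W1.shape[0]), prune1)
s_greedy = np.abs(W2[:, keep1]).sum(axis=(1, 2, 3))

prune2_independent = np.argsort(s_independent)[:m2]
prune2_greedy = np.argsort(s_greedy)[:m2]
print(np.setdiff1d(prune2_greedy, prune2_independent))   # filters chosen only under the greedy rule
```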
1608.08710 | 21 | Figure 3 illustrates the difference between the two approaches in calculating the sum of absolute weights. The greedy approach, though not globally optimal, is holistic and results in pruned networks with higher accuracy, especially when many filters are pruned.
Figure 3: Pruning filters across consecutive layers. The independent pruning strategy calculates the filter sum (columns marked in green) without considering feature maps removed in the previous layer (shown in blue), so the kernel weights marked in yellow are still included. The greedy pruning strategy does not count kernels for the already pruned feature maps. Both approaches result in a $(n_{i+1} - 1) \times (n_{i+2} - 1)$ kernel matrix.
projection shortcut ie Xi U Xi+1 X42 » residual block . P(x)
Figure 4: Pruning residual blocks with the projection shortcut. The ï¬lters to be pruned for the second layer of the residual block (marked as green) are determined by the pruning result of the shortcut projection. The ï¬rst layer of the residual block can be pruned without restrictions. | 1608.08710#21 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
For simpler CNNs like VGGNet or AlexNet, we can easily prune any of the filters in any convolutional layer. However, for complex network architectures such as Residual networks (He et al. (2016)), pruning filters may not be straightforward. The architecture of ResNet imposes restrictions, and the filters need to be pruned carefully. We show the filter pruning for residual blocks with projection mapping in Figure 4. Here, the filters of the first layer in the residual block can be arbitrarily pruned, as this does not change the number of output feature maps of the block. However, the correspondence between the output feature maps of the second convolutional layer and the identity feature maps makes it difficult to prune. Hence, to prune the second convolutional layer of the residual block, the corresponding projected feature maps must also be pruned. Since the identical feature maps are more important than the added residual maps, the feature maps to be pruned should be determined by the pruning results of the shortcut layer. To determine which identity feature maps are to be pruned, we use the same selection criterion based on the filters of the shortcut convolutional layers (with 1 × 1 kernels).
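A minimal sketch of this constraint is shown below: the first convolutional layer of the block is pruned by its own ℓ1 ranking, while the feature maps pruned from the second layer are dictated by the ℓ1 ranking of the 1 × 1 projection filters on the shortcut. The equal pruning ratio, array layout and function names are assumptions made for illustration.

```python
import numpy as np

def l1_scores(weights):
    # weights: (n_out, n_in, k, k) -> l1-norm per output filter
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def prune_residual_block(conv1_w, conv2_w, shortcut_w, ratio):
    """Pick pruning indices for a residual block with a projection shortcut.

    conv1_w:    first conv of the block, pruned freely by its own l1 ranking.
    conv2_w:    second conv of the block; its pruned output maps must match
                the maps pruned on the shortcut path.
    shortcut_w: 1x1 projection conv on the shortcut path.
    """
    n_prune = int(round(ratio * conv1_w.shape[0]))

    prune_conv1 = np.argsort(l1_scores(conv1_w))[:n_prune]
    # The shortcut decides which output feature maps of the block disappear
    prune_shortcut = np.argsort(l1_scores(shortcut_w))[:n_prune]
    prune_conv2 = prune_shortcut  # second layer follows the shortcut
    return prune_conv1, prune_conv2, prune_shortcut
```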
# 3.4 RETRAINING PRUNED NETWORKS TO REGAIN ACCURACY
After pruning the filters, the performance degradation should be compensated for by retraining the network. There are two strategies to prune the filters across multiple layers:
1. Prune once and retrain: Prune filters of multiple layers at once and retrain them until the original accuracy is restored.
2. Prune and retrain iteratively: Prune filters layer by layer or filter by filter and then retrain iteratively. The model is retrained before pruning the next layer so that the weights can adapt to the changes from the pruning process.
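The two strategies can be written as two small loops; the sketch below is framework-agnostic, with prune_layer and retrain passed in as stand-ins for the actual pruning and training routines (both are hypothetical helpers, not part of the paper's code).

```python
def prune_once_and_retrain(model, layers, ratios, prune_layer, retrain):
    """Strategy 1: prune filters of all selected layers at once, then retrain once."""
    for layer, ratio in zip(layers, ratios):
        model = prune_layer(model, layer, ratio)
    return retrain(model)

def prune_and_retrain_iteratively(model, layers, ratios, prune_layer, retrain):
    """Strategy 2: alternate pruning a single layer with a retraining step,
    so the remaining weights adapt before the next layer is pruned."""
    for layer, ratio in zip(layers, ratios):
        model = prune_layer(model, layer, ratio)
        model = retrain(model)
    return model
```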
We find that for the layers that are resilient to pruning, the prune once and retrain strategy can be used to prune away significant portions of the network, and any loss in accuracy can be regained by retraining for a short period of time (less than the original training time). However, when some filters from the sensitive layers are pruned away or large portions of the networks are pruned away, it may not be possible to recover the original accuracy. Iterative pruning and retraining may yield better results, but the iterative process requires many more epochs, especially for very deep networks.
# 4 EXPERIMENTS
We prune two types of networks: simple CNNs (VGG-16 on CIFAR-10) and Residual networks (ResNet-56/110 on CIFAR-10 and ResNet-34 on ImageNet). Unlike AlexNet or VGG (on ImageNet) that are often used to demonstrate model compression, both VGG (on CIFAR-10) and Residual networks have fewer parameters in the fully connected layers. Hence, pruning a large percentage of parameters from these networks is challenging. We implement our filter pruning method in Torch7 (Collobert et al. (2011)). When filters are pruned, a new model with fewer filters is created and the remaining parameters of the modified layers as well as the unaffected layers are copied into the new model. Furthermore, if a convolutional layer is pruned, the weights of the subsequent batch normalization layer are also removed. To get the baseline accuracies for each network, we train each model from scratch and follow the same pre-processing and hyper-parameters as ResNet (He et al. (2016)). For retraining, we use a constant learning rate 0.001 and retrain 40 epochs for CIFAR-10 and 20 epochs for ImageNet.
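The paper's implementation is in Torch7; the PyTorch-style sketch below illustrates the same bookkeeping described above: a narrower layer is created, the surviving weights of the modified layer are copied over, and the parameters of the subsequent batch-normalization layer that correspond to pruned maps are dropped. The keep_out/keep_in index lists and the function names are assumptions for illustration.

```python
import torch.nn as nn

def pruned_conv(old_conv, keep_out, keep_in):
    """Create a smaller Conv2d and copy the surviving weights into it.

    keep_out / keep_in: indices of the output / input feature maps to keep.
    """
    new_conv = nn.Conv2d(len(keep_in), len(keep_out),
                         kernel_size=old_conv.kernel_size,
                         stride=old_conv.stride,
                         padding=old_conv.padding,
                         bias=old_conv.bias is not None)
    # Keep only the surviving output filters and their surviving input kernels
    new_conv.weight.data.copy_(old_conv.weight.data[keep_out][:, keep_in].clone())
    if old_conv.bias is not None:
        new_conv.bias.data.copy_(old_conv.bias.data[keep_out].clone())
    return new_conv

def pruned_batchnorm(old_bn, keep):
    """Drop the batch-norm parameters of pruned feature maps."""
    new_bn = nn.BatchNorm2d(len(keep))
    new_bn.weight.data.copy_(old_bn.weight.data[keep])
    new_bn.bias.data.copy_(old_bn.bias.data[keep])
    new_bn.running_mean.copy_(old_bn.running_mean[keep])
    new_bn.running_var.copy_(old_bn.running_var[keep])
    return new_bn
```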
Table 1: Overall results.

| Model | Error (%) | FLOP | Pruned (%) | Parameters | Pruned (%) |
|---|---|---|---|---|---|
| VGG-16 | 6.75 | 3.13×10^8 | | 1.5×10^7 | |
| VGG-16-pruned-A | 6.60 | 2.06×10^8 | 34.2% | 5.4×10^6 | |
| VGG-16-pruned-A scratch-train | 6.88 | | | | |
| ResNet-56 | 6.96 | 1.25×10^8 | | 8.5×10^5 | |
| ResNet-56-pruned-A | 6.90 | 1.12×10^8 | 10.4% | 7.7×10^5 | |
| ResNet-56-pruned-B | 6.94 | 9.09×10^7 | 27.6% | 7.3×10^5 | |
| ResNet-56-pruned-B scratch-train | 8.69 | | | | |
| ResNet-110 | 6.47 | 2.53×10^8 | | 1.72×10^6 | |
| ResNet-110-pruned-A | 6.45 | 2.13×10^8 | 15.9% | 1.68×10^6 | |
| ResNet-110-pruned-B | 6.70 | 1.55×10^8 | 38.6% | | |
| ResNet-110-pruned-B scratch-train | 7.06 | | | | |
| ResNet-34 | 26.77 | 3.64×10^9 | | | |
| ResNet-34-pruned-A | 27.44 | 3.08×10^9 | | | |
| ResNet-34-pruned-B | 27.83 | 2.76×10^9 | | | |
| ResNet-34-pruned-C | 27.52 | 3.37×10^9 | | | |
# 4.1 VGG-16 ON CIFAR-10

VGG-16 is a high-capacity network originally designed for the ImageNet dataset (Simonyan & Zisserman (2015)). Recently, Zagoruyko (2015) applies a slightly modified version of the model on CIFAR-10 and achieves state-of-the-art results. As shown in Table 2, VGG-16 on CIFAR-10 consists of 13 convolutional layers and 2 fully connected layers, in which the fully connected layers do not occupy large portions of parameters due to the small input size and fewer hidden units. We use the model described in Zagoruyko (2015) but add a Batch Normalization (Ioffe & Szegedy (2015)) layer after each convolutional layer and the first linear layer, without using Dropout.

Table 2: VGG-16 on CIFAR-10 and the pruned model. The last two columns show the number of feature maps and the reduced percentage of FLOP from the pruned model.
| layer type | wi × hi | #Maps | FLOP | #Params | #Maps (pruned) | FLOP reduced (%) |
|---|---|---|---|---|---|---|
| Conv 1 | 32 × 32 | 64 | 1.8E+06 | 1.7E+03 | 32 | |
| Conv 2 | 32 × 32 | 64 | 3.8E+07 | 3.7E+04 | 64 | |
| Conv 3 | 16 × 16 | 128 | 1.9E+07 | 7.4E+04 | 128 | |
| Conv 4 | 16 × 16 | 128 | 3.8E+07 | 1.5E+05 | 128 | |
| Conv 5 | 8 × 8 | 256 | 1.9E+07 | 2.9E+05 | 256 | |
| Conv 6 | 8 × 8 | 256 | 3.8E+07 | 5.9E+05 | 256 | |
| Conv 7 | 8 × 8 | 256 | 3.8E+07 | 5.9E+05 | 256 | |
| Conv 8 | 4 × 4 | 512 | 1.9E+07 | 1.2E+06 | 256 | |
| Conv 9 | 4 × 4 | 512 | 3.8E+07 | 2.4E+06 | 256 | |
| Conv 10 | 4 × 4 | 512 | 3.8E+07 | 2.4E+06 | 256 | |
| Conv 11 | 2 × 2 | 512 | 9.4E+06 | 2.4E+06 | 256 | |
| Conv 12 | 2 × 2 | 512 | 9.4E+06 | 2.4E+06 | 256 | |
| Conv 13 | 2 × 2 | 512 | 9.4E+06 | 2.4E+06 | 256 | |
| Linear | 1 | 512 | 2.6E+05 | 2.6E+05 | 512 | |
| Linear | 1 | 10 | 5.1E+03 | 5.1E+03 | 10 | |
| Total | | | 3.1E+08 | | | |
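The FLOP and parameter counts in the table follow directly from the layer shapes, counting one multiply-accumulate as one FLOP and ignoring biases, which is consistent with the numbers above (e.g., Conv 2: 64 · 64 · 3² · 32 · 32 ≈ 3.8 × 10^7 FLOP and 64 · 64 · 3² ≈ 3.7 × 10^4 parameters). The helper below is an illustrative sketch, not code from the paper.

```python
def conv_cost(n_in, n_out, k, h_out, w_out):
    """FLOP and parameter count of a k x k convolution (biases ignored)."""
    params = n_out * n_in * k * k
    flop = params * h_out * w_out
    return flop, params

# Example: Conv 2 of VGG-16 on CIFAR-10 (64 -> 64 maps on a 32 x 32 feature map)
flop, params = conv_cost(64, 64, 3, 32, 32)   # ~3.8e7 FLOP, ~3.7e4 params

# Pruning m of the 64 filters here removes m/64 of this layer's FLOP and also
# shrinks the next layer, whose kernels on the removed maps disappear as well.
```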
As shown in Figure 2(b), each of the convolutional layers with 512 feature maps can drop at least 60% of filters without affecting the accuracy. Figure 2(c) shows that with retraining, almost 90% of the filters of these layers can be safely removed. One possible explanation is that these filters operate on 4 × 4 or 2 × 2 feature maps, which may have no meaningful spatial connections in such small dimensions. For instance, ResNets for CIFAR-10 do not perform any convolutions for feature maps below 8 × 8 dimensions. Unlike previous work (Zeiler & Fergus (2014); Han et al. (2015)), we observe that the first layer is robust to pruning as compared to the next few layers. This is possible for a simple dataset like CIFAR-10, on which the model does not learn as many useful filters as on ImageNet (as shown in Figure 5). Even when 80% of the filters from the first layer are pruned, the number of remaining filters (12) is still larger than the number of raw input channels. However, when removing 80%
filters from the second layer, the layer corresponds to a 64 to 12 mapping, which may lose significant information from previous layers, thereby hurting the accuracy. With 50% of the filters being pruned in layer 1 and layers 8 to 13, we achieve 34% FLOP reduction for the same accuracy.
Figure 5: Visualization of filters in the first convolutional layer of VGG-16 trained on CIFAR-10. Filters are ranked by ℓ1-norm.

# 4.2 RESNET-56/110 ON CIFAR-10

ResNets for CIFAR-10 have three stages of residual blocks for feature maps with sizes of 32 × 32, 16 × 16 and 8 × 8. Each stage has the same number of residual blocks. When the number of feature maps increases, the shortcut layer provides an identity mapping with an additional zero padding for the increased dimensions. Since there is no projection mapping for choosing the identity feature maps, we only consider pruning the first layer of the residual block. As shown in Figure 6, most of the layers are robust to pruning. For ResNet-110, pruning some single layers without retraining even
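The per-layer sensitivity curves referred to here (Figures 2, 6 and 7) are obtained by pruning a single layer at a time at increasing ratios and evaluating the validation accuracy without retraining. The sketch below is a hedged outline of that loop; prune_layer and evaluate are hypothetical stand-ins for the actual pipeline.

```python
def sensitivity_analysis(model, layers, ratios, prune_layer, evaluate):
    """Prune each layer independently at several ratios (no retraining) and
    record validation accuracy, as in the per-layer sensitivity plots."""
    results = {}
    for layer in layers:
        results[layer] = []
        for ratio in ratios:
            pruned = prune_layer(model, layer, ratio)   # assumed to return a copy
            results[layer].append((ratio, evaluate(pruned)))
    return results
```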
Figure 6: Sensitivity to pruning for the first layer of each residual block of ResNet-56/110.
improves the performance. In addition, we find that layers that are sensitive to pruning (layers 20, 38 and 54 for ResNet-56, layers 36, 38 and 74 for ResNet-110) lie at the residual blocks close to the layers where the number of feature maps changes, e.g., the first and the last residual blocks for each stage. We believe this happens because the precise residual errors are necessary for the newly added empty feature maps.
The retraining performance can be improved by skipping these sensitive layers. As shown in Table 1, ResNet-56-pruned-A improves the performance by pruning 10% of the filters while skipping the sensitive layers 16, 20, 38 and 54. In addition, we find that deeper layers are more sensitive to pruning than layers in the earlier stages of the network. Hence, we use a different pruning rate for each stage. We use pi to denote the pruning rate for layers in the ith stage. ResNet-56-pruned-B skips more layers (16, 18, 20, 34, 38, 54) and prunes layers with p1=60%, p2=30% and p3=10%. For ResNet-110, the first pruned model gets a slightly better result with p1=50% and layer 36 skipped. ResNet-110-pruned-B skips layers 36, 38, 74 and prunes with p1=50%, p2=40% and p3=30%. When there are more than two residual blocks at each stage, the middle residual blocks may be redundant and can be easily pruned. This might explain why ResNet-110 is easier to prune than ResNet-56.
# 4.3 RESNET-34 ON ILSVRC2012
ResNets for ImageNet have four stages of residual blocks for feature maps with sizes of 56 × 56, 28 × 28, 14 × 14 and 7 × 7. ResNet-34 uses the projection shortcut when the feature maps are down-sampled. We first prune the first layer of each residual block. Figure 7 shows the sensitivity of the first layer of each residual block. Similar to ResNet-56/110, the first and the last residual blocks of each stage are more sensitive to pruning than the intermediate blocks (i.e., layers 2, 8, 14, 16, 26, 28, 30, 32). We skip those layers and prune the remaining layers at each stage equally. In Table 1 we compare two configurations of pruning percentages for the first three stages: (A) p1=30%, p2=30%, p3=30%; (B) p1=50%, p2=60%, p3=40%. Option-B provides 24% FLOP reduction with about 1% loss in accuracy. As seen in the pruning results for ResNet-56/110, we can predict that ResNet-34 is relatively more difficult to prune as compared to deeper ResNets.
ResNets for ImageNet have four stages of residual blocks for feature maps with sizes of 56 à 56, 28 à 28, 14 à 14 and 7 à 7. ResNet-34 uses the projection shortcut when the feature maps are down-sampled. We ï¬rst prune the ï¬rst layer of each residual block. Figure 7 shows the sensitivity of the ï¬rst layer of each residual block. Similar to ResNet-56/110, the ï¬rst and the last residual blocks of each stage are more sensitive to pruning than the intermediate blocks (i.e., layers 2, 8, 14, 16, 26, 28, 30, 32). We skip those layers and prune the remaining layers at each stage equally. In Table 1 we compare two conï¬gurations of pruning percentages for the ï¬rst three stages: (A) p1=30%, p2=30%, p3=30%; (B) p1=50%, p2=60%, p3=40%. Option-B provides 24% FLOP reduction with about 1% loss in accuracy. As seen in the pruning results for ResNet-50/110, we can predict that ResNet-34 is relatively more difï¬cult to prune as compared to deeper ResNets. | 1608.08710#43 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
We also prune the identity shortcuts and the second convolutional layer of the residual blocks. As these layers have the same number of filters, they are pruned equally. As shown in Figure 7(b), these layers are more sensitive to pruning than the first layers. With retraining, ResNet-34-pruned-C prunes the third stage with p3=20% and results in 7.5% FLOP reduction with 0.75% loss in accuracy. Therefore, pruning the first layer of the residual block is more effective at reducing the overall FLOP
(a) Pruning the first layer of residual blocks (b) Pruning the second layer of residual blocks
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 45 | (a) Pruning the ï¬rst layer of residual blocks (b) Pruning the second layer of residual blocks
Figure 7: Sensitivity to pruning for the residual blocks of ResNet-34.
than pruning the second layer. This finding also correlates with the bottleneck block design for deeper ResNets, which first reduces the dimension of input feature maps for the residual layer and then increases the dimension to match the identity mapping.
# 4.4 COMPARISON WITH PRUNING RANDOM FILTERS AND LARGEST FILTERS | 1608.08710#45 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 46 | # 4.4 COMPARISON WITH PRUNING RANDOM FILTERS AND LARGEST FILTERS
We compare our approach with pruning random filters and largest filters. As shown in Figure [8] pruning the smallest filters outperforms pruning random filters for most of the layers at different pruning ratios. For example, smallest filter pruning has better accuracy than random filter pruning for all layers with the pruning ratio of 90%. The accuracy of pruning filters with the largest ¢;-norms drops quickly as the pruning ratio increases, which indicates the importance of filters with larger ¢,-norms.
100 GIFAR10, VGG-16, prune fiters with smallest f-norm ot CIFAR10, VGG-16, prune random filters CIFAR1O, VGG-16, prune fiters with largest /,-norm = con 166 es conv.2 64 + conv.3 128 + conv.4 128 © conv 5 256 2 conv 6 256 e* conv.7 256 © conv.8 512 © conv.9 512 © conv.10512 © conv.11512, © conv.12512 © conv.13512 pecuracy 0 Ea Co Too 0 3 Cy Too 0 w Co a0 % a0 % a0 % Fits Pruned Awayit) Fites Pred Awayit) Fits Pruned Awayit) | 1608.08710#46 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
Figure 8: Comparison of three pruning methods for VGG-16 on CIFAR-10: pruning the smallest filters, pruning random filters and pruning the largest filters. In random filter pruning, the order of filters to be pruned is randomly permuted.
# 4.5 COMPARISON WITH ACTIVATION-BASED FEATURE MAP PRUNING
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 48 | # 4.5 COMPARISON WITH ACTIVATION-BASED FEATURE MAP PRUNING
The activation-based feature map pruning method removes the feature maps with weak activation patterns and their corresponding filters and kernels (Polyak & Wolf (2015)), which needs sample data as input to determine which feature maps to prune. A feature map $\mathbf{x}_{i+1,j} \in \mathbb{R}^{w_{i+1} \times h_{i+1}}$ is generated by applying filter $\mathcal{F}_{i,j} \in \mathbb{R}^{n_i \times k \times k}$ to feature maps of the previous layer $\mathbf{x}_i \in \mathbb{R}^{n_i \times w_i \times h_i}$, i.e., $\mathbf{x}_{i+1,j} = \mathcal{F}_{i,j} * \mathbf{x}_i$. Given $N$ randomly selected images $\{\mathbf{x}^n\}_{n=1}^{N}$ from the training set, the statistics of each feature map can be estimated with one epoch forward pass of the $N$ sampled data. Note that we calculate statistics on the feature maps generated from the convolution operations before batch normalization or non-linear activation. We compare our $\ell_1$-norm based filter pruning with feature map pruning using the following criteria: $\sigma_{\text{mean-mean}}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N} \text{mean}(\mathbf{x}_{i,j}^n)$, $\sigma_{\text{mean-std}}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N} \text{std}(\mathbf{x}_{i,j}^n)$, $\sigma_{\text{mean-}\ell_1}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N} \|\mathbf{x}_{i,j}^n\|_1$, $\sigma_{\text{mean-}\ell_2}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N} \|\mathbf{x}_{i,j}^n\|_2$ and
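A concrete reading of these criteria is sketched below: the statistics are computed per feature map from the activations of N sampled images collected before batch normalization and the non-linearity. This is an illustrative NumPy version; the (N, C, H, W) activation layout and the inclusion of a variance-based variant are assumptions of the sketch.

```python
import numpy as np

def feature_map_scores(acts):
    """acts: activations of one layer for N sampled images, shape (N, C, H, W).
    Returns one score per feature map (channel) for several criteria."""
    n, c = acts.shape[0], acts.shape[1]
    flat = acts.reshape(n, c, -1)
    return {
        "mean-mean": flat.mean(axis=2).mean(axis=0),
        "mean-std":  flat.std(axis=2).mean(axis=0),
        "mean-l1":   np.abs(flat).sum(axis=2).mean(axis=0),
        "mean-l2":   np.sqrt((flat ** 2).sum(axis=2)).mean(axis=0),
        "var-l2":    np.sqrt((flat ** 2).sum(axis=2)).var(axis=0),
    }
```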
9
Published as a conference paper at ICLR 2017 | 1608.08710#48 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 49 | 100 GIFAR10, VGG-16, prune fiters with smallest f-norm yoo CIFAR10, VGG-16, prune feature maps with smallest uns may 1300 CIFARLO. VGG-16. prune feature maps with smallest âeol|*> conv 64] e+ conv2 64 => conv. 68 es conv.2 64 + conv.3 128 + conv.4 128 ee conv.s 256 |\ + conv6 256 | ° e* conv.7 256 © conv.8 512 © conv.9 512 © conv.10512 oo|[e* conv 6m es conv.2 64 + conv.3 128 + conv.4 128 © conv 5 256 2 conv 6 256 e* conv.7 256 © conv.8 512 © conv.9 512 © conv.10512 © conv.11512, © conv.12512 © conv.13512 8 + conv.3 128 + conv.4 128 > lee conv 5 256 £ |[e-* conv_6 256 | |le-e conv_7 256 8 pecurecy 8 8 © conv.8 512 © conv.9 512 © conv.10512 © conv.11512, 20) ee conv11512 © conv.12512 © conv.12512 © conv.13512 © conv.13512 0 3 Too 0 Ea Too 0 3 % % % Fiters | 1608.08710#49 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 50 | 20) ee conv11512 © conv.12512 © conv.12512 © conv.13512 © conv.13512 0 3 Too 0 Ea Too 0 3 % % % Fiters Pruned Awayi%) Pruned Awayis) ned wayi%) (a) ||Fi,glla (b) Omean-mean (C) Omean-sta CIFARIO, VGG-16, prune feature maps with smallest run CCIFARIO, VGG-16, prune feature maps with smallest ie CIFAR1O, VGG-16, prune feature maps with smallest ov 109, 109, oo|[e* cont 6m es conv.2 64 = conl 6a es conv.2 64 + conv.3 128 + conv.4 128 © conv 5 256 2 conv 6 256 e* conv.7 256 © conv.8 512 oo|[e* conv 6m es conv.2 64 + conv.3 128 + conv.4 128 © conv 5 256 2 conv 6 256 e* conv.7 256 © conv.8 512 8 + conv.3 128 + conv.4 128 © conv 5 256 2 conv 6 256 e* conv.7 256 © conv.8 512 pecurecy 8 pecurecy 8 8 os coma 12 oo comr9 512 oo comr9 512 o 3 coneiosi2 o 3 coneiosi2 \ | 1608.08710#50 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 52 | [Figure 9 residue: accuracy vs. percentage of filters pruned away, smallest ℓ1-norm panel, VGG-16 on CIFAR-10]
[Figure 9 residue: accuracy vs. percentage of feature maps pruned away, activation-criterion panel] | 1608.08710#52 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 53 | [Figure 9 residue: accuracy vs. percentage of feature maps pruned away for three activation-based criteria, VGG-16 on CIFAR-10] | 1608.08710#53 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 54 | [Figure 9 residue: accuracy vs. percentage of feature maps pruned away, σ_var-ℓ2 panel, VGG-16 on CIFAR-10]
Figure 9: Comparison of activation-based feature map pruning for VGG-16 on CIFAR-10.
σ_var-ℓ2(i, j) = var({||x^n_{i,j}||_2 : n = 1, ..., N}), where mean, std and var are the standard statistics (average, standard deviation and variance) of the input. Here, σ_var-ℓ2 is the contribution variance of channel criterion proposed in Polyak & Wolf (2015), which is motivated by the intuition that an unimportant feature map has almost identical outputs across the whole training data and acts like an additional bias. | 1608.08710#54 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
The estimation of the criteria becomes more accurate when more sample data is used. Here we use the whole training set (N = 50,000 for CIFAR-10) to compute the statistics. The performance of feature map pruning with the above criteria for each layer is shown in Figure 9. Smallest-filter pruning outperforms feature map pruning with the criteria σ_mean-mean, σ_mean-ℓ1, σ_mean-ℓ2 and σ_var-ℓ2. The σ_mean-std criterion has better or similar performance to the ℓ1-norm up to a pruning ratio of 60%, but its performance drops quickly after that, especially for the layers conv_1, conv_2 and conv_3. We find the ℓ1-norm is a good heuristic for filter selection considering that it is data free.
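To make the criteria concrete, here is a minimal NumPy sketch of how these statistics could be computed from sampled activations. It is an illustration written for this note, not the paper's code; the array name acts and the (N, C, H, W) layout are assumptions.

import numpy as np

def feature_map_criteria(acts):
    # acts: activations of one conv layer over N sampled inputs, shaped
    # (N, C, H, W); each acts[n, j] is one feature map x_{i,j}^n.
    N, C, H, W = acts.shape
    flat = acts.reshape(N, C, -1)                     # (N, C, H*W)
    per_map_mean = flat.mean(axis=2)                  # mean(x_{i,j}^n)
    per_map_std  = flat.std(axis=2)                   # std(x_{i,j}^n)
    per_map_l1   = np.abs(flat).sum(axis=2)           # ||x_{i,j}^n||_1
    per_map_l2   = np.sqrt((flat ** 2).sum(axis=2))   # ||x_{i,j}^n||_2
    return {
        "mean-mean": per_map_mean.mean(axis=0),       # sigma_mean-mean per channel
        "mean-std":  per_map_std.mean(axis=0),        # sigma_mean-std per channel
        "mean-l1":   per_map_l1.mean(axis=0),         # sigma_mean-l1 per channel
        "mean-l2":   per_map_l2.mean(axis=0),         # sigma_mean-l2 per channel
        "var-l2":    per_map_l2.var(axis=0),          # sigma_var-l2 per channel
    }

# Channels with the smallest value of the chosen criterion would be the
# pruning candidates, e.g. np.argsort(criteria["mean-std"])[:num_to_prune].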
# 5 CONCLUSIONS | 1608.08710#55 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 56 | # 5 CONCLUSIONS
Modern CNNs often have high capacity with large training and inference costs. In this paper we present a method to prune filters with relatively low weight magnitudes to produce CNNs with reduced computation costs without introducing irregular sparsity. It achieves about 30% reduction in FLOP for VGGNet (on CIFAR-10) and deep ResNets without significant loss in the original accuracy. Instead of pruning with specific layer-wise hyperparameters and time-consuming iterative retraining, we use a one-shot pruning and retraining strategy for simplicity and ease of implementation. By performing lesion studies on very deep CNNs, we identify layers that are robust or sensitive to pruning, which can be useful for further understanding and improving these architectures.
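As a rough illustration of the one-shot strategy described above, the following sketch ranks the filters of one convolutional layer by their ℓ1 norm and returns the indices to keep. The function name and the (out_channels, in_channels, kH, kW) weight layout are assumptions made for this example, and the single retraining pass is left to the usual training loop.

import numpy as np

def filters_to_keep(conv_weight, prune_ratio):
    # conv_weight: (out_channels, in_channels, kH, kW) kernel tensor of one layer.
    # Rank filters by their l1 norm and keep the largest (1 - prune_ratio) fraction.
    l1 = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    n_keep = conv_weight.shape[0] - int(round(prune_ratio * conv_weight.shape[0]))
    return np.sort(np.argsort(l1)[::-1][:n_keep])

# One-shot use: choose a ratio per layer (0 for sensitive layers), prune every
# layer at once with the indices returned here, then retrain a single time.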
# ACKNOWLEDGMENTS
The authors would like to thank the anonymous reviewers for their valuable feedback.
# REFERENCES
Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured Pruning of Deep Convolutional Neural Networks. arXiv preprint arXiv:1512.08571, 2015.
10
Published as a conference paper at ICLR 2017 | 1608.08710#56 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 57 | 10
Published as a conference paper at ICLR 2017
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.
Matthieu Courbariaux and Yoshua Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both Weights and Connections for Efficient Neural Network. In NIPS, 2015.
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. In ISCA, 2016a.
Song Han, Huizi Mao, and William J Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In ICLR, 2016b. | 1608.08710#57 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 58 | Babak Hassibi and David G Stork. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon. In NIPS, 1993.
Kaiming He and Jian Sun. Convolutional Neural Networks at Constrained Time Cost. In CVPR, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.
Forrest Iandola, Matthew Moskewicz, Khalid Ashraf, Song Han, William Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360, 2016.
Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with Low-Rank Filters for Efficient Image Classification. In ICLR, 2016.
Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 2015.
Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014. | 1608.08710#58 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 59 | Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.
Andrew Lavin and Scott Gray. Fast Algorithms for Convolutional Neural Networks. In CVPR, 2016.
Yann Le Cun, John S Denker, and Sara A Solla. Optimal Brain Damage. In NIPS, 1989.
Vadim Lebedev and Victor Lempitsky. Fast Convnets Using Group-wise Brain Damage. In CVPR, 2016.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in Network. arXiv preprint arXiv:1312.4400, 2013.
Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse Convolu- tional Neural Networks. In CVPR, 2015.
Zelda Mariet and Suvrit Sra. Diversity Networks. In ICLR, 2016. | 1608.08710#59 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 60 | Zelda Mariet and Suvrit Sra. Diversity Networks. In ICLR, 2016.
Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast Training of Convolutional Networks through FFTs. arXiv preprint arXiv:1312.5851, 2013.
Adam Polyak and Lior Wolf. Channel-Level Acceleration of Deep Face Representations. IEEE Access, 2015.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classiï¬cation Using Binary Convolutional Neural Networks. In ECCV, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR, 2015.
Suraj Srinivas and R Venkatesh Babu. Data-free Parameter Pruning for Deep Neural Networks. In BMVC, 2015. | 1608.08710#60 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 61 | Suraj Srinivas and R Venkatesh Babu. Data-free Parameter Pruning for Deep Neural Networks. In BMVC, 2015.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going Deeper with Convolutions. In CVPR, 2015a.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethink- ing the Inception Architecture for Computer Vision. arXiv preprint arXiv:1512.00567, 2015b.
Cheng Tai, Tong Xiao, Xiaogang Wang, and Weinan E. Convolutional neural networks with low-rank regularization. In ICLR, 2016.
Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning Structured Sparsity in Deep Learning. In NIPS, 2016. | 1608.08710#61 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 62 | Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning Structured Sparsity in Deep Learning. In NIPS, 2016.
Sergey Zagoruyko. 92.45% on CIFAR-10 in Torch. http://torch.ch/blog/2015/07/30/cifar.html, 2015.
Matthew D Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. In ECCV, 2014.
Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutional Networks for Classiï¬cation and Detection. IEEE T-PAMI, 2015a.
Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, 2015b.
Hao Zhou, Jose Alvarez, and Fatih Porikli. Less Is More: Towards Compact CNNs. In ECCV, 2016.
6 APPENDIX
6.1 COMPARISON WITH £2-NORM BASED FILTER PRUNING | 1608.08710#62 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 64 | CIFAR10, VGG-16, prune filters with smallest f-norm CIFAR10, VGG-16, prune filters with smallest fy-norm 109, 109, + conv_164 + conv_2 64 + conv_3 128 + conv_4128 ee conv_5 256 e* conv_6 256 ee conv_7 256 © -* conv_8 512 © -* conv_9 512 © conv_10512 © conv_11512 © -© conv_12 512 © -* conv_13 512 80 + conv_2 64 + conv_3 128 + conv_4128 ee conv_5 256 e* conv_6 256 ee conv_7 256 © -* conv_8 512 © -* conv_9 512 © conv_10512 © conv_11512 © -© conv_12 512 © -* conv_13 512 60 Accuracy Accuracy 20 0 20 a0 60 30 100 0 20 a0 60 30 100 Filters Pruned Away(94) Filters Pruned Away(%) (a) ||Faslla (b) ||Fi,sll2 | 1608.08710#64 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 65 | CIFAR10, VGG-16, prune filters with smallest f-norm 109, + conv_164 + conv_2 64 + conv_3 128 + conv_4128 ee conv_5 256 e* conv_6 256 ee conv_7 256 © -* conv_8 512 © -* conv_9 512 © conv_10512 © conv_11512 © -© conv_12 512 © -* conv_13 512 80 60 Accuracy 20 0 20 a0 60 30 100 Filters Pruned Away(94)
CIFAR10, VGG-16, prune filters with smallest fy-norm 109, + conv_2 64 + conv_3 128 + conv_4128 ee conv_5 256 e* conv_6 256 ee conv_7 256 © -* conv_8 512 © -* conv_9 512 © conv_10512 © conv_11512 © -© conv_12 512 © -* conv_13 512 Accuracy 0 20 a0 60 30 100 Filters Pruned Away(%)
Figure 10: Comparison of ℓ1-norm and ℓ2-norm based filter pruning for VGG-16 on CIFAR-10.
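A small sketch of how such a comparison could be reproduced: compute the smallest-norm filter sets under ℓ1 and ℓ2 and measure their overlap for one layer. This is illustrative only; the random stand-in weights and the function name are assumptions, not the authors' evaluation code.

import numpy as np

def smallest_filters(conv_weight, m, ord):
    # Indices of the m filters with the smallest l1 (ord=1) or l2 (ord=2) norm.
    flat = conv_weight.reshape(conv_weight.shape[0], -1)
    norms = np.linalg.norm(flat, ord=ord, axis=1)
    return set(np.argsort(norms)[:m].tolist())

w = np.random.randn(512, 512, 3, 3)   # stand-in for a conv_13-sized kernel tensor
m = 256
overlap = len(smallest_filters(w, m, 1) & smallest_filters(w, m, 2)) / m
print(f"fraction of filters selected by both norms: {overlap:.2f}")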
6.2 FLOP AND WALL-CLOCK TIME | 1608.08710#65 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 66 | 6.2 FLOP AND WALL-CLOCK TIME
FLOP is a commonly used measure to compare the computation complexities of CNNs. It is easy to compute and can be done statically, independent of the underlying hardware and software implementations. Since we physically prune the filters by creating a smaller model and then copying the weights, no masks or sparsity are introduced into the original dense BLAS operations. Therefore the FLOP and wall-clock time of the pruned model are the same as for a model created from scratch with the smaller number of filters.
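The following PyTorch-style sketch shows what physically pruning one convolution could look like: a smaller dense layer is created, the surviving filters are copied, and the matching input channels of the next convolution are dropped. It is a simplified illustration written for this note (it ignores BatchNorm, bias handling across layers, and shortcut connections) and is not the authors' Torch7 implementation.

import torch
import torch.nn as nn

def prune_conv_pair(conv, next_conv, keep_idx):
    # Rebuild `conv` with only the kept filters and drop the matching input
    # channels of the layer that consumes its output, so the result is a plain
    # dense model (no masks, no sparse kernels).
    keep_idx = torch.as_tensor(keep_idx, dtype=torch.long)
    new_conv = nn.Conv2d(conv.in_channels, len(keep_idx), conv.kernel_size,
                         stride=conv.stride, padding=conv.padding,
                         bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep_idx].clone()

    new_next = nn.Conv2d(len(keep_idx), next_conv.out_channels,
                         next_conv.kernel_size, stride=next_conv.stride,
                         padding=next_conv.padding,
                         bias=next_conv.bias is not None)
    new_next.weight.data = next_conv.weight.data[:, keep_idx].clone()
    if next_conv.bias is not None:
        new_next.bias.data = next_conv.bias.data.clone()
    return new_conv, new_next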
We report the inference time of the original model and the pruned model on the test set of CIFAR-10 and the validation set of ILSVRC 2012, which contain 10,000 32×32 images and 50,000 224×224 images respectively. The ILSVRC 2012 dataset is used only for ResNet-34. The evaluation is conducted in Torch7 with a Titan X (Pascal) GPU and cuDNN v5.1, using a mini-batch size of 128. As shown in Table 3, the saved inference time is close to the FLOP reduction. Note that the FLOP number only considers the operations in the Conv and FC layers, while some calculations such as Batch Normalization and other overheads are not accounted for.
# Table 3: The reduction of FLOP and wall-clock time for inference. | 1608.08710#66 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08710 | 67 | # Table 3: The reduction of FLOP and wall-clock time for inference.
Model                FLOP          Pruned %   Time (s)   Saved %
VGG-16               3.13 × 10^8      -         1.23        -
VGG-16-pruned-A      2.06 × 10^8    34.2%       0.73      40.7%
ResNet-56            1.25 × 10^8      -         1.31        -
ResNet-56-pruned-B   9.09 × 10^7    27.6%       0.99      24.4%
ResNet-110           2.53 × 10^8      -         2.38        -
ResNet-110-pruned-B  1.55 × 10^8    38.6%       1.86      21.8%
ResNet-34            3.64 × 10^9      -        36.02        -
ResNet-34-pruned-B   2.76 × 10^9    24.2%      22.93      28.0%
13 | 1608.08710#67 | Pruning Filters for Efficient ConvNets | The success of CNNs in various applications is accompanied by a significant
increase in the computation and parameter storage costs. Recent efforts toward
reducing these overheads involve pruning and compressing the weights of various
layers without hurting original accuracy. However, magnitude-based pruning of
weights reduces a significant number of parameters from the fully connected
layers and may not adequately reduce the computation costs in the convolutional
layers due to irregular sparsity in the pruned networks. We present an
acceleration method for CNNs, where we prune filters from CNNs that are
identified as having a small effect on the output accuracy. By removing whole
filters in the network together with their connecting feature maps, the
computation costs are reduced significantly. In contrast to pruning weights,
this approach does not result in sparse connectivity patterns. Hence, it does
not need the support of sparse convolution libraries and can work with existing
efficient BLAS libraries for dense matrix multiplications. We show that even
simple filter pruning techniques can reduce inference costs for VGG-16 by up to
34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the
original accuracy by retraining the networks. | http://arxiv.org/pdf/1608.08710 | Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf | cs.CV, cs.LG | Published as a conference paper at ICLR 2017 | null | cs.CV | 20160831 | 20170310 | [
{
"id": "1602.07360"
},
{
"id": "1512.00567"
},
{
"id": "1512.08571"
},
{
"id": "1602.02830"
}
] |
1608.08614 | 0 | 6 1 0 2 c e D 0 1
] V C . s c [ 2 v 4 1 6 8 0 . 8 0 6 1 : v i X r a
# What makes ImageNet good for transfer learning?
# Minyoung Huh, Pulkit Agrawal, Alexei A. Efros. Berkeley Artificial Intelligence Research (BAIR) Laboratory, UC Berkeley. {minyoung,pulkitag,aaefros}@berkeley.edu
# Abstract
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks raises the question: what is it about the ImageNet dataset that makes the learnt features as good as they are? This work provides an empirical investigation into the various facets of this question, such as the importance of the amount of examples, the number of classes, the balance between images-per-class and classes, and the role of fine and coarse grained recognition. We pre-train CNN features on various subsets of the ImageNet dataset and evaluate transfer performance on a variety of standard vision tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
# 1. Introduction | 1608.08614#0 | What makes ImageNet good for transfer learning? | The tremendous success of ImageNet-trained deep features on a wide range of
transfer tasks begs the question: what are the properties of the ImageNet
dataset that are critical for learning good, general-purpose features? This
work provides an empirical investigation of various facets of this question: Is
more pre-training data always better? How does feature quality depend on the
number of training examples per class? Does adding more object classes improve
performance? For the same data budget, how should the data be split into
classes? Is fine-grained recognition necessary for learning good features?
Given the same number of training classes, is it better to have coarse classes
or fine-grained classes? Which is better: more classes or more examples per
class? To answer these and related questions, we pre-trained CNN features on
various subsets of the ImageNet dataset and evaluated transfer performance on
PASCAL detection, PASCAL action classification, and SUN scene classification
tasks. Our overall findings suggest that most changes in the choice of
pre-training data long thought to be critical do not significantly affect
transfer performance.? Given the same number of training classes, is it better
to have coarse classes or fine-grained classes? Which is better: more classes
or more examples per class? | http://arxiv.org/pdf/1608.08614 | Minyoung Huh, Pulkit Agrawal, Alexei A. Efros | cs.CV, cs.AI, cs.LG | null | null | cs.CV | 20160830 | 20161210 | [
{
"id": "1507.06550"
},
{
"id": "1504.02518"
},
{
"id": "1512.04412"
}
] |
1608.08614 | 1 | # 1. Introduction
the dataset (1.2 million labeled images) that forces the rep- resentation to be general. Others argue that it is the large number of distinct object classes (1000), which forces the network to learn a hierarchy of generalizable features. Yet others believe that the secret sauce is not just the large num- ber of classes, but the fact that many of these classes are visually similar (e.g. many different breeds of dogs), turn- ing this into a ï¬ne-grained recognition task and pushing the representation to âwork harderâ. But, while almost every- one in computer vision seems to have their own opinion on this hot topic, little empirical evidence has been produced so far.
In this work, we systematically investigate which as- pects of the ImageNet task are most critical for learning good general-purpose features. We evaluate the features by ï¬ne-tuning on three tasks: object detection on PASCAL- VOC 2007 dataset (PASCAL-DET), action classiï¬cation on PASCAL-VOC 2012 dataset (PASCAL-ACT-CLS) and scene classiï¬cation on the SUN dataset (SUN-CLS); see Section 3 for more details. | 1608.08614#1 | What makes ImageNet good for transfer learning? | The tremendous success of ImageNet-trained deep features on a wide range of
transfer tasks begs the question: what are the properties of the ImageNet
dataset that are critical for learning good, general-purpose features? This
work provides an empirical investigation of various facets of this question: Is
more pre-training data always better? How does feature quality depend on the
number of training examples per class? Does adding more object classes improve
performance? For the same data budget, how should the data be split into
classes? Is fine-grained recognition necessary for learning good features?
Given the same number of training classes, is it better to have coarse classes
or fine-grained classes? Which is better: more classes or more examples per
class? To answer these and related questions, we pre-trained CNN features on
various subsets of the ImageNet dataset and evaluated transfer performance on
PASCAL detection, PASCAL action classification, and SUN scene classification
tasks. Our overall findings suggest that most changes in the choice of
pre-training data long thought to be critical do not significantly affect
transfer performance.? Given the same number of training classes, is it better
to have coarse classes or fine-grained classes? Which is better: more classes
or more examples per class? | http://arxiv.org/pdf/1608.08614 | Minyoung Huh, Pulkit Agrawal, Alexei A. Efros | cs.CV, cs.AI, cs.LG | null | null | cs.CV | 20160830 | 20161210 | [
{
"id": "1507.06550"
},
{
"id": "1504.02518"
},
{
"id": "1512.04412"
}
] |
1608.08614 | 2 | It has become increasingly common within the com- puter vision community to treat image classiï¬cation on Im- ageNet [35] not as an end in itself, but rather as a âpre- text taskâ for training deep convolutional neural networks (CNNs [25, 22]) to learn good general-purpose features. This practice of ï¬rst training a CNN to perform image clas- siï¬cation on ImageNet (i.e. pre-training) and then adapting these features for a new target task (i.e. ï¬ne-tuning) has be- come the de facto standard for solving a wide range of com- puter vision problems. Using ImageNet pre-trained CNN features, impressive results have been obtained on several image classiï¬cation datasets [10, 33], as well as object de- tection [12, 37], action recognition [38], human pose esti- mation [6], image segmentation [7], optical ï¬ow [42], im- age captioning [9, 19] and others [24].
Given the success of ImageNet pre-trained CNN fea- tures, it is only natural to ask: what is it about the ImageNet dataset that makes the learnt features as good as they are? One school of thought believes that it is the sheer size of | 1608.08614#2 | What makes ImageNet good for transfer learning? | The tremendous success of ImageNet-trained deep features on a wide range of
transfer tasks begs the question: what are the properties of the ImageNet
dataset that are critical for learning good, general-purpose features? This
work provides an empirical investigation of various facets of this question: Is
more pre-training data always better? How does feature quality depend on the
number of training examples per class? Does adding more object classes improve
performance? For the same data budget, how should the data be split into
classes? Is fine-grained recognition necessary for learning good features?
Given the same number of training classes, is it better to have coarse classes
or fine-grained classes? Which is better: more classes or more examples per
class? To answer these and related questions, we pre-trained CNN features on
various subsets of the ImageNet dataset and evaluated transfer performance on
PASCAL detection, PASCAL action classification, and SUN scene classification
tasks. Our overall findings suggest that most changes in the choice of
pre-training data long thought to be critical do not significantly affect
transfer performance.? Given the same number of training classes, is it better
to have coarse classes or fine-grained classes? Which is better: more classes
or more examples per class? | http://arxiv.org/pdf/1608.08614 | Minyoung Huh, Pulkit Agrawal, Alexei A. Efros | cs.CV, cs.AI, cs.LG | null | null | cs.CV | 20160830 | 20161210 | [
{
"id": "1507.06550"
},
{
"id": "1504.02518"
},
{
"id": "1512.04412"
}
] |
1608.08614 | 3 | The paper is organized as a set of experiments answering a list of key questions about feature learning with ImageNet. The following is a summary of our main ï¬ndings:
1. How many pre-training ImageNet examples are sufficient for transfer learning? Pre-training with only half the ImageNet data (500 images per class instead of 1000) results in only a small drop in transfer learning performance (1.5 mAP drop on PASCAL-DET). This drop is much smaller than the drop on the ImageNet classification task itself. See Section 4 and Figure 1 for details.
2. How many pre-training ImageNet classes are sufficient for transfer learning? Pre-training with an order of magnitude fewer classes (127 classes instead of 1000) results in only a small drop in transfer learning performance (2.8 mAP drop on PASCAL-DET). Curiously, we also found that for some transfer tasks, pre-training with fewer classes leads to better performance. See Section 5.1 and Figure 2 for details.
1 | 1608.08614#3 | What makes ImageNet good for transfer learning? | The tremendous success of ImageNet-trained deep features on a wide range of
transfer tasks begs the question: what are the properties of the ImageNet
dataset that are critical for learning good, general-purpose features? This
work provides an empirical investigation of various facets of this question: Is
more pre-training data always better? How does feature quality depend on the
number of training examples per class? Does adding more object classes improve
performance? For the same data budget, how should the data be split into
classes? Is fine-grained recognition necessary for learning good features?
Given the same number of training classes, is it better to have coarse classes
or fine-grained classes? Which is better: more classes or more examples per
class? To answer these and related questions, we pre-trained CNN features on
various subsets of the ImageNet dataset and evaluated transfer performance on
PASCAL detection, PASCAL action classification, and SUN scene classification
tasks. Our overall findings suggest that most changes in the choice of
pre-training data long thought to be critical do not significantly affect
transfer performance.? Given the same number of training classes, is it better
to have coarse classes or fine-grained classes? Which is better: more classes
or more examples per class? | http://arxiv.org/pdf/1608.08614 | Minyoung Huh, Pulkit Agrawal, Alexei A. Efros | cs.CV, cs.AI, cs.LG | null | null | cs.CV | 20160830 | 20161210 | [
{
"id": "1507.06550"
},
{
"id": "1504.02518"
},
{
"id": "1512.04412"
}
] |
1608.08614 | 4 | 1
[Figure 1 residue: plot legend (SUN - Classification, PASCAL - Object Detection, PASCAL - Action Recognition, ImageNet - Classification) and axes: accuracy / mAP vs. number of pre-training images per ImageNet class (0-1000)]
Figure 1: Change in transfer task performance of a CNN pre-trained with varying numbers of images per ImageNet class. The left y-axis is the mean class accuracy, used for SUN and ImageNet CLS. The right y-axis measures mAP, used for PASCAL DET and ACTION-CLS. The number of examples per class is reduced by random sampling. Accuracy on the ImageNet classification task increases faster than performance on the transfer tasks.
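A minimal sketch of the kind of subsampling used to build the "N images per class" pre-training sets: keep a fixed number of randomly chosen images per class. The data representation (a list of (path, label) pairs) and the function name are assumptions for this example, not the authors' pipeline.

import random
from collections import defaultdict

def subsample_per_class(samples, k, seed=0):
    # samples: list of (image_path, class_label) pairs; keep at most k random
    # images for every class.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append(path)
    subset = []
    for label, paths in by_class.items():
        rng.shuffle(paths)
        subset += [(p, label) for p in paths[:k]]
    return subset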
3. How important is ï¬ne-grained recognition for learning good features for transfer learning? Features pre-trained with a subset of ImageNet classes that do not require ï¬ne- grained discrimination still demonstrate good transfer per- formance. See Section 5.2 and Figure 2 for details. | 1608.08614#4 | What makes ImageNet good for transfer learning? | The tremendous success of ImageNet-trained deep features on a wide range of
transfer tasks begs the question: what are the properties of the ImageNet
dataset that are critical for learning good, general-purpose features? This
work provides an empirical investigation of various facets of this question: Is
more pre-training data always better? How does feature quality depend on the
number of training examples per class? Does adding more object classes improve
performance? For the same data budget, how should the data be split into
classes? Is fine-grained recognition necessary for learning good features?
Given the same number of training classes, is it better to have coarse classes
or fine-grained classes? Which is better: more classes or more examples per
class? To answer these and related questions, we pre-trained CNN features on
various subsets of the ImageNet dataset and evaluated transfer performance on
PASCAL detection, PASCAL action classification, and SUN scene classification
tasks. Our overall findings suggest that most changes in the choice of
pre-training data long thought to be critical do not significantly affect
transfer performance.? Given the same number of training classes, is it better
to have coarse classes or fine-grained classes? Which is better: more classes
or more examples per class? | http://arxiv.org/pdf/1608.08614 | Minyoung Huh, Pulkit Agrawal, Alexei A. Efros | cs.CV, cs.AI, cs.LG | null | null | cs.CV | 20160830 | 20161210 | [
{
"id": "1507.06550"
},
{
"id": "1504.02518"
},
{
"id": "1512.04412"
}
] |
1608.08614 | 5 | 4. Does pre-training on coarse classes produce features ca- pable of ï¬ne-grained recognition (and vice versa) on Ima- geNet itself? We found that a CNN trained to classify only between the 127 coarse ImageNet classes produces fea- tures capable of telling apart ï¬ne-grained ImageNet classes whose labels it has never seen in training (section 5.3). Likewise, a CNN trained to classify the 1000 ImageNet classes is able to distinguish between unseen coarse-level classes higher up in the WordNet hierarchy (section 5.4).
5. Given the same budget of pre-training images, should we have more classes or more images per class? Training with fewer classes but more images per class performs slightly better at transfer tasks than training with more classes but fewer images per class. See Section 5.5 and Table 2 for details.
6. Is more data always helpful? We found that training with 771 ImageNet classes (out of 1000) that exclude all PAS- CAL VOC classes, achieves nearly the same performance on PASCAL-DET as training on complete ImageNet. Fur- ther experiments conï¬rm that blindly adding more training data does not always lead to better performance and can sometimes hurt performance. See Section 6, and Table 9 for more details.
2 | 1608.08614#5 | What makes ImageNet good for transfer learning? | The tremendous success of ImageNet-trained deep features on a wide range of
transfer tasks begs the question: what are the properties of the ImageNet
dataset that are critical for learning good, general-purpose features? This
work provides an empirical investigation of various facets of this question: Is
more pre-training data always better? How does feature quality depend on the
number of training examples per class? Does adding more object classes improve
performance? For the same data budget, how should the data be split into
classes? Is fine-grained recognition necessary for learning good features?
Given the same number of training classes, is it better to have coarse classes
or fine-grained classes? Which is better: more classes or more examples per
class? To answer these and related questions, we pre-trained CNN features on
various subsets of the ImageNet dataset and evaluated transfer performance on
PASCAL detection, PASCAL action classification, and SUN scene classification
tasks. Our overall findings suggest that most changes in the choice of
pre-training data long thought to be critical do not significantly affect
transfer performance.? Given the same number of training classes, is it better
to have coarse classes or fine-grained classes? Which is better: more classes
or more examples per class? | http://arxiv.org/pdf/1608.08614 | Minyoung Huh, Pulkit Agrawal, Alexei A. Efros | cs.CV, cs.AI, cs.LG | null | null | cs.CV | 20160830 | 20161210 | [
{
"id": "1507.06550"
},
{
"id": "1504.02518"
},
{
"id": "1512.04412"
}
] |
1608.08614 | 6 | 2
[Figure 2 residue: plot legend (SUN - Classification, PASCAL - Object Detection, PASCAL - Action Recognition, ImageNet - Classification) and axes: class accuracy / mAP vs. number of pre-training ImageNet classes (0-1000)]
Figure 2: Change in transfer task performance with varying numbers of pre-training ImageNet classes. The number of ImageNet classes is varied using the technique described in Section 5.1. With only 486 pre-training classes, transfer performance is unaffected, and only a small drop is observed when only 79 classes are used for pre-training. The ImageNet classification performance is measured by fine-tuning the last layer for the original 1000-way classification.
# 2. Related Work | 1608.08614#6 | What makes ImageNet good for transfer learning? | The tremendous success of ImageNet-trained deep features on a wide range of
A number of papers have studied transfer learning in CNNs, including the various factors that affect pre-training and fine-tuning. For example, the question of whether pre-training should be terminated early to prevent over-fitting, and which layers should be used for transfer learning, was studied by [2, 44]. A thorough investigation of good architectural choices for transfer learning was conducted by [3], while [26] propose an approach to fine-tuning for new tasks without "forgetting" the old ones. In contrast to these works, we use a fixed fine-tuning procedure and instead vary the pre-training data.
One central downside of supervised pre-training is that a large quantity of expensive, manually-supervised training data is required. The possibility of using large amounts of unlabelled data for feature learning has therefore been very attractive. Numerous methods for learning features by optimizing some auxiliary criterion of the data itself have been proposed. The most well-known such criteria are image reconstruction [5, 36, 29, 27, 32, 20] (see [4] for a comprehensive overview) and feature slowness [43, 14]. Unfortunately, features learned using these methods turned out not to be competitive with those obtained from supervised ImageNet pre-training [31]. To try and force better feature generalization, more recent "self-supervised" methods use more difficult data-prediction auxiliary tasks in an effort to make the CNNs "work harder". Attempted self-supervised tasks include predictions of ego-motion [1, 16], spatial context [8, 31, 28], temporal context [41], and even color [45, 23] and sound [30]. While features learned using these methods often come close to ImageNet performance, to date, none have been able to beat it.
(Figure 3 diagram: the WordNet tree with leaf classes grouped into label set 1, the original label set, and label set 2.)
Figure 3: An illustration of the bottom-up procedure used to construct different label sets using the WordNet tree. Each node of the tree represents a class and the leaf nodes are shown in red. Different label sets are iteratively constructed by clustering together all the leaf nodes with a common parent. In each iteration, only leaf nodes are clustered. This procedure results in a sequence of label sets for 1.2M images, where each consequent set contains labels coarser than the previous one. Because the WordNet tree is imbalanced, even after multiple iterations, label sets contain some classes that are present in the 1000-way ImageNet challenge.
A reasonable middle ground between the expensive, fully-supervised pre-training and free unsupervised pre-training is to use weak supervision. For example, [18] use the YFCC100M dataset of 100 million Flickr images labeled with noisy user tags as pre-training instead of ImageNet. But yet again, even though YFCC100M is almost two orders of magnitude larger than ImageNet, somewhat surprisingly, the resulting features do not appear to give any substantial boost over those pre-trained on ImageNet.
Overall, despite keen interest in this problem, alternative methods for learning general-purpose deep features have not managed to outperform ImageNet-supervised pre-training on transfer tasks.
The goal of this work is to try and understand what is the secret to ImageNet's continuing success.
# 3. Experimental Setup
The process of using supervised learning to initialize CNN parameters using the task of ImageNet classification is referred to as pre-training. The process of adapting a pre-trained CNN by continuing to train it on a target dataset is referred to as finetuning. All of our experiments use the Caffe [17] implementation of a single network architecture proposed by Krizhevsky et al. [22]. We refer to this architecture as AlexNet.
We closely follow the experimental setup of Agrawal et al. [2] for evaluating the generalization of pre-trained features on three transfer tasks: PASCAL VOC 2007 object detection (PASCAL-DET), PASCAL VOC 2012 action recognition (PASCAL-ACT-CLS) and scene classification on the SUN dataset (SUN-CLS).
⢠For PASCAL-DET, we used the PASCAL VOC 2007 train/val for ï¬netuning using the experimental setup and
Pre-trained Dataset    Original    127 Classes    Random
PASCAL                 58.3        55.5           41.3 [21]
SUN                    52.2        48.7           35.7 [2]
Table 1: The transfer performance of a network pre-trained using 127 (coarse) classes obtained after top-down clustering of the WordNet tree is comparable to the transfer performance obtained after training on all 1000 ImageNet classes. This indicates that fine-grained recognition is not necessary for learning good transferable features.
• For PASCAL-ACT-CLS, we used PASCAL VOC 2012 train/val for finetuning and testing, using the experimental setup and code provided by R*CNN [13]. The finetuning process for PASCAL-ACT-CLS mimics the procedure described for PASCAL-DET.
⢠For SUN-CLS we used the same train/val/test splits as used by [2]. Finetuning on SUN was performed by ï¬rst replacing the FC-8 layer in the AlexNet model with a ran- domly initialized, and fully connected layer with 397 out- put units. Finetuning was performed for 50K iterations using SGD with an initial learning rate of 0.001 which was reduced by a factor of 10 every 20K iterations.
Faster-RCNN and R*CNN are known to have variance across training runs; we therefore run each of them three times and report the mean ± standard deviation. On the other hand, [2] reports little variance between runs on SUN-CLS, so we report our result using a single run.
In some experiments we pre-train on ImageNet using a different number of images per class. The model with 1000 images/class uses the original ImageNet ILSVRC 2012 training set. Models with N images/class for N < 1000 are trained by drawing a random sample of N images from all images of that class made available as part of the ImageNet training set.
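A minimal sketch of how such fixed-size subsets can be drawn is given below. The list-of-(path, class) input format is hypothetical, but the sampling logic, a uniform random sample of N images within every class, follows the description above.

```python
import random
from collections import defaultdict

def subsample_per_class(image_labels, n_per_class, seed=0):
    """image_labels: list of (image_path, class_id) pairs for the full
    ImageNet training set. Returns a subset containing at most n_per_class
    images drawn uniformly at random from every class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, cls in image_labels:
        by_class[cls].append(path)
    subset = []
    for cls, paths in by_class.items():
        rng.shuffle(paths)
        subset.extend((p, cls) for p in paths[:n_per_class])
    return subset

# The five pre-training sets used in Section 4 would then be, for a
# hypothetical `full_train_list`:
# subsets = {n: subsample_per_class(full_train_list, n) for n in (50, 125, 250, 500, 1000)}
```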
# 4. How does the amount of pre-training data affect transfer performance?
For answering this question, we trained 5 different AlexNet models from scratch using 50, 125, 250, 500 and 1000 images per each of the 1000 ImageNet classes, following the procedure described in Section 3. The variation in performance with the amount of pre-training data when these models are finetuned for PASCAL-DET, PASCAL-ACT-CLS and SUN-CLS is shown in Figure 1.
(Figure 4 plots: top-1 and top-5 nearest-neighbor induction accuracy versus the baseline accuracy for networks trained on 918, 753, 486, 127, 79 and 9 classes, evaluated on the 104, 303, 620, 979, 1000 and 1000 held-out classes respectively, together with a randomly initialized baseline.)
⢠Baseline Accuracy So Fo Top 1 Nearest Neighbors Accuracy N 918 Classes 753 Classes 486 Classes 127 Classes 79 Classes 9 Classes (104) (303) (620) (979) (1000) (1000) ° Random (1000) ® Induction Accuracy LL
2 8 mS ms Soe & Top 5 Nearest Neighbors Accuracy 6 ⢠Baseline Accuracy ] | ⢠Induction Accuracy 918 Classes 753 Classes 486 Classes 127 Classes 79Classes 9Classes Random (104) (303) (620) (979) (1000) (1000) (1000) () | 1608.08614#14 | What makes ImageNet good for transfer learning? | The tremendous success of ImageNet-trained deep features on a wide range of
Figure 4: Does a CNN trained for discriminating between coarse classes learn a feature embedding capable of distinguishing between fine classes? We quantified this by measuring the induction accuracy, defined as follows: after training a feature embedding for a particular set of classes (set A), the induction accuracy is the nearest neighbor (top-1 and top-5) classification accuracy, measured in the FC8 feature space, on the subset of the 1000 ImageNet classes not present in set A. The syntax A Classes(B) on the x-axis indicates that the network was trained with A classes and the induction accuracy was measured on B classes. The baseline accuracy is the accuracy on the B classes when the CNN was trained for all 1000 classes. The margin between the baseline and the induction accuracy indicates the drop in the network's ability to distinguish fine classes when it is trained only on coarse classes. The results show that features learnt by pre-training on just 127 classes still lead to fairly good induction.
For PASCAL-DET, the mean average precision (mAP) for CNNs with 1000, 500 and 250 images/class is found to be 58.3, 57.0 and 54.6. A similar trend is observed for PASCAL-ACT-CLS and SUN-CLS. These results indicate that using half the amount of pre-training data leads to only a marginal reduction in performance on transfer tasks. It is important to note that performance on the ImageNet classification task (the pre-training task) steadily increases with the amount of training data, whereas on transfer tasks the improvement from additional pre-training data is significantly slower. This suggests that while adding additional examples to ImageNet classes will improve ImageNet performance, it has diminishing returns for transfer task performance.
# 5. How does the taxonomy of the pre-training task affect transfer performance?
In the previous section we investigated how varying the number of pre-training images per class affects the performance on transfer tasks. Here we investigate the flip side: keeping the amount of data constant while changing the nomenclature of the training labels.
# 5.1. The effect of number of pre-training classes on transfer performance

The 1000 classes of the ImageNet challenge [35] are derived from leaves of the WordNet tree [11]. Using this tree, it is possible to generate different class taxonomies while keeping the total number of images constant. One can generate taxonomies in two ways: (1) bottom-up clustering, wherein the leaf nodes belonging to a common parent are iteratively clustered together (see Figure 3), or (2) by fixing the distance of the nodes from the root node (i.e. top-down clustering).
Using bottom-up clustering, 18 possible taxonomies can be generated. Among these, we chose 5 sets of labels constituting 918, 753, 486, 79 and 9 classes respectively. Using top-down clustering, only 3 label sets (of 127, 10 and 2 classes) can be generated, and we used the one with 127 classes. For studying the effect of the number of pre-training classes on transfer performance, we trained separate AlexNet CNNs from scratch using these label sets.
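A simplified sketch of the bottom-up construction is shown below. It assumes a parent map extracted from the WordNet is-a hierarchy and, in each iteration, replaces every current label by its parent so that siblings collapse into one class; it glosses over the exact handling of the imbalanced tree, so it illustrates the idea rather than reproducing the authors' exact label sets.

```python
def bottom_up_label_sets(parent_of, original_classes, num_iterations):
    """parent_of: dict mapping a WordNet node to its parent (assumed input).
    original_classes: the 1000 ImageNet leaf classes.
    Returns one label map per iteration; label_maps[i][leaf] is the coarser
    training label assigned to that original class after i+1 merge steps."""
    label_of = {leaf: leaf for leaf in original_classes}
    label_maps = []
    for _ in range(num_iterations):
        # Merge step: every current (leaf-level) label is replaced by its
        # parent, so all leaves sharing a parent become a single class.
        label_of = {leaf: parent_of.get(lbl, lbl) for leaf, lbl in label_of.items()}
        label_maps.append(dict(label_of))
    return label_maps

def num_classes(label_map):
    # Number of distinct training classes induced by a label map.
    return len(set(label_map.values()))
```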
Figure 2 shows the effect of the number of pre-training classes obtained using bottom-up clustering of the WordNet tree on transfer performance. We also include the performance of these different networks on the ImageNet classification task itself, after finetuning only the last layer to distinguish between all 1000 classes. The results show that the increase in performance on transfer tasks with the number of classes is significantly slower than the increase in performance on ImageNet itself. Using only 486 classes results in a performance drop of 1.7 mAP for PASCAL-DET, 0.8% accuracy for SUN-CLS and a boost of 0.6 mAP for PASCAL-ACT-CLS. Table 1 shows the transfer performance after pre-training with the 127 classes obtained from top-down clustering. The results from this table and the figure indicate that only diminishing returns in transfer performance are observed when more than 127 classes are used. Our results also indicate that making the ImageNet classes finer will not help improve transfer performance.
It can be argued that the PASCAL task requires discrimination between only 20 classes and therefore pre-training with only 127 classes should not lead to a substantial reduction in performance. However, the trend also holds true for SUN-CLS, which requires discrimination between 397 classes. These two results taken together suggest that although training with a large number of classes is beneficial, diminishing returns are observed beyond using 127 distinct classes for pre-training.
Figure 5: Can feature embeddings obtained by training on coarse classes distinguish fine classes they were never trained on? E.g., by training on monkeys, can the network pick out macaques? Here we look at the FC7 nearest neighbors (NN) of two randomly sampled images: a macaque (left column) and a giant schnauzer (right column), with each row showing feature embeddings trained with a different number of classes (from fine to coarse). The row(s) above the dotted line indicate that the image class (i.e. macaque/giant schnauzer) was one of the training classes, whereas in the rows below the image class was not present in the training set. Images in green indicate that the NN image belongs to the correct fine class (i.e. either macaque or giant schnauzer); orange indicates the correct coarse class (based on the WordNet hierarchy) but the incorrect fine class; red indicates an incorrect coarse class. All green images below the dotted line are instances of correct fine-grained nearest-neighbor retrieval for features that were never trained on that class.
Furthermore, for PASCAL-ACT-CLS and SUN-CLS, finetuning on CNNs pre-trained with class set sizes of 918 and 753 actually results in better performance than using all 1000 classes. This may indicate that having too many classes for pre-training works against learning good generalizable features. Hence, when generating a dataset, one should be attentive to the nomenclature of the classes.
# 5.2. Is fine-grain recognition necessary for learning transferable features?

The ImageNet challenge requires a classifier to distinguish between 1000 classes, some of which are very fine-grained, such as different breeds of dogs and cats. Indeed, most humans do not perform well on ImageNet unless specifically trained [35], and yet are easily able to perform most everyday visual tasks. This raises the question: is fine-grained recognition necessary for CNN models to learn good feature representations, or is coarse-grained object recognition (e.g. just distinguishing cats from dogs) sufficient?

Note that the label set of 127 classes from the previous experiment contains 65 classes that are present in the original set of 1000 classes; the remainder are inner nodes of the WordNet tree. However, all these 127 classes (see supplementary materials) represent coarse semantic concepts. As discussed earlier, pre-training with these classes results in only a small drop in transfer performance (see Table 1). This suggests that performing fine-grained recognition is only marginally helpful and does not appear to be critical for learning good transferable features.
# 5.3. Does training with coarse classes induce features relevant for fine-grained recognition?
Earlier, we have shown that the features learned on the 127 coarse classes perform almost as well on our transfer tasks as the full set of 1000 ImageNet classes. Here we probe this further by asking a different question: is the feature embedding induced by the coarse-class classification task capable of separating the fine labels of ImageNet (which it never saw at training)?
To investigate this, we used top-1 and top-5 nearest neighbors in the FC7 feature space to measure the accuracy of identifying fine-grained ImageNet classes after training only on a set of coarse classes. We call this measure "induction accuracy". As a qualitative example, Figure 5 shows nearest neighbors for a macaque (left) and a schnauzer (right) for feature embeddings trained on ImageNet with different numbers of classes. All green-border images below the dotted line indicate instances of correct fine-grained nearest-neighbor retrieval for features that were never trained on that class.
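Given FC7 features for a labeled database and for query images from the held-out fine-grained classes, induction accuracy reduces to a top-k nearest-neighbor accuracy. The NumPy sketch below uses cosine similarity for retrieval; that choice, and the exact database construction, are assumptions since the section does not specify them.

```python
import numpy as np

def induction_accuracy(db_feats, db_labels, query_feats, query_labels, k=5):
    """db_feats/query_feats: (n, d) FC7 feature arrays; db_labels/query_labels:
    (n,) integer arrays of fine-grained class ids. Returns (top-1, top-k)
    nearest-neighbor classification accuracy."""
    # L2-normalize so that dot products rank neighbors by cosine similarity.
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sims = q @ db.T                                   # (n_query, n_db)
    top1 = db_labels[np.argmax(sims, axis=1)]
    topk = db_labels[np.argsort(-sims, axis=1)[:, :k]]
    top1_acc = float(np.mean(top1 == query_labels))
    topk_acc = float(np.mean(np.any(topk == query_labels[:, None], axis=1)))
    return top1_acc, topk_acc
```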
Quantitative results are shown in Figure 4. The results show that when 127 classes are used, fine-grained recognition k-NN performance is only about 15% lower compared to training directly for these fine-grained classes (i.e. the baseline accuracy). This is rather surprising and suggests that CNNs implicitly discover features capable of distinguishing between finer classes while attempting to distinguish between relatively coarse classes.
(Figure 6 data, difference in percentage points: mammal 17%, snake 13%, arthropod 12%, turtle 10%, container 8%, garment 8%, structure 7%, fruit 7%, bird 7%, tool 3%, covering 3%, fabric 2%, fungus 2%, game equipment 2%, stick 1%, mollusk 1%, boat 1%, home appliance 1%.)
Figure 6: Does the network learn to discriminate coarse semantic concepts by training only on finer sub-classes? The degree to which the concept of a coarse class is learnt was quantified by measuring the difference (in percentage points) between the accuracy of classifying the coarse class and the average accuracy of individually classifying all the sub-classes of this coarse class. Here, the top and bottom classes sorted by this metric are shown, using the label set of size 127 and considering only classes with at least 5 subclasses. We observe that classes whose subclasses are visually consistent (e.g. mammal) are better represented than those that are visually dissimilar (e.g. home appliance).
# 5.4. Does training with fine-grained classes induce features relevant for coarse recognition?
Investigating whether the network learns features relevant for fine-grained recognition by training on coarse classes raises the reverse question: does training with fine-grained classes induce features relevant for coarse recognition? If this is indeed the case, then we would expect that when a CNN makes an error, it is more likely to confuse a sub-class (i.e. an error in fine-grained recognition) with other sub-classes of the same coarse class. This effect can be measured by computing the difference between the accuracy of classifying the coarse class and the average accuracy of individually classifying all the sub-classes of this coarse class (please see the supplementary materials for details).
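A sketch of that measurement is given below; the fine-to-coarse mapping and array inputs are assumed, and the authors' exact protocol is in their supplementary materials, so treat this as one plausible reading of the description above.

```python
import numpy as np

def coarse_vs_subclass_gap(pred_fine, true_fine, fine_to_coarse):
    """pred_fine, true_fine: arrays of predicted and ground-truth fine labels.
    fine_to_coarse: dict mapping each fine label to its coarse parent.
    Returns, per coarse class, the difference (in percentage points) between
    coarse-class accuracy and the mean accuracy of its individual sub-classes."""
    to_coarse = np.vectorize(fine_to_coarse.get)
    pred_coarse = to_coarse(pred_fine)
    true_coarse = to_coarse(true_fine)
    gaps = {}
    for coarse in set(fine_to_coarse.values()):
        mask = true_coarse == coarse
        if not mask.any():
            continue
        # A prediction counts as coarse-correct if it lands in any sub-class
        # of the same coarse parent.
        coarse_acc = np.mean(pred_coarse[mask] == coarse)
        sub_accs = [np.mean(pred_fine[true_fine == f] == f)
                    for f in set(true_fine[mask])]
        gaps[coarse] = 100.0 * (coarse_acc - np.mean(sub_accs))
    return gaps
```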
Figure 6 shows the results. We find that coarse semantic classes such as mammal, fruit, bird, etc. that contain visually similar sub-classes show the hypothesized effect, whereas classes such as tool and home appliance that contain visually dissimilar subclasses do not exhibit this effect. These results indicate that subclasses that share a common visual structure allow the CNN to learn features that are more generalizable. This might suggest a way to improve feature generalization by making class labels respect visual commonality rather than simply WordNet semantics.
# 5.5. More Classes or More Examples Per Class?
Results in previous sections show that it is possible to achieve good performance on transfer tasks using significantly less pre-training data and fewer pre-training classes. However, it is unclear what is more important: the number of classes or the number of examples per class. One extreme is to have only 1 class and all 1.2M images from this class, and the other extreme is to have 1.2M classes and 1 image per class. It is clear that both ways of splitting the data will result in poor generalization, so the answer must lie somewhere in-between.
Dataset Data size More examples/class More classes 500K 57.1 57.0 PASCAL 250K 54.8 52.5 SUN 125K 500K 250K 125K 42.2 50.6 42.3 49.8 50.6 49.7 45.7 46.7
Table 2: For a fixed budget of pre-training data, is it better to have more examples per class and fewer classes, or vice-versa? The row "more examples/class" was pre-trained with subsets of ImageNet containing 500, 250 and 125 classes with 1000 examples each. The row "more classes" was pre-trained with 1000 classes, but 500, 250 and 125 examples each. Interestingly, the transfer performance on both PASCAL and SUN appears to be broadly similar under both scenarios.
Pre-trained Dataset    ImageNet      PASCAL-removed ImageNet    Places
PASCAL                 58.3 ± 0.3    57.8 ± 0.1                 53.8 ± 0.1
Table 3: PASCAL-DET results after pre-training on the entire ImageNet, PASCAL-removed-ImageNet and Places datasets. Removing PASCAL classes from ImageNet leads to an insignificant reduction in performance.
To investigate this, we split the same amount of pre-training data in two ways: (1) more classes with fewer images per class, and (2) fewer classes with more images per class. We use datasets of size 500K, 250K and 125K images for this experiment. For 500K images, we considered two ways of constructing the training set: (1) 1000 classes with 500 images/class, and (2) 500 classes with 1000 images/class. Similar splits were made for data budgets of 250K and 125K images. The 500, 250 and 125 classes for these experiments were drawn from a uniform distribution over the 1000 ImageNet classes. Similarly, the image subsets containing 500, 250 and 125 images were drawn from a uniform distribution over the images that belong to each class.
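The sketch below spells out the two ways of spending the same budget, assuming roughly 1000 usable training images per class (approximately true for ILSVRC-2012); the `images_by_class` input is hypothetical.

```python
import random

def fixed_budget_splits(images_by_class, budget, seed=0):
    """images_by_class: dict class_id -> list of image paths (assumed input).
    Returns (more_examples, more_classes) training sets for a given image
    budget, e.g. 500K, 250K or 125K."""
    rng = random.Random(seed)
    classes = sorted(images_by_class)
    # (1) more classes, fewer images: all 1000 classes, budget // 1000 each.
    n_imgs = budget // len(classes)
    more_classes = {c: rng.sample(images_by_class[c], n_imgs) for c in classes}
    # (2) fewer classes, more images: budget // 1000 classes, 1000 images each.
    n_cls = budget // 1000
    chosen = rng.sample(classes, n_cls)
    more_examples = {c: rng.sample(images_by_class[c], 1000) for c in chosen}
    return more_examples, more_classes
```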
The results presented in Table 2 show that having more images per class with a fewer number of classes results in features that perform very slightly better on PASCAL-DET, whereas for SUN-CLS the performance is comparable across the two settings.
# 5.6. How important is it to pre-train on classes that are also present in a target task?
It is natural to expect that a higher correlation between pre-training and transfer tasks leads to better performance on the transfer task. This has indeed been shown to be true in [44]. One possible source of correlation between pre-training and transfer tasks is classes common to both tasks.
Figure 7: An illustration of the procedure used to split the ImageNet dataset. Splits were constructed in two different ways. The random split selects classes at random from the 1000 ImageNet classes. The minimal split is made in a manner that ensures no two classes in different splits have a common ancestor up to depth four of the WordNet tree. The collage in Figure 8 visualizes the random and minimal splits.
One possible source of correlation between pre-training and transfer tasks is classes common to both tasks. To investigate how strong the influence of these common classes is, we ran an experiment in which we removed from ImageNet all the classes that are contained in the PASCAL challenge. PASCAL has 20 classes, some of which map to more than one ImageNet class; after applying this exclusion criterion, we are left with only 771 ImageNet classes.
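A rough sketch of this exclusion step is given below. The paper does not publish its PASCAL-to-ImageNet mapping, so the PASCAL_TO_IMAGENET dictionary is a hypothetical stand-in for a hand-curated mapping from the 20 PASCAL categories to ImageNet synset ids (wnids); a single PASCAL class may map to several wnids.

```python
# Hypothetical stand-in for the hand-curated PASCAL -> ImageNet wnid mapping.
PASCAL_TO_IMAGENET = {
    "dog": ["n02085620", "n02085782"],  # one entry per matching ImageNet synset
    "cat": ["n02123045"],
    # ... remaining 18 PASCAL categories
}

def pascal_removed_classes(all_wnids):
    """Return the ImageNet classes that remain after excluding every wnid
    mapped to a PASCAL category (771 classes remain in the paper's setup)."""
    excluded = {w for wnids in PASCAL_TO_IMAGENET.values() for w in wnids}
    return [w for w in all_wnids if w not in excluded]
```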
Table 3 compares the results on PASCAL-DET when the PASCAL-removed-ImageNet is used for pre-training against the original ImageNet and a baseline of pre-training on the Places [46] dataset. The PASCAL-removed-ImageNet achieves a mAP of 57.8 (compared to 58.3 with the full ImageNet), indicating that training on ImageNet classes that are not present in PASCAL is sufficient to learn features that are also good for PASCAL classes.
# 6. Does data augmentation from non-target classes always improve performance?
The analysis using the PASCAL-removed ImageNet indicates that pre-training on non-PASCAL classes aids performance on PASCAL. This raises the question: is it always better to add pre-training data from additional classes that are not part of the target task? To test this hypothesis, we chose two different methods of splitting the ImageNet classes. The first is a random split, in which the 1000 ImageNet classes are split randomly; the second is a minimal split, in which the classes are deliberately split to ensure that similar classes are not in the same split (Figure 7). To determine whether additional data helps performance for the classes in split A, we pre-trained two CNNs: one for classifying all classes in split A, and the other for classifying all classes in both splits A and B (i.e. the full dataset). We then finetuned the last layer of the network trained on the full dataset on split A only.
Figure 8: Visualization of the random and minimal splits used for testing whether adding more pre-training data is always useful. The two minimal sets contain disparate sets of objects: minimal split A consists mostly of inanimate objects and minimal split B mostly of living things. On the other hand, the random splits contain semantically similar objects.
If additional data from split B helps performance on split A, then the CNN pre-trained on the full dataset should perform better than the CNN pre-trained only on split A.
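The finetuning protocol described above can be illustrated with a schematic PyTorch-style sketch. The framework choice is ours (the original experiments predate PyTorch), and model_AB, num_classes_A, and train_loader are hypothetical names: the idea is simply that the final classifier is re-initialized for split A's label space and trained while every other parameter stays frozen.

```python
import torch
import torch.nn as nn

def finetune_last_layer(model_AB, num_classes_A, train_loader, epochs=10, lr=1e-3):
    """Freeze a network pre-trained on splits A and B, then train only a new
    final classifier for split A. Assumes a torchvision-style AlexNet whose
    last layer is model.classifier[6]."""
    for p in model_AB.parameters():
        p.requires_grad = False  # keep all pre-trained weights fixed

    in_features = model_AB.classifier[6].in_features
    model_AB.classifier[6] = nn.Linear(in_features, num_classes_A)  # re-initialized head

    opt = torch.optim.SGD(model_AB.classifier[6].parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    model_AB.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            opt.zero_grad()
            loss = loss_fn(model_AB(images), labels)
            loss.backward()
            opt.step()
    return model_AB
```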
Using the random split, Figure 9 shows that the results of this experiment confirm the intuition that additional data is indeed useful for both splits. However, under a random class split within ImageNet, we are almost certain to have extremely similar classes (e.g. two different breeds of dogs) ending up on different sides of the split. So, what we have shown so far is that we can improve performance on, say, husky classification by also training on poodles. Hence the motivation for the minimal split: does adding arbitrary, unrelated classes, such as fire trucks, help dog classification?
The classes in minimal split A do not share any common ancestor with minimal split B up until the nodes at depth 4 of the WordNet hierarchy (Figure 7). This ensures that any class in split A is sufficiently disjoint from any class in split B. Split A has 522 classes and split B has 478 classes (N.B.: for consistency, random splits A and B also had the same number of classes). In order to intuitively understand the difference between min splits A and B, we have visualized a random sample of images from these splits in Figure 8. Min split A consists mostly of inanimate objects and min split B consists mostly of living things.
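One way such a grouping could be computed is sketched below with NLTK's WordNet interface; this is our own illustration, not the authors' procedure. We assume classes are identified by wnids such as 'n02084071', treat the root of each hypernym path as depth 1 (the paper does not state its exact depth convention), and note that overlapping groups would still need to be merged (e.g. with union-find) before whole groups are assigned to split A or split B.

```python
from collections import defaultdict
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def ancestors_at_depth(wnid, depth=4):
    """Return the WordNet ancestors of an ImageNet class that sit at the
    given depth of the hierarchy (root counted as depth 1)."""
    synset = wn.synset_from_pos_and_offset(wnid[0], int(wnid[1:]))
    nodes = set()
    for path in synset.hypernym_paths():  # each path runs root -> synset
        if len(path) >= depth:
            nodes.add(path[depth - 1])
    return nodes

def group_by_depth4_ancestor(wnids):
    """Group classes by their depth-4 ancestors: keeping each group entirely
    inside one split guarantees that no class in split A shares a depth-4
    ancestor with any class in split B."""
    groups = defaultdict(set)
    for wnid in wnids:
        for node in ancestors_at_depth(wnid):
            groups[node].add(wnid)
    return groups
```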
Contrary to the earlier observation, Figure 9 shows that both min split A and min split B perform better than the full dataset when we finetune only the last layer.
Figure 9: Does adding arbitrary classes to the pre-training data always improve transfer performance? This question was tested by training two CNNs, one for classifying the classes in split A and the other for classifying the classes in both splits A and B. We then finetuned the CNN trained on both splits on split A. If adding more pre-training data helps, then the performance of the CNN pre-trained on both splits (black) should be higher than that of the CNN pre-trained on a single split (orange). For random splits this is indeed the case, whereas for minimal splits adding more pre-training data hurts performance. This suggests that additional pre-training data is useful only if it is correlated with the target task.
This result is quite surprising: it shows that, by finetuning only the last layer of a network pre-trained on the full dataset, it is not possible to match the performance of a network trained on just one split. We have observed that, when training all the layers for an extensive amount of time (420K iterations), the accuracy of min split A does benefit from pre-training on split B, but that of min split B does not. One explanation could be that images from split B (e.g. person) are contained in images from split A (e.g. buildings, clothing), but not vice versa.
While it might be possible to recover performance with very clever adjustments of learning rates, the current results suggest that training with data from unrelated classes may push the network into a local minimum from which it might be hard to reach the better optimum that can be obtained by training the network from scratch.
# 7. Discussion
In this work we analyzed factors that affect the quality of ImageNet pre-trained features for transfer learning. Our goal was not to consider alternative neural network architectures, but rather to establish facts about which aspects of the training data are important for feature learning.
The current consensus in the field is that the key to learning highly generalizable deep features is large amounts of training data and a large number of classes.
To quote the influential R-CNN paper: "...success resulted from training a large CNN on 1.2 million labeled images..." [12]. After the publication of R-CNN, most researchers assumed that the full ImageNet is necessary to pre-train good general-purpose features. Our work quantitatively questions this assumption, and yields some quite surprising results. For example, we have found that a significant reduction in the number of classes or the number of images used in pre-training has only a modest effect on transfer task performance.
While we do not have an explanation as to the cause of this resilience, we list some speculative possibilities that should inform further study of this topic:

• In our experiments, we investigated only one CNN architecture, AlexNet. While ImageNet-trained AlexNet features are currently the most popular starting point for fine-tuning on transfer tasks, there exist deeper architectures such as VGG [39], ResNet [15], and GoogLeNet [40]. It would be interesting to see if our findings hold up on deeper networks. If not, it might suggest that AlexNet's capacity is less than previously thought.

• Our results might indicate that researchers have been overestimating the amount of data required for learning good general CNN features. If that is the case, it might suggest that CNN training is not as data-hungry as previously thought. It would also suggest that beating ImageNet-trained features with models trained on a much bigger data corpus will be much harder than once thought.
• Finally, it might be that the currently popular target tasks, such as PASCAL and SUN, are too similar to the original ImageNet task to really test the generalization of the learned features. Alternatively, perhaps a more appropriate way to test generalization is with much less fine-tuning (e.g. one-shot learning) or with no fine-tuning at all (e.g. nearest neighbour in the learned feature space).
In conclusion, while the titular question "What makes ImageNet good for transfer learning?" still lacks a definitive answer, our results have shown that a lot of "folk wisdom" about why ImageNet works well is not accurate. We hope that this paper will pique our colleagues' curiosity and facilitate further research on this fascinating topic.
# 8. Acknowledgements
This work was supported in part by ONR MURI N00014-14-1-0671. We gratefully acknowledge NVIDIA corporation for the donation of K40 GPUs and access to the NVIDIA PSG cluster for this research. We would like to acknowledge the support from the Berkeley Vision and Learning Center (BVLC) and Berkeley DeepDrive (BDD). Minyoung Huh was partially supported by the Rose Hill Foundation.
# References
[1] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In Proceedings of the IEEE International Conference on Computer Vision, pages 37–45, 2015.

[2] P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In Computer Vision–ECCV 2014, pages 329–344. Springer, 2014.

[3] H. Azizpour, A. Razavian, J. Sullivan, A. Maki, and S. Carlsson. From generic to specific deep representations for visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 36–45, 2015.

[4] Y. Bengio, A. C. Courville, and P. Vincent. Unsupervised feature learning and deep learning: A review and new perspectives. CoRR, abs/1206.5538, 1, 2012.

[5] H. Bourlard and Y. Kamp. Auto-association by multilayer perceptrons and singular value decomposition. Biological Cybernetics, 59(4-5):291–294, 1988.
[6] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative error feedback. arXiv preprint arXiv:1507.06550, 2015.
[7] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. arXiv preprint arXiv:1512.04412, 2015.
[8] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pages 1422–1430, 2015.
[9] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625–2634, 2015.
[10] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013.