id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
1609.02200#95 | Discrete Variational Autoencoders | While Equation 31 appears similar to REINFORCE, it is better understood as an importance-weighted estimate of an efficient gradient calculation. Just as a ReLU only has a nonzero gradient in the linear regime, ∂z_i/∂ρ effectively only has a nonzero gradient when z_i = 0, in which case ∂z_i/∂ρ ∼ ∂q_i(z_i = 1)/∂ρ. Unlike in REINFORCE, we do effectively differentiate the reward, W_ij z_i z_j. Moreover, the number of terms contributing to each gradient ∂q_i(z_i = 1)/∂ρ grows only linearly with the number of units in an RBM, whereas it grows quadratically in the method of Section F.3. # G MOTIVATION FOR BUILDING APPROXIMATING POSTERIOR AND PRIOR HIERARCHIES IN THE SAME ORDER Intuition regarding the difficulty of approximating the posterior distribution over the latent variables given the data can be developed by considering sparse coding, an approach that uses a basis set of spatially localized filters (Olshausen & Field, 1996). The basis set is overcomplete, and there are generally many basis elements similar to any selected basis element. However, the sparsity prior pushes the posterior distribution to use only one amongst each set of similar basis elements. As a result, there is a large set of sparse representations of roughly equivalent quality for any single input. Each basis element individually can be replaced with a similar basis element. However, having changed one basis element, the optimal choice for the adjacent elements also changes so the filters mesh properly, avoiding redundancy or gaps. The true posterior is thus highly correlated, since even after conditioning on the input, the probability of a given basis element depends strongly on the selection of the adjacent basis elements. These equivalent representations can easily be disambiguated by the successive layers of the representation. In the simplest case, the previous layer could directly specify which correlated set of basis elements to use amongst the applicable sets. We can therefore achieve greater efficiency by inferring the approximating posterior over the top-most latent layer fi | 1609.02200#94 | 1609.02200#96 | 1609.02200 | [
"1602.08734"
] |
1609.02200#96 | Discrete Variational Autoencoders | rst. Only then do we compute the conditional approximating posteriors of lower layers given a sample from the approximating posterior of the higher layers, breaking the symmetry between representations of similar quality. # H ARCHITECTURE The stochastic approximation to the ELBO is computed via one pass down the approximating pos- terior (Figure 4a), sampling from each continuous latent layer ζi and zm>1 in turn; and another pass down the prior (Figure 4b), conditioned on the sample from the approximating posterior. In the pass down the prior, signals do not ï¬ ow from layer to layer through the entire model. Rather, the input to each layer is determined by the approximating posterior of the previous layers, as follows from Equation 14. The gradient is computed by backpropagating the reconstruction log-likelihood, and the KL divergence between the approximating posterior and true prior at each layer, through this differentiable structure. 19It might also be the case that ζi = 0 when zi = 1, but with our choice of r(ζ|z), this has vanishingly small probability. 20This takes advantage of the fact that zi â {0, 1}. 27 | 1609.02200#95 | 1609.02200#97 | 1609.02200 | [
"1602.08734"
] |
1609.02200#97 | Discrete Variational Autoencoders | All hyperparameters were tuned via manual experimentation. Except in Figure 6, RBMs have 128 units (64 units per side, with full bipartite connections between the two sides), with 4 layers of hierarchy in the approximating posterior. We use 100 iterations of block Gibbs sampling, with 20 persistent chains per element of the minibatch, to sample from the prior in the stochastic approximation to Equation 11. When using the hierarchy of continuous latent variables described in Section 4, discrete VAEs overfit if any component of the prior is overparameterized, as shown in Figure 9a. In contrast, a larger and more powerful approximating posterior generally did not reduce performance within the range examined, as in Figure 9b. In response, we manually tuned the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior. We list the selected values in Table 2. All neural networks implementing components of the approximating posterior contain two hidden layers of 2000 units. [Figure 9 (plots): panels (a) Prior and (b) Approximating posterior; caption follows in the next chunk.] | 1609.02200#96 | 1609.02200#98 | 1609.02200 | [
"1602.08734"
] |
1609.02200#98 | Discrete Variational Autoencoders | Figure 9: Log likelihood on statically binarized MNIST versus the number of hidden units per neural network layer, in the prior (a) and approximating posterior (b). The number of deterministic hidden layers in the networks parameterizing the prior/approximating posterior is 1 (blue), 2 (red), or 3 (green) in (a)/(b), respectively. The number of deterministic hidden layers in the final network parameterizing p(x | z) is 0 (solid) or 1 (dashed). All models use only 10 layers of continuous latent variables, with no parameter sharing. Table 2: Architectural hyperparameters used for each dataset. Successive columns list the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior.

| Dataset | Num layers | Vars per layer | Hids per prior layer | Param sharing |
|---|---|---|---|---|
| MNIST (dyn bin) | 18 | 64 | 1000 | none |
| MNIST (static bin) | 20 | 256 | 2000 | 2 groups |
| Omniglot | 16 | 256 | 800 | 2 groups |
| Caltech-101 Sil | 12 | 80 | 100 | complete |

Smaller datasets require more regularization, and achieve optimal performance with a smaller prior. On statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, we further regularize using recurrent parameter sharing. In the simplest case, each p(z_m | z_{l<m}, θ) and p(x | z, θ) is a function of Σ_{l<m} z_l rather than a function of the concatenation [z_0, z_1, ..., z_{m−1}]. Moreover, all p(z_{m>1} | z_{l<m}, θ) share parameters. The RBM layer z_0 is rendered compatible with this parameterization by using a trainable linear transformation of ζ, M · ζ, where the number of rows in M is | 1609.02200#97 | 1609.02200#99 | 1609.02200 | [
"1602.08734"
] |
1609.02200#99 | Discrete Variational Autoencoders | equal to the number of variables in each z_{m>0}. We refer to this architecture as complete recurrent parameter sharing (a minimal sketch follows below). On datasets of intermediate size, a degree of recurrent parameter sharing somewhere between full independence and complete sharing is beneficial. We define the n-group architecture by dividing the continuous latent layers z_{m≥1} into n equally sized groups of consecutive layers. Each such group is independently subject to recurrent parameter sharing analogous to the complete sharing architecture, and the RBM layer z_0 is independently parameterized. We use the spike-and-exponential transformation described in Section 2.1. The exponent is a trainable parameter, but it is bounded above by a value that increases linearly with the number of training epochs. We use warm-up with strength 20 for 5 epochs, and additional warm-up of strength 2 on the RBM alone for 20 epochs (Raiko et al., 2007; Bowman et al., 2016; Sønderby et al., 2016). When p(x | z) is linear, all nonlinear transformations are part of the prior over the latent variables. In contrast, it is also possible to define the prior distribution over the continuous latent variables to be a simple factorial distribution, and push the nonlinearity into the final decoder p(x | z), as in traditional VAEs. The former case can be reduced to something analogous to the latter case using the reparameterization trick. However, a VAE with a completely independent prior does not regularize the nonlinearity of the prior; whereas a hierarchical prior requires that the nonlinearity of the prior (via its effect on the true posterior) be well-represented by the approximating posterior. Viewed another way, a completely independent prior requires the model to consist of many independent sources of variance, so the data manifold must be fully unfolded into an isotropic ball. A hierarchical prior allows the data manifold to remain curled within a higher-dimensional ambient space, with the approximating posterior merely tracking its contortions. A higher-dimensional ambient space makes sense when modeling multiple classes of objects. For instance, the parameters characterizing limb positions and orientations for people have no analog for houses. | 1609.02200#98 | 1609.02200#100 | 1609.02200 | [
"1602.08734"
] |
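The "complete recurrent parameter sharing" described in the two chunks above can be pictured with a short sketch. This is only an illustration under stated assumptions: it uses PyTorch, it assumes a Gaussian parameterization for the continuous layers (the paper's exact form is not reproduced here), and the class name, layer sizes, and variable names are ours rather than the authors' code.

```python
# Sketch of complete recurrent parameter sharing in the hierarchical prior.
# Assumptions: PyTorch, Gaussian continuous layers; sizes follow the MNIST
# (dyn bin) row of Table 2 (18 layers, 64 vars/layer, 1000 hidden, 128 RBM units).
import torch
import torch.nn as nn

class SharedRecurrentPrior(nn.Module):
    def __init__(self, n_rbm=128, n_per_layer=64, n_layers=18, n_hidden=1000):
        super().__init__()
        # Trainable linear map M that makes the RBM layer compatible with the
        # continuous layers: M @ zeta has as many rows as each z_m.
        self.M = nn.Linear(n_rbm, n_per_layer, bias=False)
        # One network shared by all p(z_m | z_{l<m}); it sees only the running
        # sum of the previous layers, not their concatenation.
        self.shared = nn.Sequential(
            nn.Linear(n_per_layer, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 2 * n_per_layer),   # mean and log-variance
        )
        self.n_layers = n_layers

    def forward(self, zeta):
        running_sum = self.M(zeta)                  # contribution of the RBM layer
        samples = []
        for _ in range(self.n_layers):
            mu, logvar = self.shared(running_sum).chunk(2, dim=-1)
            z_m = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            samples.append(z_m)
            running_sum = running_sum + z_m         # only the sum is carried forward
        return samples
```

The n-group variant would simply instantiate n such shared networks, one per group of consecutive layers.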
1609.02200#100 | Discrete Variational Autoencoders | # H.1 ESTIMATING THE LOG PARTITION FUNCTION We estimate the log-likelihood by subtracting an estimate of the log partition function of the RBM (log Zp from Equation 6) from an importance-weighted computation analogous to that of Burda et al. (2016). For this purpose, we estimate the log partition function using bridge sampling, a variant of Bennettâ s acceptance ratio method (Bennett, 1976; Shirts & Chodera, 2008), which produces unbiased estimates of the partition function. Interpolating distributions were of the form p(x)β, and sampled with a parallel tempering routine (Swendsen & Wang, 1986). The set of smoothing parameters β in [0, 1] were chosen to approximately equalize replica exchange rates at 0.5. This standard criteria simultaneously keeps mixing times small, and allows for robust inference. We make a conservative estimate for burn-in (0.5 of total run time), and choose the total length of run, and number of repeated experiments, to achieve sufï¬ cient statistical accuracy in the log partition function. In Figure 10, we plot the distribution of independent estimations of the log-partition function for a single model of each dataset. These estimates differ by no more than about 0.1, indicating that the estimate of the log-likelihood should be accurate to within about 0.05 nats. # H.2 CONSTRAINED LAPLACIAN BATCH NORMALIZATION Rather than traditional batch normalization (Ioffe & Szegedy, 2015), we base our batch normaliza- tion on the L1 norm. | 1609.02200#99 | 1609.02200#101 | 1609.02200 | [
"1602.08734"
] |
1609.02200#101 | Discrete Variational Autoencoders | Specifically, we use y = x − x̄ and x_n = y / (mean(|y|) + ε) ⊙ s + o, where x is a minibatch of scalar values, x̄ denotes the mean of x, ⊙ indicates element-wise multiplication, ε is a small positive constant, s is a learned scale, and o is a learned offset (a short code sketch follows below). For the approximating posterior over the RBM units, we bound 2 < s < 3, and −s < o < s. This helps ensure that all units are both active and inactive in each minibatch, and thus that all units are used. | 1609.02200#100 | 1609.02200#102 | 1609.02200 | [
"1602.08734"
] |
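A minimal numpy sketch of the L1-based batch normalization above. The bounds on s and o follow the text; the function name, the ε value, and enforcing the bounds with a simple clip are our assumptions, not the authors' implementation.

```python
# L1-based ("constrained Laplacian") batch normalization, per the text above.
import numpy as np

def l1_batch_norm(x, s, o, eps=1e-4, clamp=False):
    """x: minibatch of scalar values per unit, shape (batch, units)."""
    y = x - x.mean(axis=0, keepdims=True)                 # y = x - x_bar
    x_n = y / (np.abs(y).mean(axis=0, keepdims=True) + eps)
    if clamp:
        # For the approximating posterior over RBM units the text bounds
        # 2 < s < 3 and -s < o < s, so every unit is both active and
        # inactive somewhere in each minibatch.
        s = np.clip(s, 2.0, 3.0)
        o = np.clip(o, -s, s)
    return x_n * s + o                                    # elementwise scale and offset
```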
1609.02200#102 | Discrete Variational Autoencoders | 29 Published as a conference paper at ICLR 2017 (a) MNIST (dyn bin) (b) MNIST (static bin) (c) Omniglot (d) Caltech-101 Silhouettes Figure 10: Distribution of estimates of the log-partition function, using Bennettâ s acceptance ratio method with parallel tempering, for a single model trained on dynamically binarized MNIST (a), statically binarized MNIST (b), Omniglot (c), and Caltech-101 Silhouettes (d) # I COMPARISON MODELS In Table 1, we compare the performance of the discrete variational autoencoder to a selection of recent, competitive models. For dynamically binarized MNIST, we compare to deep belief networks (DBN; Hinton et al., 2006), reporting the results of Murray & Salakhutdinov (2009); importance- weighted autoencoders (IWAE; Burda et al., 2016); and ladder variational autoencoders (Ladder VAE; Sønderby et al., 2016). For the static MNIST binarization of (Salakhutdinov & Murray, 2008), we compare to Hamilto- nian variational inference (HVI; Salimans et al., 2015); the deep recurrent attentive writer (DRAW; Gregor et al., 2015); the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015); deep latent Gaussian models with normalizing ï¬ | 1609.02200#101 | 1609.02200#103 | 1609.02200 | [
"1602.08734"
] |
1609.02200#103 | Discrete Variational Autoencoders | ows (Nor- malizing ï¬ ows; Rezende & Mohamed, 2015); and the variational Gaussian process (Tran et al., 2016). On Omniglot, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016); ladder variational autoencoder (Ladder VAE; Sønderby et al., 2016); and the restricted Boltzmann machine (RBM; Smolensky, 1986) and deep belief network (DBN; Hinton et al., 2006), reporting the results of Burda et al. (2015). Finally, for Caltech-101 Silhouettes, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016), reporting the results of Li & Turner (2016); reweighted wake-sleep with a deep sigmoid belief network (RWS SBN; Bornschein & Bengio, 2015); the restricted Boltzmann machine (RBM; Smolensky, 1986), reporting the results of Cho et al. (2013); and the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015). | 1609.02200#102 | 1609.02200#104 | 1609.02200 | [
"1602.08734"
] |
1609.02200#104 | Discrete Variational Autoencoders | [Figure 11 (image): grid of MNIST digit samples drawn along persistent RBM Markov chains; the caption is given in the next chunk.] | 1609.02200#103 | 1609.02200#105 | 1609.02200 | [
"1602.08734"
] |
1609.02200#105 | Discrete Variational Autoencoders | Figure 11: Evolution of samples from a discrete VAE trained on statically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the digit ID remains constant demonstrate that the RBM has distinct modes, each of which corresponds to a single digit ID, despite being trained in a wholly unsupervised manner. # J SUPPLEMENTARY RESULTS To highlight the contribution of the various components of our generative model, we investigate performance on a selection of simplified models.21 First, we remove the continuous latent layers. The resulting prior, depicted in Figure 1b, consists of the bipartite Boltzmann machine (RBM), the smoothing variables ζ, and a factorial Bernoulli distribution over the observed variables x defined via a deep neural network with a logistic final layer. This probabilistic model achieves a log-likelihood of −85.2 with 200 RBM units. | 1609.02200#104 | 1609.02200#106 | 1609.02200 | [
"1602.08734"
] |
1609.02200#106 | Discrete Variational Autoencoders | Next, we further restrict the neural network defining the distribution over the observed variables x given the smoothing variables ζ to consist of a linear transformation followed by a pointwise logistic nonlinearity, analogous to a sigmoid belief network (SBN; Spiegelhalter & Lauritzen, 1990; Neal, 1992). This decreases the log-likelihood to −88.8 with 200 RBM units. We then remove the lateral connections in the RBM, reducing it to a set of independent binary random variables. The resulting network is a noisy sigmoid belief network. That is, samples are produced by drawing samples from the independent binary random variables, multiplying by an independent noise source, and then sampling from the observed variables as in a standard SBN. With this SBN-like architecture, the discrete variational autoencoder achieves a log-likelihood of | 1609.02200#105 | 1609.02200#107 | 1609.02200 | [
"1602.08734"
] |
1609.02200#107 | Discrete Variational Autoencoders | Finally, we replace the hierarchical approximating posterior of Figure 3a with the factorial approximating posterior of Figure 1a. This simplification of the approximating posterior, in addition to the prior, reduces the log-likelihood to −. 21 In all cases, we report the negative log-likelihood on statically binarized MNIST (Salakhutdinov & Murray, 2008), estimated with 10^4 importance-weighted samples (Burda et al., 2016). | 1609.02200#106 | 1609.02200#108 | 1609.02200 | [
"1602.08734"
] |
1609.02200#108 | Discrete Variational Autoencoders | [Figure 12 (image): grid of Omniglot samples from persistent RBM Markov chains.] Figure 12: Evolution of samples from a discrete VAE trained on Omniglot, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. [Figure 13 (image): grid of Caltech-101 Silhouettes samples from persistent RBM Markov chains.] Figure 13: | 1609.02200#107 | 1609.02200#109 | 1609.02200 | [
"1602.08734"
] |
1609.02200#109 | Discrete Variational Autoencoders | Evolution of samples from a discrete VAE trained on Caltech-101 Silhouettes, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the silhouette shape remains similar demonstrate that the RBM has distinct modes, each of which corresponds to a single silhouette type, despite being trained in a wholly unsupervised manner. | 1609.02200#108 | 1609.02200#110 | 1609.02200 | [
"1602.08734"
] |
1609.02200#110 | Discrete Variational Autoencoders | 32 Published as a conference paper at ICLR 2017 Figures 11, 12, and 13 repeat the analysis of Figure 5 for statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes. Speciï¬ cally, they show the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of ï¬ ve samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the object class over many successive rows in Figures 11, 12, and 13 indicates that the RBM prior has well-separated modes. On statically binarized MNIST, the RBM still learns distinct, separated modes corresponding to most of the different digit types. However, these modes are not as well separated as in dynamically binarized MNIST, as is evident from the more rapid switching between digit types in Figure 11. There are not obvious modes for Omniglot in Figure 12; it is plausible that an RBM with 128 units could not represent enough well-separated modes to capture the large number of distinct character types in the Omniglot dataset. On Caltech-101 Silhouettes, there may be a mode corresponding to large, roughly convex blobs. | 1609.02200#109 | 1609.02200#111 | 1609.02200 | [
"1602.08734"
] |
1608.08710#0 | Pruning Filters for Efficient ConvNets | arXiv:1608.08710v3 [cs.CV] 10 Mar 2017. Published as a conference paper at ICLR 2017 # PRUNING FILTERS FOR EFFICIENT CONVNETS Hao Li∗ University of Maryland [email protected] Asim Kadav NEC Labs America [email protected] Igor Durdanovic NEC Labs America [email protected] | 1608.08710#1 | 1608.08710 | [
"1602.07360"
] |
1608.08710#1 | Pruning Filters for Efficient ConvNets | Hanan Samet† University of Maryland [email protected] Hans Peter Graf NEC Labs America [email protected] # ABSTRACT The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced signifi | 1608.08710#0 | 1608.08710#2 | 1608.08710 | [
"1602.07360"
] |
1608.08710#2 | Pruning Filters for Efficient ConvNets | cantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efï¬ cient BLAS libraries for dense matrix multiplications. We show that even simple ï¬ lter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks. # INTRODUCTION The ImageNet challenge has led to signiï¬ cant advancements in exploring various architectural choices in CNNs (Russakovsky et al. (2015); Krizhevsky et al. (2012); Simonyan & Zisserman (2015); Szegedy et al. (2015a); He et al. (2016)). The general trend since the past few years has been that the networks have grown deeper, with an overall increase in the number of parameters and convolution operations. These high capacity networks have signiï¬ cant inference costs especially when used with embedded sensors or mobile devices where computational and power resources may be limited. For these applications, in addition to accuracy, computational efï¬ ciency and small network sizes are crucial enabling factors (Szegedy et al. (2015b)). In addition, for web services that provide image search and image classiï¬ cation APIs that operate on a time budget often serving hundreds of thousands of images per second, beneï¬ t signiï¬ cantly from lower inference times. There has been a signiï¬ cant amount of work on reducing the storage and computation costs by model compression (Le Cun et al. (1989); Hassibi & Stork (1993); Srinivas & Babu (2015); Han et al. (2015); Mariet & Sra (2016)). Recently Han et al. (2015; 2016b) report impressive compression rates on AlexNet (Krizhevsky et al. (2012)) and VGGNet (Simonyan & Zisserman (2015)) by pruning weights with small magnitudes and then retraining without hurting the overall accuracy. | 1608.08710#1 | 1608.08710#3 | 1608.08710 | [
"1602.07360"
] |
1608.08710#3 | Pruning Filters for Efficient ConvNets | However, pruning parameters does not necessarily reduce the computation time since the majority of the parameters removed are from the fully connected layers where the computation cost is low, e.g., the fully connected layers of VGG-16 occupy 90% of the total parameters but only contribute less than 1% of the overall ï¬ oating point operations (FLOP). They also demonstrate that the convolutional layers can be compressed and accelerated (Iandola et al. (2016)), but additionally require sparse | 1608.08710#2 | 1608.08710#4 | 1608.08710 | [
"1602.07360"
] |
1608.08710#4 | Pruning Filters for Efficient ConvNets | ∗ Work done at NEC Labs. † Supported in part by the NSF under Grant IIS-13-2079. BLAS libraries or even specialized hardware (Han et al. (2016a)). Modern libraries that provide speedup using sparse operations over CNNs are often limited (Szegedy et al. (2015a); Liu et al. (2015)), and maintaining sparse data structures also creates an additional storage overhead which can be significant for low-precision weights. Recent work on CNNs has yielded deep architectures with more efficient design (Szegedy et al. (2015a;b); He & Sun (2015); He et al. (2016)), in which the fully connected layers are replaced with average pooling layers (Lin et al. (2013); He et al. (2016)), which reduces the number of parameters signifi | 1608.08710#3 | 1608.08710#5 | 1608.08710 | [
"1602.07360"
] |
1608.08710#5 | Pruning Filters for Efficient ConvNets | cantly. The computation cost is also reduced by downsampling the image at an early stage to reduce the size of feature maps (He & Sun (2015)). Nevertheless, as the networks continue to become deeper, the computation costs of convolutional layers continue to dominate. CNNs with large capacity usually have signiï¬ cant redundancy among different ï¬ lters and feature channels. In this work, we focus on reducing the computation cost of well-trained CNNs by pruning ï¬ lters. Compared to pruning weights across the network, ï¬ lter pruning is a naturally structured way of pruning without introducing sparsity and therefore does not require using sparse libraries or any specialized hardware. The number of pruned ï¬ lters correlates directly with acceleration by reducing the number of matrix multiplications, which is easy to tune for a target speedup. In addition, instead of layer-wise iterative ï¬ ne-tuning (retraining), we adopt a one-shot pruning and retraining strategy to save retraining time for pruning ï¬ lters across multiple layers, which is critical for pruning very deep networks. Finally, we observe that even for ResNets, which have signiï¬ cantly fewer parameters and inference costs than AlexNet or VGGNet, still have about 30% of FLOP reduction without sacriï¬ | 1608.08710#4 | 1608.08710#6 | 1608.08710 | [
"1602.07360"
] |
1608.08710#6 | Pruning Filters for Efficient ConvNets | cing too much accuracy. We conduct sensitivity analysis for convolutional layers in ResNets that improves the understanding of ResNets. # 2 RELATED WORK The early work by Le Cun et al. (1989) introduces Optimal Brain Damage, which prunes weights with a theoretically justiï¬ ed saliency measure. Later, Hassibi & Stork (1993) propose Optimal Brain Surgeon to remove unimportant weights determined by the second-order derivative information. Mariet & Sra (2016) reduce the network redundancy by identifying a subset of diverse neurons that does not require retraining. However, this method only operates on the fully-connected layers and introduce sparse connections. To reduce the computation costs of the convolutional layers, past work have proposed to approximate convolutional operations by representing the weight matrix as a low rank product of two smaller matrices without changing the original number of ï¬ lters (Denil et al. (2013); Jaderberg et al. (2014); Zhang et al. (2015b;a); Tai et al. (2016); Ioannou et al. (2016)). Other approaches to reduce the convolutional overheads include using FFT based convolutions (Mathieu et al. (2013)) and fast convolution using the Winograd algorithm (Lavin & Gray (2016)). Additionally, quantization (Han et al. (2016b)) and binarization (Rastegari et al. (2016); Courbariaux & Bengio (2016)) can be used to reduce the model size and lower the computation overheads. Our method can be used in addition to these techniques to reduce computation costs without incurring additional overheads. Several work have studied removing redundant feature maps from a well trained network (Anwar et al. (2015); Polyak & Wolf (2015)). Anwar et al. (2015) introduce a three-level pruning of the weights and locate the pruning candidates using particle ï¬ ltering, which selects the best combination from a number of random generated masks. Polyak & Wolf (2015) detect the less frequently activated feature maps with sample input data for face detection applications. We choose to analyze the ï¬ lter weights and prune ï¬ lters with their corresponding feature maps using a simple magnitude based measure, without examining possible combinations. We also introduce network-wide holistic approaches to prune ï¬ | 1608.08710#5 | 1608.08710#7 | 1608.08710 | [
"1602.07360"
] |
1608.08710#7 | Pruning Filters for Efficient ConvNets | lters for simple and complex convolutional network architectures. Concurrently with our work, there is a growing interest in training compact CNNs with sparse constraints (Lebedev & Lempitsky (2016); Zhou et al. (2016); Wen et al. (2016)). Lebedev & Lempitsky (2016) leverage group-sparsity on the convolutional filters to achieve structured brain damage, i.e., prune the entries of the convolution kernel in a group-wise fashion. Zhou et al. (2016) add group-sparse regularization on neurons during training to learn compact CNNs with reduced filters. Wen et al. (2016) add a structured sparsity regularizer on each layer to reduce trivial filters, channels or even layers. In the filter-level pruning, all of the above work uses the ℓ2,1-norm as a regularizer. | 1608.08710#6 | 1608.08710#8 | 1608.08710 | [
"1602.07360"
] |
1608.08710#8 | Pruning Filters for Efficient ConvNets | Similar to the above work, we use the ℓ1-norm to select unimportant filters and physically prune them. Our fine-tuning process is the same as the conventional training procedure, without introducing additional regularization. Our approach does not introduce extra layer-wise meta-parameters for the regularizer except for the percentage of filters to be pruned, which is directly related to the desired speedup. By employing stage-wise pruning, we can set a single pruning rate for all layers in one stage. # 3 PRUNING FILTERS AND FEATURE MAPS Let n_i denote the number of input channels for the ith convolutional layer and h_i/w_i be the height/width of the input feature maps. The convolutional layer transforms the input feature maps x_i ∈ | 1608.08710#7 | 1608.08710#9 | 1608.08710 | [
"1602.07360"
] |
1608.08710#9 | Pruning Filters for Efficient ConvNets | R^{n_i × h_i × w_i} into the output feature maps x_{i+1} ∈ R^{n_{i+1} × h_{i+1} × w_{i+1}}, which are used as input feature maps for the next convolutional layer. This is achieved by applying n_{i+1} 3D filters F_{i,j} ∈ R^{n_i × k × k} on the n_i input channels, in which one filter generates one feature map. Each filter is composed of n_i 2D kernels K ∈ R^{k × k} (e.g., 3 × 3). All the filters, together, constitute the kernel matrix F_i ∈ R^{n_i × n_{i+1} × k × k}. | 1608.08710#8 | 1608.08710#10 | 1608.08710 | [
"1602.07360"
] |
1608.08710#10 | Pruning Filters for Efficient ConvNets | The number of operations of the convolutional layer is n_{i+1} n_i k^2 h_{i+1} w_{i+1}. As shown in Figure 1, when a filter F_{i,j} is pruned, its corresponding feature map x_{i+1,j} is removed, which reduces n_i k^2 h_{i+1} w_{i+1} operations. The kernels that apply on the removed feature maps from the filters of the next convolutional layer are also removed, which saves an additional n_{i+2} k^2 h_{i+2} w_{i+2} operations. Pruning m filters of layer i will reduce m/n_{i+1} of the computation cost for both layers i and i + 1. [Figure 1 (image): kernel matrix F_i with feature maps x_i, x_{i+1}, x_{i+2} and channel counts n_i, n_{i+1}, n_{i+2}.] Figure 1: Pruning a filter results in removal of its corresponding feature map and related kernels in the next layer. 3.1 DETERMINING WHICH FILTERS TO PRUNE WITHIN A SINGLE LAYER Our method prunes the less useful filters from a well-trained model for computational efficiency while minimizing the accuracy drop. We measure the relative importance of a filter in each layer by calculating the sum of its absolute weights Σ |F_{i,j}|, i.e., its ℓ1-norm ||F_{i,j}||_1. Since the number of input channels, n_i, is the same across filters, Σ |F_{i,j}| also represents the average magnitude of its kernel weights. This value gives an expectation of the magnitude of the output feature map. Filters with smaller kernel weights tend to produce feature maps with weak activations as compared to the other filters in that layer. Figure 2(a) illustrates the distribution of filters' absolute weights sum for each convolutional layer in a VGG-16 network trained on the CIFAR-10 dataset, where the distribution varies significantly across layers. We find that pruning the smallest filters works better in comparison with pruning the same number of random or largest filters (Section 4.4). Compared to other criteria for activation-based feature map pruning (Section 4.5), we find the ℓ1-norm is a good criterion for data-free filter selection. The procedure of pruning m filters from the ith convolutional layer is as follows (a short code sketch appears below): 1. For each filter F_{i,j}, calculate the sum of its absolute kernel weights s_j = Σ_{l=1}^{n_i} Σ |K_l|. 2. | 1608.08710#9 | 1608.08710#11 | 1608.08710 | [
"1602.07360"
] |
1608.08710#11 | Pruning Filters for Efficient ConvNets | Sort the filters by s_j. 3. Prune m filters with the smallest sum values and their corresponding feature maps. The kernels in the next convolutional layer corresponding to the pruned feature maps are also removed. 4. A new kernel matrix is created for both the ith and i + 1th layers, and the remaining kernel weights are copied to the new model. [Figure 2 (plots): (a) filters ranked by s_j; (b) pruning the smallest filters; (c) prune and retrain, showing per-layer accuracy on CIFAR-10 versus the percentage of filters pruned away.] Figure 2: (a) Sorting filters by absolute weights sum for each layer of VGG-16 on CIFAR-10. The x-axis is the filter index divided by the total number of filters. The y-axis is the filter weight sum divided by the max sum value among filters in that layer. (b) Pruning filters with the lowest absolute weights sum and their corresponding test accuracies on CIFAR-10. (c) Prune and retrain for each single layer of VGG-16 on CIFAR-10. Some layers are sensitive and it can be harder to recover accuracy after pruning them. Relationship to pruning weights Pruning filters with low absolute weights sum is similar to pruning low magnitude weights (Han et al. (2015)). Magnitude-based weight pruning may prune away whole fi | 1608.08710#10 | 1608.08710#12 | 1608.08710 | [
"1602.07360"
] |
1608.08710#12 | Pruning Filters for Efficient ConvNets | lters when all the kernel weights of a filter are lower than a given threshold. However, it requires a careful tuning of the threshold and it is difficult to predict the exact number of filters that will eventually be pruned. Furthermore, it generates sparse convolutional kernels which can be hard to accelerate given the lack of efficient sparse libraries, especially for the case of low-sparsity. Relationship to group-sparse regularization on filters Recent work (Wen et al. (2016)) applies group-sparse regularization (Σ_j ||F_{i,j}||_2, i.e., the ℓ2,1-norm) on convolutional filters, which also favors zeroing out filters with small ℓ2-norms, i.e., F_{i,j} = 0. In practice, we do not observe a noticeable difference between the ℓ2-norm and the ℓ1-norm for filter selection, as the important filters tend to have large values for both measures (see the Appendix). Zeroing out weights of multiple filters during training has a similar effect to pruning filters with the strategy of iterative pruning and retraining as introduced in Section 3.4. 3.2 DETERMINING SINGLE LAYER'S SENSITIVITY TO PRUNING To understand the sensitivity of each layer, we prune each layer independently and evaluate the resulting pruned network's accuracy on the validation set. Figure 2(b) shows that layers that maintain their accuracy as filters are pruned away correspond to layers with larger slopes in Figure 2(a). On the contrary, layers with relatively flat slopes are more sensitive to pruning. We empirically determine the number of filters to prune for each layer based on their sensitivity to pruning. For deep networks such as VGG-16 or ResNets, we observe that layers in the same stage (with the same feature map size) have a similar sensitivity to pruning. To avoid introducing layer-wise meta-parameters, we use the same pruning ratio for all layers in the same stage. For layers that are sensitive to pruning, we prune a smaller percentage of these layers or completely skip pruning them. # 3.3 PRUNING FILTERS ACROSS MULTIPLE LAYERS We now discuss how to prune filters across the network. | 1608.08710#11 | 1608.08710#13 | 1608.08710 | [
"1602.07360"
] |
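A minimal numpy sketch of the single-layer procedure from Section 3.1 above: score each filter by the ℓ1-norm of its weights, drop the m smallest, and remove the matching input kernels of the next layer. Array shapes follow the paper's notation; the function name and layout are ours, not the authors' Torch7 implementation.

```python
# Section 3.1 procedure in numpy. W_i has shape (n_{i+1}, n_i, k, k) and
# W_next has shape (n_{i+2}, n_{i+1}, k, k).
import numpy as np

def prune_smallest_filters(W_i, W_next, m):
    # 1. s_j = sum of absolute kernel weights of filter j (its l1-norm).
    s = np.abs(W_i).reshape(W_i.shape[0], -1).sum(axis=1)
    # 2./3. Sort by s_j and drop the m filters with the smallest sums.
    keep = np.sort(np.argsort(s)[m:])
    # 4. Build the new kernel matrices: remove the pruned output filters here
    #    and the corresponding input kernels of the next layer.
    W_i_new = W_i[keep]
    W_next_new = W_next[:, keep]
    return W_i_new, W_next_new, keep

# Pruning m of the n_{i+1} filters removes m/n_{i+1} of the multiply-adds of
# both layer i and layer i+1, per the operation counts given above.
```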
1608.08710#13 | Pruning Filters for Efficient ConvNets | Previous work prunes the weights on a layer-by-layer basis, followed by iteratively retraining and compensating for any loss of accuracy (Han et al. (2015)). However, understanding how to prune filters of multiple layers at once can be useful: 1) for deep networks, pruning and retraining on a layer-by-layer basis can be extremely time-consuming; 2) pruning layers across the network gives a holistic view of the robustness of the network, resulting in a smaller network; 3) for complex networks, a holistic approach may be necessary. For example, for the ResNet, pruning the identity feature maps or the second layer of each residual block results in additional pruning of other layers. To prune filters across multiple layers, we consider two strategies for layer-wise filter selection: | 1608.08710#12 | 1608.08710#14 | 1608.08710 | [
"1602.07360"
] |
1608.08710#14 | Pruning Filters for Efficient ConvNets | • Independent pruning determines which filters should be pruned at each layer independent of other layers. • Greedy pruning accounts for the filters that have been removed in the previous layers. This strategy does not consider the kernels for the previously pruned feature maps while calculating the sum of absolute weights. Figure 3 illustrates the difference between the two approaches in calculating the sum of absolute weights (a small code sketch contrasting them follows below). The greedy approach, though not globally optimal, is holistic and results in pruned networks with higher accuracy especially when many filters are pruned. | 1608.08710#13 | 1608.08710#15 | 1608.08710 | [
"1602.07360"
] |
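A small numpy sketch contrasting the two selection strategies for one layer, given the filter indices already removed from the previous layer. The function and variable names are ours.

```python
# Independent vs. greedy filter scores for layer i (numpy sketch).
import numpy as np

def filter_scores(W_i, pruned_prev, greedy=False):
    """W_i: (n_{i+1}, n_i, k, k); pruned_prev: filter indices pruned in layer i-1."""
    W = W_i.copy()
    if greedy:
        # Greedy: ignore kernels that act on feature maps already removed
        # from the previous layer when summing absolute weights.
        W[:, pruned_prev] = 0.0
    # Independent: keep every kernel, including those on removed maps.
    return np.abs(W).reshape(W.shape[0], -1).sum(axis=1)
```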
1608.08710#15 | Pruning Filters for Efficient ConvNets | [Figure 3 (image): consecutive layers x_i, x_{i+1}, x_{i+2} with the summed kernel columns highlighted.] Figure 3: Pruning filters across consecutive layers. The independent pruning strategy calculates the filter sum (columns marked in green) without considering feature maps removed in previous layer (shown in blue), so the kernel weights marked in yellow are still included. The greedy pruning strategy does not count kernels for the already pruned feature maps. Both approaches result in a (n_{i+1} − 1) × (n_{i+2} − 1) kernel matrix. [Figure 4 (image): residual block with projection shortcut P(x) over x_i, x_{i+1}, x_{i+2}.] Figure 4: Pruning residual blocks with the projection shortcut. The filters to be pruned for the second layer of the residual block (marked as green) are determined by the pruning result of the shortcut projection. The first layer of the residual block can be pruned without restrictions. For simpler CNNs like VGGNet or AlexNet, we can easily prune any of the filters in any convolutional layer. However, for complex network architectures such as Residual networks (He et al. (2016)), pruning filters may not be straightforward. The architecture of ResNet imposes restrictions and the filters need to be pruned carefully. We show the filter pruning for residual blocks with projection mapping in Figure 4. Here, the filters of the fi | 1608.08710#14 | 1608.08710#16 | 1608.08710 | [
"1602.07360"
] |
1608.08710#16 | Pruning Filters for Efficient ConvNets | rst layer in the residual block can be arbitrarily pruned, as it does not change the number of output feature maps of the block. However, the correspondence between the output feature maps of the second convolutional layer and the identity feature maps makes it difficult to prune. Hence, to prune the second convolutional layer of the residual block, the corresponding projected feature maps must also be pruned. Since the identical feature maps are more important than the added residual maps, the feature maps to be pruned should be determined by the pruning results of the shortcut layer. To determine which identity feature maps are to be pruned, we use the same selection criterion based on the filters of the shortcut convolutional layers (with 1 × 1 kernels). The second layer of the residual block is pruned with the same filter index as selected by the pruning of the shortcut layer. # 3.4 RETRAINING PRUNED NETWORKS TO REGAIN ACCURACY After pruning the filters, the performance degradation should be compensated by retraining the network. There are two strategies to prune the filters across multiple layers: | 1608.08710#15 | 1608.08710#17 | 1608.08710 | [
"1602.07360"
] |
1608.08710#17 | Pruning Filters for Efficient ConvNets | 1. Prune once and retrain: Prune filters of multiple layers at once and retrain them until the original accuracy is restored. 2. Prune and retrain iteratively: Prune filters layer by layer or filter by filter and then retrain iteratively. The model is retrained before pruning the next layer for the weights to adapt to the changes from the pruning process. (Both strategies are outlined in the sketch below.) | 1608.08710#16 | 1608.08710#18 | 1608.08710 | [
"1602.07360"
] |
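The two strategies above reduce to different control flow. The outline below is schematic: `prune_layer` and `train` are hypothetical stand-ins for a real pruning and fine-tuning pipeline, and the epoch counts are placeholders, not the authors' settings.

```python
# Schematic control flow of the two retraining strategies.
def prune_layer(model, layer, rate):   # hypothetical: drop `rate` of the layer's filters
    return model

def train(model, epochs):              # hypothetical: fine-tune for `epochs`
    return model

def prune_once_and_retrain(model, plan, retrain_epochs=40):
    for layer, rate in plan:
        model = prune_layer(model, layer, rate)       # prune all target layers first
    return train(model, epochs=retrain_epochs)        # then retrain a single time

def prune_and_retrain_iteratively(model, plan, epochs_per_step=40):
    for layer, rate in plan:
        model = prune_layer(model, layer, rate)       # prune one layer ...
        model = train(model, epochs=epochs_per_step)  # ... retrain, then move on
    return model
```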
1608.08710#18 | Pruning Filters for Efficient ConvNets | We ï¬ nd that for the layers that are resilient to pruning, the prune and retrain once strategy can be used to prune away signiï¬ cant portions of the network and any loss in accuracy can be regained by retraining for a short period of time (less than the original training time). However, when some ï¬ lters from the sensitive layers are pruned away or large portions of the networks are pruned away, it may not be possible to recover the original accuracy. Iterative pruning and retraining may yield better results, but the iterative process requires many more epochs especially for very deep networks. | 1608.08710#17 | 1608.08710#19 | 1608.08710 | [
"1602.07360"
] |
1608.08710#19 | Pruning Filters for Efficient ConvNets | # 4 EXPERIMENTS We prune two types of networks: simple CNNs (VGG-16 on CIFAR-10) and Residual networks (ResNet-56/110 on CIFAR-10 and ResNet-34 on ImageNet). Unlike AlexNet or VGG (on ImageNet) that are often used to demonstrate model compression, both VGG (on CIFAR-10) and Residual networks have fewer parameters in the fully connected layers. Hence, pruning a large percentage of parameters from these networks is challenging. | 1608.08710#18 | 1608.08710#20 | 1608.08710 | [
"1602.07360"
] |
1608.08710#20 | Pruning Filters for Efficient ConvNets | We implement our ï¬ lter pruning method in Torch7 (Collobert et al. (2011)). When ï¬ lters are pruned, a new model with fewer ï¬ lters is created and the remaining parameters of the modiï¬ ed layers as well as the unaffected layers are copied into the new model. Furthermore, if a convolutional layer is pruned, the weights of the subsequent batch normalization layer are also removed. To get the baseline accuracies for each network, we train each model from scratch and follow the same pre-processing and hyper-parameters as ResNet (He et al. (2016)). For retraining, we use a constant learning rate 0.001 and retrain 40 epochs for CIFAR-10 and 20 epochs for ImageNet, which represents one-fourth of the original training epochs. | 1608.08710#19 | 1608.08710#21 | 1608.08710 | [
"1602.07360"
] |
1608.08710#21 | Pruning Filters for Efficient ConvNets | Past work has reported up to 3× original training times to retrain pruned networks (Han et al. (2015)). Table 1: Overall results. The best test/validation accuracy during the retraining process is reported. Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate the difficulty of training a network with a small capacity.

| Model | Error(%) | FLOP | Pruned % | Parameters | Pruned % |
|---|---|---|---|---|---|
| VGG-16 | 6.75 | 3.13×10^8 | | 1.5×10^7 | |
| VGG-16-pruned-A | 6.60 | 2.06×10^8 | 34.2% | 5.4×10^6 | 64.0% |
| VGG-16-pruned-A scratch-train | 6.88 | | | | |
| ResNet-56 | 6.96 | 1.25×10^8 | | 8.5×10^5 | |
| ResNet-56-pruned-A | 6.90 | 1.12×10^8 | 10.4% | 7.7×10^5 | 9.4% |
| ResNet-56-pruned-B | 6.94 | 9.09×10^7 | 27.6% | 7.3×10^5 | 13.7% |
| ResNet-56-pruned-B scratch-train | 8.69 | | | | |
| ResNet-110 | 6.47 | 2.53×10^8 | | 1.72×10^6 | |
| ResNet-110-pruned-A | 6.45 | 2.13×10^8 | 15.9% | 1.68×10^6 | 2.3% |
| ResNet-110-pruned-B | 6.70 | 1.55×10^8 | 38.6% | 1.16×10^6 | 32.4% |
| ResNet-110-pruned-B scratch-train | 7.06 | | | | |
| ResNet-34 | 26.77 | 3.64×10^9 | | 2.16×10^7 | |
| ResNet-34-pruned-A | 27.44 | 3.08×10^9 | 15.5% | 1.99×10^7 | 7.6% |
| ResNet-34-pruned-B | 27.83 | 2.76×10^9 | 24.2% | 1.93×10^7 | 10.8% |
| ResNet-34-pruned-C | 27.52 | 3.37×10^9 | 7.5% | 2.01×10^7 | 7.2% |

| 1608.08710#20 | 1608.08710#22 | 1608.08710 | [
"1602.07360"
] |
1608.08710#22 | Pruning Filters for Efficient ConvNets | # 4.1 VGG-16 ON CIFAR-10 VGG-16 is a high-capacity network originally designed for the ImageNet dataset (Simonyan & Zisserman (2015)). Recently, Zagoruyko (2015) applies a slightly modiï¬ ed version of the model on CIFAR-10 and achieves state of the art results. As shown in Table 2, VGG-16 on CIFAR-10 consists of 13 convolutional layers and 2 fully connected layers, in which the fully connected layers do not occupy large portions of parameters due to the small input size and less hidden units. We use the model described in Zagoruyko (2015) but add Batch Normalization (Ioffe & Szegedy (2015)) | 1608.08710#21 | 1608.08710#23 | 1608.08710 | [
"1602.07360"
] |
1608.08710#23 | Pruning Filters for Efficient ConvNets | Table 2: VGG-16 on CIFAR-10 and the pruned model. The last two columns show the number of feature maps and the reduced percentage of FLOP from the pruned model.

| layer type | wi × hi | #Maps | FLOP | #Params | #Maps (pruned) | FLOP% |
|---|---|---|---|---|---|---|
| Conv_1 | 32 × 32 | 64 | 1.8E+06 | 1.7E+03 | 32 | 50% |
| Conv_2 | 32 × 32 | 64 | 3.8E+07 | 3.7E+04 | 64 | 50% |
| Conv_3 | 16 × 16 | 128 | 1.9E+07 | 7.4E+04 | 128 | 0% |
| Conv_4 | 16 × 16 | 128 | 3.8E+07 | 1.5E+05 | 128 | 0% |
| Conv_5 | 8 × 8 | 256 | 1.9E+07 | 2.9E+05 | 256 | 0% |
| Conv_6 | 8 × 8 | 256 | 3.8E+07 | 5.9E+05 | 256 | 0% |
| Conv_7 | 8 × 8 | 256 | 3.8E+07 | 5.9E+05 | 256 | 0% |
| Conv_8 | 4 × 4 | 512 | 1.9E+07 | 1.2E+06 | 256 | 50% |
| Conv_9 | 4 × 4 | 512 | 3.8E+07 | 2.4E+06 | 256 | 75% |
| Conv_10 | 4 × 4 | 512 | 3.8E+07 | 2.4E+06 | 256 | 75% |
| Conv_11 | 2 × 2 | 512 | 9.4E+06 | 2.4E+06 | 256 | 75% |
| Conv_12 | 2 × 2 | 512 | 9.4E+06 | 2.4E+06 | 256 | 75% |
| Conv_13 | 2 × 2 | 512 | 9.4E+06 | 2.4E+06 | 256 | 75% |
| Linear | 1 | 512 | 2.6E+05 | 2.6E+05 | 512 | 50% |
| Linear | 1 | 10 | 5.1E+03 | 5.1E+03 | 10 | 0% |
| Total | | | 3.1E+08 | 1.5E+07 | | 34% |

| 1608.08710#22 | 1608.08710#24 | 1608.08710 | [
"1602.07360"
] |
1608.08710#24 | Pruning Filters for Efficient ConvNets | layer after each convolutional layer and the first linear layer, without using Dropout (Srivastava et al. (2014)). Note that when the last convolutional layer is pruned, the input to the linear layer is changed and the connections are also removed. As shown in Figure 2(b), each of the convolutional layers with 512 feature maps can drop at least 60% of filters without affecting the accuracy. Figure 2(c) shows that with retraining, almost 90% of the filters of these layers can be safely removed. One possible explanation is that these filters operate on 4 × 4 or 2 × 2 feature maps, which may have no meaningful spatial connections in such small dimensions. | 1608.08710#23 | 1608.08710#25 | 1608.08710 | [
"1602.07360"
] |
1608.08710#25 | Pruning Filters for Efficient ConvNets | For instance, ResNets for CIFAR-10 do not perform any convolutions for feature maps below 8 à 8 dimensions. Unlike previous work (Zeiler & Fergus (2014); Han et al. (2015)), we observe that the ï¬ rst layer is robust to pruning as compared to the next few layers. This is possible for a simple dataset like CIFAR-10, on which the model does not learn as much useful ï¬ lters as on ImageNet (as shown in Figure. 5). Even when 80% of the ï¬ lters from the ï¬ rst layer are pruned, the number of remaining ï¬ lters (12) is still larger than the number of raw input channels. However, when removing 80% ï¬ lters from the second layer, the layer corresponds to a 64 to 12 mapping, which may lose signiï¬ cant information from previous layers, thereby hurting the accuracy. With 50% of the ï¬ lters being pruned in layer 1 and from 8 to 13, we achieve 34% FLOP reduction for the same accuracy. Figure 5: Visualization of filters in the first convolutional layer of VGG-16 trained on CIFAR-10. Filters are ranked by ¢;-norm. 4.2 RESNET-56/110 ON CIFAR-10 ResNets for CIFAR-10 have three stages of residual blocks for feature maps with sizes of 32 à 32, 16 à 16 and 8 à 8. Each stage has the same number of residual blocks. When the number of feature maps increases, the shortcut layer provides an identity mapping with an additional zero padding for the increased dimensions. Since there is no projection mapping for choosing the identity feature maps, we only consider pruning the ï¬ rst layer of the residual block. As shown in Figure 6, most of the layers are robust to pruning. For ResNet-110, pruning some single layers without retraining even | 1608.08710#24 | 1608.08710#26 | 1608.08710 | [
"1602.07360"
] |
1608.08710#26 | Pruning Filters for Efficient ConvNets | [Figure 6 (plots), top row: per-layer pruning sensitivity for ResNet-56 (conv_2 through conv_54, grouped by stage); bottom row: the same for ResNet-110 (conv_2 through conv_108). Each panel shows test accuracy versus the percentage of filters pruned away; the caption appears two chunks below.] | 1608.08710#25 | 1608.08710#27 | 1608.08710 | [
"1602.07360"
] |
1608.08710#27 | Pruning Filters for Efficient ConvNets | [Figure 6 (plots, continued): a second copy of the ResNet-56/110 sensitivity panels with their per-layer legends and axis labels; no additional information beyond the previous chunk.] | 1608.08710#26 | 1608.08710#28 | 1608.08710 | [
"1602.07360"
] |
1608.08710#28 | Pruning Filters for Efficient ConvNets | improves the performance. Figure 6: Sensitivity to pruning for the first layer of each residual block of ResNet-56/110. In addition, we find that layers that are sensitive to pruning (layers 20, 38 and 54 for ResNet-56, layers 36, 38 and 74 for ResNet-110) lie at the residual blocks close to the layers where the number of feature maps changes, e.g., the first and the last residual blocks for each stage. We believe this happens because the precise residual errors are necessary for the newly added empty feature maps. The retraining performance can be improved by skipping these sensitive layers. As shown in Table 1, ResNet-56-pruned-A improves the performance by pruning 10% filters while skipping the sensitive layers 16, 20, 38 and 54. In addition, we find that deeper layers are more sensitive to pruning than layers in the earlier stages of the network. Hence, we use a different pruning rate for each stage. We use p_i to denote the pruning rate for layers in the ith stage. ResNet-56-pruned-B skips more layers (16, 18, 20, 34, 38, 54) and prunes layers with p1=60%, p2=30% and p3=10%. For ResNet-110, the first pruned model gets a slightly better result with p1=50% and layer 36 skipped. ResNet-110-pruned-B skips layers 36, 38, 74 and prunes with p1=50%, p2=40% and p3=30%. When there are more than two residual blocks at each stage, the middle residual blocks may be redundant and can be easily pruned. This might explain why ResNet-110 is easier to prune than ResNet-56. 4.3 RESNET-34 ON ILSVRC2012 ResNets for ImageNet have four stages of residual blocks for feature maps with sizes of 56 × | 1608.08710#27 | 1608.08710#29 | 1608.08710 | [
"1602.07360"
] |
1608.08710#29 | Pruning Filters for Efficient ConvNets | 56, 28 × 28, 14 × 14 and 7 × 7. ResNet-34 uses the projection shortcut when the feature maps are down-sampled. We first prune the first layer of each residual block. Figure 7 shows the sensitivity of the first layer of each residual block. Similar to ResNet-56/110, the first and the last residual blocks of each stage are more sensitive to pruning than the intermediate blocks (i.e., layers 2, 8, 14, 16, 26, 28, 30, 32). We skip those layers and prune the remaining layers at each stage equally (a small helper for building per-layer rates from stage-wise rates is sketched below). | 1608.08710#28 | 1608.08710#30 | 1608.08710 | [
"1602.07360"
] |
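A tiny helper illustrating the stage-wise scheme above: one pruning rate per stage, with the sensitive first and last blocks of each stage skipped. The dictionary layout, function name, and example values are ours.

```python
# Per-layer pruning rates from stage-wise rates, skipping sensitive layers.
# For ResNet-34 the skipped first-layer indices are 2, 8, 14, 16, 26, 28, 30, 32.
SKIP_RESNET34 = {2, 8, 14, 16, 26, 28, 30, 32}

def layer_pruning_rates(stage_of_layer, stage_rates, skip=SKIP_RESNET34):
    """stage_of_layer: {layer index: stage}; stage_rates: {stage: rate p_i}."""
    return {layer: 0.0 if layer in skip else stage_rates[stage]
            for layer, stage in stage_of_layer.items()}

# Example: the same 30% rate for two stages of first-layer convs.
rates = layer_pruning_rates({2: 1, 4: 1, 6: 1, 8: 2, 10: 2, 12: 2, 14: 2},
                            {1: 0.30, 2: 0.30})
```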
1608.08710#30 | Pruning Filters for Efficient ConvNets | In Table 1 we compare two conï¬ gurations of pruning percentages for the ï¬ rst three stages: (A) p1=30%, p2=30%, p3=30%; (B) p1=50%, p2=60%, p3=40%. Option-B provides 24% FLOP reduction with about 1% loss in accuracy. As seen in the pruning results for ResNet-50/110, we can predict that ResNet-34 is relatively more difï¬ cult to prune as compared to deeper ResNets. We also prune the identity shortcuts and the second convolutional layer of the residual blocks. As these layers have the same number of ï¬ lters, they are pruned equally. As shown in Figure 7(b), these layers are more sensitive to pruning than the ï¬ rst layers. With retraining, ResNet-34-pruned-C prunes the third stage with p3=20% and results in 7.5% FLOP reduction with 0.75% loss in accuracy. Therefore, pruning the ï¬ rst layer of the residual block is more effective at reducing the overall FLOP | 1608.08710#29 | 1608.08710#31 | 1608.08710 | [
"1602.07360"
] |
1608.08710#31 | Pruning Filters for Efficient ConvNets | [Figure 7(a) (plot): pruning sensitivity of the first layer of each ResNet-34 residual block, conv_2 (64 filters) through conv_32 (512 filters); accuracy versus the percentage of filters pruned away.] | 1608.08710#30 | 1608.08710#32 | 1608.08710 | [
"1602.07360"
] |
1608.08710#32 | Pruning Filters for Efficient ConvNets | [Figure 7(b) plot: ImageNet, ResNet-34, prune the second layer of the basic block; test accuracy vs. parameters pruned away (%), curves grouped by layer range.] (a) Pruning the first layer of residual blocks. (b) Pruning the second layer of residual blocks. Figure 7: Sensitivity to pruning for the residual blocks of ResNet-34. than pruning the second layer. | 1608.08710#31 | 1608.08710#33 | 1608.08710 | [
"1602.07360"
] |
1608.08710#33 | Pruning Filters for Efficient ConvNets | This finding also correlates with the bottleneck block design for deeper ResNets, which first reduces the dimension of input feature maps for the residual layer and then increases the dimension to match the identity mapping. # 4.4 COMPARISON WITH PRUNING RANDOM FILTERS AND LARGEST FILTERS We compare our approach with pruning random filters and largest filters. As shown in Figure 8, pruning the smallest filters outperforms pruning random filters for most of the layers at different pruning ratios. For example, smallest filter pruning has better accuracy than random filter pruning for all layers with the pruning ratio of 90%. The accuracy of pruning filters with the largest $\ell_1$-norms drops quickly as the pruning ratio increases, which indicates the importance of filters with larger $\ell_1$-norms. [Figure 8 plots: CIFAR-10, VGG-16; accuracy vs. filters pruned away (%) for the smallest-$\ell_1$-norm, random, and largest-$\ell_1$-norm strategies, one curve per layer conv_1 64 through conv_13 512.] | 1608.08710#32 | 1608.08710#34 | 1608.08710 | [
"1602.07360"
] |
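The three strategies compared in Figure 8 differ only in which filter indices are selected for removal. A hedged numpy sketch (helper names are our own):

```python
import numpy as np

def select_filters_to_prune(weights, ratio, strategy="smallest", seed=0):
    """Return indices of filters to remove from a conv weight tensor of shape
    (n_out, n_in, k, k), under the three strategies compared in Figure 8."""
    n_out = weights.shape[0]
    n_prune = int(round(ratio * n_out))
    l1 = np.abs(weights).reshape(n_out, -1).sum(axis=1)
    order = np.argsort(l1)                      # ascending l1-norm
    if strategy == "smallest":
        return order[:n_prune]
    if strategy == "largest":
        return order[-n_prune:]
    if strategy == "random":
        rng = np.random.default_rng(seed)
        return rng.permutation(n_out)[:n_prune]
    raise ValueError(strategy)

w = np.random.randn(64, 3, 3, 3)                # e.g. a first conv layer's weights
print(sorted(select_filters_to_prune(w, 0.1, "smallest")))
```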
1608.08710#34 | Pruning Filters for Efficient ConvNets | Figure 8: Comparison of three pruning methods for VGG-16 on CIFAR-10: pruning the smallest filters, pruning random filters and pruning the largest filters. In random filter pruning, the order of filters to be pruned is randomly permuted. # 4.5 COMPARISON WITH ACTIVATION-BASED FEATURE MAP PRUNING The activation-based feature map pruning method removes the feature maps with weak activation patterns and their corresponding filters and kernels (Polyak & Wolf, 2015), which needs sample data as input to determine which feature maps to prune. A feature map $\mathbf{x}_{i+1,j}$ | 1608.08710#33 | 1608.08710#35 | 1608.08710 | [
"1602.07360"
] |
1608.08710#35 | Pruning Filters for Efficient ConvNets | $\in \mathbb{R}^{h_{i+1} \times w_{i+1}}$ is generated by applying filter $\mathcal{F}_{i,j} \in \mathbb{R}^{n_i \times k \times k}$ to the feature maps of the previous layer $\mathbf{x}_i \in \mathbb{R}^{n_i \times h_i \times w_i}$, i.e., $\mathbf{x}_{i+1,j} = \mathcal{F}_{i,j} * \mathbf{x}_i$. Given $N$ randomly selected images $\{\mathbf{x}_1^n\}_{n=1}^{N}$ from the training set, the statistics of each feature map can be estimated with one epoch forward pass of the $N$ sampled data. Note that we calculate statistics on the feature maps generated from the convolution operations before batch normalization or non-linear activation. We compare our $\ell_1$-norm based filter pruning with feature map pruning using the following criteria: $\sigma_{\text{mean-mean}}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N} \text{mean}(\mathbf{x}_{i,j}^{n})$, $\sigma_{\text{mean-std}}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N} \text{std}(\mathbf{x}_{i,j}^{n})$, $\sigma_{\text{mean-}\ell_1}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N} \|\mathbf{x}_{i,j}^{n}\|_1$, $\sigma_{\text{mean-}\ell_2}(\mathbf{x}_{i,j}) = \frac{1}{N}\sum_{n=1}^{N} \|\mathbf{x}_{i,j}^{n}\|_2$ and | 1608.08710#34 | 1608.08710#36 | 1608.08710 | [
"1602.07360"
] |
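The passage above describes estimating per-feature-map statistics with a single forward pass over the N sampled images, taken from the convolution outputs before batch normalization and the non-linearity. A possible way to gather those activations in PyTorch is sketched below; it is our own illustration (the original experiments used Torch7), and it assumes BN and the activation are separate modules that follow each Conv2d.

```python
import torch
import torch.nn as nn

def collect_conv_outputs(model, loader, device="cpu"):
    """Run the sampled data through `model` once and record, per Conv2d layer,
    the l2-norm of every channel's raw (pre-BN, pre-activation) feature map."""
    records = {name: [] for name, m in model.named_modules()
               if isinstance(m, nn.Conv2d)}

    def make_hook(name):
        def hook(module, inputs, output):
            # output: (batch, channels, h, w); norm taken over the spatial extent
            records[name].append(output.detach().flatten(2).norm(dim=2))
        return hook

    handles = [m.register_forward_hook(make_hook(name))
               for name, m in model.named_modules() if isinstance(m, nn.Conv2d)]
    model.eval()
    with torch.no_grad():
        for images, _ in loader:
            model(images.to(device))
    for h in handles:
        h.remove()
    return {name: torch.cat(chunks, dim=0) for name, chunks in records.items()}
```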
1608.08710#36 | Pruning Filters for Efficient ConvNets | [Figure 9 plots: CIFAR-10, VGG-16; accuracy vs. fraction pruned away (%), one curve per layer, for panels (a) $\|\mathcal{F}_{i,j}\|_1$, (b) $\sigma_{\text{mean-mean}}$, (c) $\sigma_{\text{mean-std}}$, (d) $\sigma_{\text{mean-}\ell_1}$, | 1608.08710#35 | 1608.08710#37 | 1608.08710 | [
"1602.07360"
] |
1608.08710#37 | Pruning Filters for Efficient ConvNets | (e) $\sigma_{\text{mean-}\ell_2}$ and (f) $\sigma_{\text{var-}\ell_2}$.] | 1608.08710#36 | 1608.08710#38 | 1608.08710 | [
"1602.07360"
] |
1608.08710#38 | Pruning Filters for Efficient ConvNets | Figure 9: Comparison of activation-based feature map pruning for VGG-16 on CIFAR-10. | 1608.08710#37 | 1608.08710#39 | 1608.08710 | [
"1602.07360"
] |
1608.08710#39 | Pruning Filters for Efficient ConvNets | $\sigma_{\text{var-}\ell_2}(\mathbf{x}_{i,j}) = \text{var}(\{\|\mathbf{x}_{i,j}^{n}\|_2\}_{n=1}^{N})$, where mean, std and var are the standard statistics (average, standard deviation and variance) of the input. Here, $\sigma_{\text{var-}\ell_2}$ is the contribution variance of channel criterion proposed in Polyak & Wolf (2015), which is motivated by the intuition that an unimportant feature map has almost similar outputs for the whole training data and acts like an additional bias. The estimation of the criteria becomes more accurate when more sample data is used. Here we use the whole training set (N = 50,000 for CIFAR-10) to compute the statistics. The performance of feature map pruning with the above criteria for each layer is shown in Figure 9. Smallest filter pruning outperforms feature map pruning with the criteria $\sigma_{\text{mean-mean}}$, $\sigma_{\text{mean-}\ell_1}$, $\sigma_{\text{mean-}\ell_2}$ and $\sigma_{\text{var-}\ell_2}$. The $\sigma_{\text{mean-std}}$ criterion has better or similar performance to the $\ell_1$-norm up to a pruning ratio of 60%; however, its performance drops quickly after that, especially for layers conv_1, conv_2 and conv_3. We find the $\ell_1$-norm is a good heuristic for filter selection considering that it is data free. | 1608.08710#38 | 1608.08710#40 | 1608.08710 | [
"1602.07360"
] |
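Given per-channel activations gathered over the N sampled images, the five criteria defined above reduce to a few lines of numpy. This is an illustrative sketch under our own data layout — `acts` holds one pre-BN, pre-activation feature map per sampled image — not the authors' implementation.

```python
import numpy as np

def activation_criteria(acts):
    """acts: array of shape (N, h, w) holding feature map x_{i,j} for each of
    the N sampled images. Returns the five per-channel pruning criteria."""
    N = acts.shape[0]
    flat = acts.reshape(N, -1)
    l2 = np.linalg.norm(flat, axis=1)
    return {
        "mean_mean": flat.mean(axis=1).mean(),          # sigma_mean-mean
        "mean_std":  flat.std(axis=1).mean(),           # sigma_mean-std
        "mean_l1":   np.abs(flat).sum(axis=1).mean(),   # sigma_mean-l1
        "mean_l2":   l2.mean(),                         # sigma_mean-l2
        "var_l2":    l2.var(),                          # sigma_var-l2
    }

# Channels whose criterion values are smallest would be pruned together with
# their corresponding filters and kernels.
stats = activation_criteria(np.random.randn(100, 16, 16))
print(stats)
```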
1608.08710#40 | Pruning Filters for Efficient ConvNets | # 5 CONCLUSIONS Modern CNNs often have high capacity with large training and inference costs. In this paper we present a method to prune filters with relatively low weight magnitudes to produce CNNs with reduced computation costs without introducing irregular sparsity. It achieves about 30% reduction in FLOP for VGGNet (on CIFAR-10) and deep ResNets without significant loss in the original accuracy. Instead of pruning with specific layer-wise hyperparameters and time-consuming iterative retraining, we use a one-shot pruning and retraining strategy for simplicity and ease of implementation. By performing lesion studies on very deep CNNs, we identify layers that are robust or sensitive to pruning, which can be useful for further understanding and improving the architectures. | 1608.08710#39 | 1608.08710#41 | 1608.08710 | [
"1602.07360"
] |
1608.08710#41 | Pruning Filters for Efficient ConvNets | # ACKNOWLEDGMENTS The authors would like to thank the anonymous reviewers for their valuable feedback. # REFERENCES Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured Pruning of Deep Convolutional Neural Networks. arXiv preprint arXiv:1512.08571, 2015. Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: | 1608.08710#40 | 1608.08710#42 | 1608.08710 | [
"1602.07360"
] |
1608.08710#42 | Pruning Filters for Efficient ConvNets | A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011. Matthieu Courbariaux and Yoshua Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016. Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013. | 1608.08710#41 | 1608.08710#43 | 1608.08710 | [
"1602.07360"
] |
1608.08710#43 | Pruning Filters for Efficient ConvNets | Song Han, Jeff Pool, John Tran, and William Dally. Learning both Weights and Connections for Efficient Neural Network. In NIPS, 2015. Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. In ISCA, 2016a. Song Han, Huizi Mao, and William J Dally. | 1608.08710#42 | 1608.08710#44 | 1608.08710 | [
"1602.07360"
] |
1608.08710#44 | Pruning Filters for Efficient ConvNets | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In ICLR, 2016b. Babak Hassibi and David G Stork. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon. In NIPS, 1993. Kaiming He and Jian Sun. Convolutional Neural Networks at Constrained Time Cost. In CVPR, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016. | 1608.08710#43 | 1608.08710#45 | 1608.08710 | [
"1602.07360"
] |
1608.08710#45 | Pruning Filters for Efficient ConvNets | Forrest Iandola, Matthew Moskewicz, Khalid Ashraf, Song Han, William Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360, 2016. Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with Low-Rank Filters for Efficient Image Classification. In ICLR, 2016. Sergey Ioffe and Christian Szegedy. | 1608.08710#44 | 1608.08710#46 | 1608.08710 | [
"1602.07360"
] |
1608.08710#46 | Pruning Filters for Efficient ConvNets | Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 2015. Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012. | 1608.08710#45 | 1608.08710#47 | 1608.08710 | [
"1602.07360"
] |
1608.08710#47 | Pruning Filters for Efficient ConvNets | Andrew Lavin and Scott Gray. Fast Algorithms for Convolutional Neural Networks. In CVPR, 2016. Yann Le Cun, John S Denker, and Sara A Solla. Optimal Brain Damage. In NIPS, 1989. Vadim Lebedev and Victor Lempitsky. Fast Convnets Using Group-wise Brain Damage. In CVPR, 2016. Min Lin, Qiang Chen, and Shuicheng Yan. Network in Network. arXiv preprint arXiv:1312.4400, 2013. Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse Convolutional Neural Networks. In CVPR, 2015. | 1608.08710#46 | 1608.08710#48 | 1608.08710 | [
"1602.07360"
] |
1608.08710#48 | Pruning Filters for Efficient ConvNets | Zelda Mariet and Suvrit Sra. Diversity Networks. In ICLR, 2016. Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast Training of Convolutional Networks through FFTs. arXiv preprint arXiv:1312.5851, 2013. Adam Polyak and Lior Wolf. Channel-Level Acceleration of Deep Face Representations. IEEE Access, 2015. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. In ECCV, 2016. | 1608.08710#47 | 1608.08710#49 | 1608.08710 | [
"1602.07360"
] |
1608.08710#49 | Pruning Filters for Efficient ConvNets | Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015. Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR, 2015. | 1608.08710#48 | 1608.08710#50 | 1608.08710 | [
"1602.07360"
] |
1608.08710#50 | Pruning Filters for Efficient ConvNets | Suraj Srinivas and R Venkatesh Babu. Data-free Parameter Pruning for Deep Neural Networks. In BMVC, 2015. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 2014. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going Deeper with Convolutions. In CVPR, 2015a. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the Inception Architecture for Computer Vision. arXiv preprint arXiv:1512.00567, 2015b. Cheng Tai, Tong Xiao, Xiaogang Wang, and Weinan E. Convolutional neural networks with low-rank regularization. In ICLR, 2016. Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning Structured Sparsity in Deep Learning. In NIPS, 2016. Sergey Zagoruyko. 92.45% on CIFAR-10 in Torch. http://torch.ch/blog/2015/07/30/cifar.html, 2015. Matthew D Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. In ECCV, 2014. Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutional Networks for Classification and Detection. IEEE T-PAMI, 2015a. Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. | 1608.08710#49 | 1608.08710#51 | 1608.08710 | [
"1602.07360"
] |
1608.08710#51 | Pruning Filters for Efficient ConvNets | Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, 2015b. Hao Zhou, Jose Alvarez, and Fatih Porikli. Less Is More: Towards Compact CNNs. In ECCV, 2016. 6 APPENDIX 6.1 COMPARISON WITH $\ell_2$-NORM BASED FILTER PRUNING We compare the $\ell_1$-norm with the $\ell_2$-norm for filter pruning. As shown in Figure 10, the $\ell_2$-norm works slightly better than the $\ell_1$-norm for layer conv_2. There is no significant difference between the two norms for the other layers. [Figure 10 plots: CIFAR-10, VGG-16; accuracy vs. filters pruned away (%) when pruning the filters with the smallest $\ell_1$-norm and the smallest $\ell_2$-norm, one curve per layer conv_1 64 through conv_13 512. Panels: (a) $\|\mathcal{F}_{i,j}\|_1$ (b) $\|\mathcal{F}_{i,j}\|_2$.] | 1608.08710#50 | 1608.08710#52 | 1608.08710 | [
"1602.07360"
] |
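A quick way to see why the two norms behave so similarly in Figure 10 is to check how much the filter sets they select overlap. This check is our own addition, not an experiment from the paper:

```python
import numpy as np

def selected_sets_agree(weights, ratio):
    """Fraction of overlap between the filters the l1- and l2-norm criteria
    would prune from a (n_out, n_in, k, k) weight tensor."""
    n_out = weights.shape[0]
    n_prune = int(round(ratio * n_out))
    flat = weights.reshape(n_out, -1)
    by_l1 = set(np.argsort(np.abs(flat).sum(axis=1))[:n_prune])
    by_l2 = set(np.argsort(np.linalg.norm(flat, axis=1))[:n_prune])
    return len(by_l1 & by_l2) / max(n_prune, 1)

print(selected_sets_agree(np.random.randn(128, 64, 3, 3), 0.3))
```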
1608.08710#52 | Pruning Filters for Efficient ConvNets | Figure 10: Comparison of $\ell_1$-norm and $\ell_2$-norm based filter pruning for VGG-16 on CIFAR-10. 6.2 FLOP AND WALL-CLOCK TIME FLOP is a commonly used measure to compare the computation complexities of CNNs. It is easy to compute and can be done statically, which is independent of the underlying hardware and software implementations. | 1608.08710#51 | 1608.08710#53 | 1608.08710 | [
"1602.07360"
] |
1608.08710#53 | Pruning Filters for Efficient ConvNets | Since we physically prune the filters by creating a smaller model and then copy the weights, there are no masks or sparsity introduced to the original dense BLAS operations. Therefore the FLOP and wall-clock time of the pruned model is the same as creating a model with a smaller number of filters from scratch. We report the inference time of the original model and the pruned model on the test set of CIFAR-10 and the validation set of ILSVRC 2012, which contain 10,000 32 × 32 images and 50,000 224 × 224 images respectively. The ILSVRC 2012 dataset is used only for ResNet-34. The evaluation is conducted in Torch7 with a Titan X (Pascal) GPU and cuDNN v5.1, using a mini-batch size of 128. As shown in Table 3, the saved inference time is close to the FLOP reduction. Note that the FLOP number only considers the operations in the Conv and FC layers, while some calculations such as Batch Normalization and other overheads are not accounted. # Table 3: The reduction of FLOP and wall-clock time for inference (Model: FLOP / Pruned % / Time (s) / Saved %). VGG-16: 3.13 × 10^8 / – / 1.23 / –; VGG-16-pruned-A: 2.06 × 10^8 / 34.2% / 0.73 / 40.7%; ResNet-56: 1.25 × 10^8 / – / 1.31 / –; ResNet-56-pruned-B: 9.09 × 10^7 / 27.6% / 0.99 / 24.4%; ResNet-110: 2.53 × 10^8 / – / 2.38 / –; ResNet-110-pruned-B: 1.55 × 10^8 / 38.6% / 1.86 / 21.8%; ResNet-34: 3.64 × 10^9 / – / 36.02 / –; ResNet-34-pruned-B: 2.76 × 10^9 / 24.2% / 22.93 / 28.0%. | 1608.08710#52 | 1608.08710#54 | 1608.08710 | [
"1602.07360"
] |
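For reference, the FLOP numbers in Table 3 are of the order produced by a static count over the Conv layers. The sketch below counts one multiply-accumulate per output activation per input connection — a convention we assume here because it reproduces the 3.13 × 10^8 figure quoted for VGG-16 on 32 × 32 CIFAR-10 inputs; the channel widths and feature-map sizes are the standard VGG-16-on-CIFAR configuration.

```python
def conv_flops(n_in, n_out, k, h_out, w_out):
    # one multiply-accumulate per (output position, output channel, input connection)
    return n_out * h_out * w_out * n_in * k * k

def fc_flops(n_in, n_out):
    return n_in * n_out

# VGG-16 on CIFAR-10 (32x32 inputs), conv layers only; output sizes follow the
# usual 2x2 pooling after conv 2, 4, 7, 10 and 13.
cfg = [(3, 64, 32), (64, 64, 32), (64, 128, 16), (128, 128, 16),
       (128, 256, 8), (256, 256, 8), (256, 256, 8),
       (256, 512, 4), (512, 512, 4), (512, 512, 4),
       (512, 512, 2), (512, 512, 2), (512, 512, 2)]
total = sum(conv_flops(ci, co, 3, s, s) for ci, co, s in cfg)
print(f"{total:.2e}")  # about 3.13e+08, matching the VGG-16 row of Table 3
```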
1608.08710#54 | Pruning Filters for Efficient ConvNets | 13 | 1608.08710#53 | 1608.08710 | [
"1602.07360"
] |
|
1608.08614#0 | What makes ImageNet good for transfer learning? | 6 1 0 2 c e D 0 1 ] V C . s c [ 2 v 4 1 6 8 0 . 8 0 6 1 : v i X r a # What makes ImageNet good for transfer learning? # Pulkit Agrawal Berkeley Artiï¬ cial Intelligence Research (BAIR) Laboratory UC Berkeley {minyoung,pulkitag,aaefros}@berkeley.edu # Abstract The tremendous success of ImageNet-trained deep fea- tures on a wide range of transfer tasks raises the question: what is it about the ImageNet dataset that makes the learnt features as good as they are? This work provides an em- pirical investigation into the various facets of this question, such as, looking at the importance of the amount of exam- ples, number of classes, balance between images-per-class and classes, and the role of ï¬ ne and coarse grained recog- nition. We pre-train CNN features on various subsets of the ImageNet dataset and evaluate transfer performance on a variety of standard vision tasks. | 1608.08614#1 | 1608.08614 | [
"1507.06550"
] |
|
1608.08614#1 | What makes ImageNet good for transfer learning? | Our overall ï¬ ndings sug- gest that most changes in the choice of pre-training data long thought to be critical, do not signiï¬ cantly affect trans- fer performance. # 1. Introduction the dataset (1.2 million labeled images) that forces the rep- resentation to be general. Others argue that it is the large number of distinct object classes (1000), which forces the network to learn a hierarchy of generalizable features. Yet others believe that the secret sauce is not just the large num- ber of classes, but the fact that many of these classes are visually similar (e.g. many different breeds of dogs), turn- ing this into a ï¬ ne-grained recognition task and pushing the representation to â work harderâ . | 1608.08614#0 | 1608.08614#2 | 1608.08614 | [
"1507.06550"
] |
1608.08614#2 | What makes ImageNet good for transfer learning? | But, while almost every- one in computer vision seems to have their own opinion on this hot topic, little empirical evidence has been produced so far. In this work, we systematically investigate which as- pects of the ImageNet task are most critical for learning good general-purpose features. We evaluate the features by ï¬ ne-tuning on three tasks: object detection on PASCAL- VOC 2007 dataset (PASCAL-DET), action classiï¬ cation on PASCAL-VOC 2012 dataset (PASCAL-ACT-CLS) and scene classiï¬ cation on the SUN dataset (SUN-CLS); see Section 3 for more details. It has become increasingly common within the com- puter vision community to treat image classiï¬ cation on Im- ageNet [35] not as an end in itself, but rather as a â pre- text taskâ | 1608.08614#1 | 1608.08614#3 | 1608.08614 | [
"1507.06550"
] |
1608.08614#3 | What makes ImageNet good for transfer learning? | for training deep convolutional neural networks (CNNs [25, 22]) to learn good general-purpose features. This practice of ï¬ rst training a CNN to perform image clas- siï¬ cation on ImageNet (i.e. pre-training) and then adapting these features for a new target task (i.e. ï¬ ne-tuning) has be- come the de facto standard for solving a wide range of com- puter vision problems. Using ImageNet pre-trained CNN features, impressive results have been obtained on several image classiï¬ cation datasets [10, 33], as well as object de- tection [12, 37], action recognition [38], human pose esti- mation [6], image segmentation [7], optical ï¬ ow [42], im- age captioning [9, 19] and others [24]. Given the success of ImageNet pre-trained CNN fea- tures, it is only natural to ask: what is it about the ImageNet dataset that makes the learnt features as good as they are? One school of thought believes that it is the sheer size of The paper is organized as a set of experiments answering a list of key questions about feature learning with ImageNet. The following is a summary of our main ï¬ ndings: 1. | 1608.08614#2 | 1608.08614#4 | 1608.08614 | [
"1507.06550"
] |
1608.08614#4 | What makes ImageNet good for transfer learning? | How many pre-training ImageNet examples are sufï¬ cient for transfer learning? Pre-training with only half the Im- ageNet data (500 images per class instead of 1000) results in only a small drop in transfer learning performance (1.5 mAP drop on PASCAL-DET). This drop is much smaller than the drop on the ImageNet classiï¬ cation task itself. See Section 4 and Figure 1 for details. 2. How many pre-training ImageNet classes are sufï¬ cient for transfer learning? Pre-training with an order of mag- nitude fewer classes (127 classes instead of 1000) results in only a small drop in transfer learning performance (2.8 mAP drop on PASCAL-DET). Curiously, we also found that for some transfer tasks, pre-training with fewer classes leads to better performance. | 1608.08614#3 | 1608.08614#5 | 1608.08614 | [
"1507.06550"
] |
1608.08614#5 | What makes ImageNet good for transfer learning? | See Section 5.1 and Figure 2 for details. 1 S a x 2 0.6 0.6 9 & < 3 05 05 2 2 c & 2 2 04 04 2 E â ficati % = -® SUN - Classification & > 03 -@ PASCAL - Object Detection 9-3 2% © - - 5 02 =® PASCAL - Action Recognition 02 5 g -@ |mageNet - Classification Z < 4 01 01 6 piu] ov 3 0 200 400 600 800 1000 2 Number of Pretraining Images Per ImageNet Class Figure 1: Change in transfer task performance of a CNN pre-trained with varying number of images per ImageNet class. The left y-axis is the mean class accuracy used for SUN and ImageNet CLS. The right y-axis measures mAP for PASCAL DET and ACTION-CLS. The number of examples per class are reduced by random sam- pling. Accuracy on the ImageNet classiï¬ cation task increases faster as compared to performance on transfer tasks. 3. How important is ï¬ | 1608.08614#4 | 1608.08614#6 | 1608.08614 | [
"1507.06550"
] |
1608.08614#6 | What makes ImageNet good for transfer learning? | ne-grained recognition for learning good features for transfer learning? Features pre-trained with a subset of ImageNet classes that do not require ï¬ ne- grained discrimination still demonstrate good transfer per- formance. See Section 5.2 and Figure 2 for details. 4. Does pre-training on coarse classes produce features ca- pable of ï¬ ne-grained recognition (and vice versa) on Ima- geNet itself? We found that a CNN trained to classify only between the 127 coarse ImageNet classes produces fea- tures capable of telling apart ï¬ ne-grained ImageNet classes whose labels it has never seen in training (section 5.3). Likewise, a CNN trained to classify the 1000 ImageNet classes is able to distinguish between unseen coarse-level classes higher up in the WordNet hierarchy (section 5.4). | 1608.08614#5 | 1608.08614#7 | 1608.08614 | [
"1507.06550"
] |
1608.08614#7 | What makes ImageNet good for transfer learning? | 5. Given the same budget of pre-training images, should we have more classes or more images per class? Training with fewer classes but more images per class performs slightly better at transfer tasks than training with more classes but fewer images per class. See Section 5.5 and Table 2 for details. 6. Is more data always helpful? We found that training with 771 ImageNet classes (out of 1000) that exclude all PAS- CAL VOC classes, achieves nearly the same performance on PASCAL-DET as training on complete ImageNet. | 1608.08614#6 | 1608.08614#8 | 1608.08614 | [
"1507.06550"
] |
1608.08614#8 | What makes ImageNet good for transfer learning? | Fur- ther experiments conï¬ rm that blindly adding more training data does not always lead to better performance and can sometimes hurt performance. See Section 6, and Table 9 for more details. 2 0.6 0.5 0.4 Class Accuracy ( ImageNet & SUN ) Mean Average Precision ( PASCAL ) 0.3 = SUN - Classification 0.3 =@ PASCAL - Object Detection 0.2 -® PASCAL - Action Recognition 9-2 0.1 -@ |mageNet - Classification 0.1 0 200 400 600 800 1000 Number of Pretraining ImageNet Classes Figure 2: Change in transfer task performance with varying number of pre-training ImageNet classes. The number of ImageNet classes are varied using the technique described in Section 5.1. With only 486 pre-training classes, transfer performances are unaffected and only a small drop is observed when only 79 classes are used for pre- training. The ImageNet classiï¬ cation performance is measured by ï¬ ntetuning the last layer to the original 1000-way classiï¬ cation. # 2. Related Work | 1608.08614#7 | 1608.08614#9 | 1608.08614 | [
"1507.06550"
] |
1608.08614#9 | What makes ImageNet good for transfer learning? | A number of papers have studied transfer learning in CNNs, including the various factors that affect pre-training and ï¬ ne-tuning. For example, the question of whether pre- training should be terminated early to prevent over-ï¬ tting and what layers should be used for transfer learning was studied by [2, 44]. A thorough investigation of good archi- tectural choices for transfer learning was conducted by [3], while [26] propose an approach to ï¬ ne-tuning for new tasks without â forgettingâ the old ones. In contrast to these works, we use a ï¬ xed ï¬ ne-tuning pr | 1608.08614#8 | 1608.08614#10 | 1608.08614 | [
"1507.06550"
] |
1608.08614#10 | What makes ImageNet good for transfer learning? | One central downside of supervised pre-training is that large quantity of expensive manually-supervised training data is required. The possibility of using large amounts of unlabelled data for feature learning has therefore been very attractive. Numerous methods for learning features by optimizing some auxiliary criterion of the data itself have been proposed. The most well-known such criteria are image reconstruction [5, 36, 29, 27, 32, 20] (see [4] for a comprehensive overview) and feature slowness [43, 14]. Unfortunately, features learned using these methods turned out not to be competitive with those obtained from super- vised ImageNet pre-training [31]. To try and force better feature generalization, more recent â self-supervisedâ meth- ods use more difï¬ cult data prediction auxiliary tasks in an effort to make the CNNs â work harderâ . | 1608.08614#9 | 1608.08614#11 | 1608.08614 | [
"1507.06550"
] |
1608.08614#11 | What makes ImageNet good for transfer learning? | Attempted self- supervised tasks include predictions of ego-motion [1, 16], spatial context [8, 31, 28], temporal context [41], and even color [45, 23] and sound [30]. While features learned using these methods often come close to ImageNet performance, to date, none have been able to beat it. ) I ar Label set 1 Original label set Label set 2 Figure 3: An illustration of the bottom up procedure used to con- struct different label sets using the WordNet tree. Each node of the tree represents a class and the leaf nodes are shown in red. Differ- ent label sets are iteratively constructed by clustering together all the leaf nodes with a common parent. In each iteration, only leaf nodes are clustered. This procedure results into a sequence of label sets for 1.2M images, where each consequent set contains labels coarser than the previous one. Because the WordNet tree is im- balanced, even after multiple iterations, label sets contain some classes that are present in the 1000 way ImageNet challenge. A reasonable middle ground between the expensive, fully-supervised pre-training and free unsupervised pre- training is to use weak supervision. For example, [18] use the YFCC100M dataset of 100 million Flickr images la- beled with noisy user tags as pre-training instead of Ima- geNet. But yet again, even though YFCC100M is almost two orders of magnitude larger than ImageNet, somewhat surprisingly, the resulting features do not appear to give any substantial boost over these pre-trained on ImageNet. Overall, despite keen interest in this problem, alterna- tive methods for learning general-purpose deep features have not managed to outperform ImageNet-supervised pre- training on transfer tasks. The goal of this work is to try and understand what is the secret to ImageNetâ | 1608.08614#10 | 1608.08614#12 | 1608.08614 | [
"1507.06550"
] |
1608.08614#12 | What makes ImageNet good for transfer learning? | s continuing success. # 3. Experimental Setup The process of using supervised learning to initialize CNN parameters using the task of ImageNet classiï¬ cation is referred to as pre-training. The process of adapting pre- trained CNN to continuously train on a target dataset is referred to as ï¬ netuning. All of our experiments use the Caffe [17] implementation of the a single network architec- ture proposed by Krizhevsky et al. [22]. We refer to this architecture as AlexNet. We closely follow the experimental setup of Agrawal et al. [2] for evaluating the generalization of pre-trained features on three transfer tasks: PASCAL VOC 2007 ob- ject detection (PASCAL-DET), PASCAL VOC 2012 action recognition (PASCAL-ACT-CLS) and scene classiï¬ cation on SUN dataset (SUN-CLS). â ¢ For PASCAL-DET, we used the PASCAL VOC 2007 train/val for ï¬ | 1608.08614#11 | 1608.08614#13 | 1608.08614 | [
"1507.06550"
] |
1608.08614#13 | What makes ImageNet good for transfer learning? | netuning using the experimental setup and 3 Pre-trained Dataset Original 127 Classes Random PASCAL 58.3 55.5 41.3 [21] SUN 52.2 48.7 35.7 [2] Table 1: The transfer performance of a network pre-trained us- ing 127 (coarse) classes obtained after top-down clustering of the WordNet tree is comparable to a transfer performance after ï¬ ne- tuning on all 1000 ImageNet classes. This indicates that ï¬ ne- grained recognition is not necessary for learning good transferable features. code provided by Faster-RCNN [34] and report perfor- mance on the test set. Finetuning on PASCAL-DET was performed by adapting the pre-trained convolution layers of AlexNet. The model was trained for 70K iterations using stochastic gradient descent (SGD), with an initial learning rate of 0.001 with a reduction by a factor of 10 at 40K iteration. | 1608.08614#12 | 1608.08614#14 | 1608.08614 | [
"1507.06550"
] |
1608.08614#14 | What makes ImageNet good for transfer learning? | â ¢ For PASCAL-ACT-CLS, we used PASCAL VOC 2012 train/val for ï¬ netuning and testing using the experimen- tal setup and code provided by R*CNN [13]. The ï¬ ne- tuning process for PASCAL-ACT-CLS mimics the pro- cedure described for PASCAL-DET. â ¢ For SUN-CLS we used the same train/val/test splits as used by [2]. Finetuning on SUN was performed by ï¬ rst replacing the FC-8 layer in the AlexNet model with a ran- domly initialized, and fully connected layer with 397 out- put units. Finetuning was performed for 50K iterations using SGD with an initial learning rate of 0.001 which was reduced by a factor of 10 every 20K iterations. Faster-RCNN and R*CNN are known to have variance across training runs; we therefore run it three times and re- port the mean ± standard deviation. On the other hand, [2], reports little variance between runs on SUN-CLS so we re- port our result using a single run. In some experiments we pre-train on ImageNet using a different number of images per class. The model with 1000 images/class uses the original ImageNet ILSVRC 2012 training set. Models with N images/class for N < 1000 are trained by drawing a random sample of N images from all images of that class made available as part of the ImageNet training set. | 1608.08614#13 | 1608.08614#15 | 1608.08614 | [
"1507.06550"
] |
1608.08614#15 | What makes ImageNet good for transfer learning? | # 4. How does the amount of pre-training data affect transfer performance? For answering this question, we trained 5 different AlexNet models from scratch using 50, 125, 250, 500 and 1000 images per each of the 1000 ImageNet classes using the procedure described in Section 3. The variation in per- formance with amount of pre-training data when these mod- els are ï¬ netuned for PASCAL-DET, PASCAL-ACT-CLS | 1608.08614#14 | 1608.08614#16 | 1608.08614 | [
"1507.06550"
] |
1608.08614#16 | What makes ImageNet good for transfer learning? | â ¢ Baseline Accuracy So Fo Top 1 Nearest Neighbors Accuracy N 918 Classes 753 Classes 486 Classes 127 Classes 79 Classes 9 Classes (104) (303) (620) (979) (1000) (1000) ° Random (1000) 2 8 ® Induction Accuracy LL mS ms Soe & Top 5 Nearest Neighbors Accuracy 6 â ¢ Baseline Accuracy ] | â ¢ Induction Accuracy 918 Classes 753 Classes 486 Classes 127 Classes 79Classes 9Classes Random (104) (303) (620) (979) (1000) (1000) (1000) () â ¢ Baseline Accuracy So Fo Top 1 Nearest Neighbors Accuracy N 918 Classes 753 Classes 486 Classes 127 Classes 79 Classes 9 Classes (104) (303) (620) (979) (1000) (1000) ° Random (1000) ® Induction Accuracy LL 2 8 mS ms Soe & Top 5 Nearest Neighbors Accuracy 6 â ¢ Baseline Accuracy ] | â ¢ Induction Accuracy 918 Classes 753 Classes 486 Classes 127 Classes 79Classes 9Classes Random (104) (303) (620) (979) (1000) (1000) (1000) () Figure 4: Does a CNN trained for discriminating between coarse classes learns a feature embedding capable of distinguishing between ï¬ ne classes? We quantiï¬ ed this by measuring the induction accuracy deï¬ ned as following: after training a feature embedding for a particular set of classes (set A), the induction accuracy is the nearest neighbor (top-1 and top-5) classiï¬ cation accuracy measured in the FC8 feature space of the subset of 1000 ImageNet classes not present in set A. The syntax on the x-axis A Classes(B) indicates that the network was trained with A classes and the induction accuracy was measured on B classes. The baseline accuracy is the accuracy on B classes when the CNN was trained for all 1000 classes. The margin between the baseline and the induction accuracy indicates a drop in the networkâ s ability to distinguish ï¬ ne classes when being trained on coarse classes. The results show that features learnt by pre-training on just 127 classes still lead to fairly good induction. | 1608.08614#15 | 1608.08614#17 | 1608.08614 | [
"1507.06550"
] |
1608.08614#17 | What makes ImageNet good for transfer learning? | and SUN-CLS is shown in Figure 1. For PASCAL-DET, the mean average precision (mAP) for CNNs with 1000, 500 and 250 images/class is found to be 58.3, 57.0 and 54.6. A similar trend is observed for PASCAL-ACT-CLS and SUN- CLS. These results indicate that using half the amount of pre-training data leads to only a marginal reduction in per- formance on transfer tasks. It is important to note that the performance on the ImageNet classiï¬ cation task (the pre- training task) steadily increases with the amount of training data, whereas on transfer tasks, the performance increase with respect to additional pre-training data is signiï¬ cantly slower. This suggests that while adding additional exam- ples to ImageNet classes will improve the ImageNet per- formance, it has diminishing return for transfer task perfor- mance. # 5. How does the taxonomy of the pre-training task affect transfer performance? In the previous section we investigated how varying number of pre-training images per class effects the perfor- mance in transfer tasks. | 1608.08614#16 | 1608.08614#18 | 1608.08614 | [
"1507.06550"
] |
1608.08614#18 | What makes ImageNet good for transfer learning? | Here we investigate the ï¬ ip side: keeping the amount of data constant while changing the nomenclature of training labels. # 5.1. The effect of number of pre-training classes on transfer performance down clustering). Using bottom up clustering, 18 possible taxonomies can be generated. Among these, we chose 5 sets of labels constituting 918, 753, 486, 79 and 9 classes respectively. Using top-down clustering only 3 label sets of 127, 10 and 2 can be generated, and we used the one with 127 classes. For studying the effect of number of pre- training classes on transfer performance, we trained sepa- rate AlexNet CNNs from scratch using these label sets. Figure 2 shows the effect of number of pre-training classes obtained using bottom up clustering of WordNet tree on transfer performance. We also include the performance of these different networks on the Imagenet classiï¬ cation task itself after ï¬ netuning only the last layer to distinguish between all the 1000 classes. The results show that increase in performance on transfer tasks is signiï¬ cantly slower with increase in number of classes as compared to performance on Imagenet itself. Using only 486 classes results in a per- formance drop of 1.7 mAP for PASCAL-DET, 0.8% accu- racy for SUN-CLS and a boost of 0.6 mAP for PASCAL- ACT-CLS. Table 1 shows the transfer performance after pre-training with 127 classes obtained from top down clus- tering. The results from this table and the ï¬ gure indicate that only diminishing returns in transfer performance are observed when more than 127 classes are used. Our results also indicate that making the ImageNet classes ï¬ ner will not help improve transfer performance. The 1000 classes of the ImageNet challenge [35] are de- rived from leaves of the WordNet tree [11]. Using this tree, it is possible to generate different class taxonomies while keeping the total number of images constant. One can gen- erate taxonomies in two ways: (1) bottom up clustering, wherein the leaf nodes belonging to a common parent are iteratively clustered together (see Figure 3), or (2) by ï¬ | 1608.08614#17 | 1608.08614#19 | 1608.08614 | [
"1507.06550"
] |
1608.08614#19 | What makes ImageNet good for transfer learning? | x- top ing the distance of the nodes from the root node (i.e. It can be argued that the PASCAL task requires discrim- ination between only 20 classes and therefore pre-training with only 127 classes should not lead to substantial reduc- tion in performance. However, the trend also holds true for SUN-CLS that requires discrimination between 397 classes. These two results taken together suggest that although train- ing with a large number of classes is beneï¬ cial, diminishing returns are observed beyond using 127 distinct classes for | 1608.08614#18 | 1608.08614#20 | 1608.08614 | [
"1507.06550"
] |
1608.08614#20 | What makes ImageNet good for transfer learning? | 4 Induction < 2 oO =) uel £ Figure 5: Can feature embeddings obtained by training on coarse classes be able to distinguish ï¬ ne classes they were never trained on? E.g. by training on monkeys, can the network pick out macaques? Here we look at the FC7 nearest neighbors (NN) of two randomly sampled images: a macaque (left column) and a giant schnauzer (right column), with each row showing feature embeddings trained with different number of classes (from ï¬ | 1608.08614#19 | 1608.08614#21 | 1608.08614 | [
"1507.06550"
] |
1608.08614#21 | What makes ImageNet good for transfer learning? | ne to coarse). The row(s) above the dotted line indicate that the image class (i.e. macaque/giant schnauzer) was one of the training classes, whereas in rows below the image class was not present in the training set. Images in green indicate that the NN image belongs to the correct ï¬ ne class (i.e. either macaque or giant schnauzer); orange indicates the correct coarse class (based on the WordNet hierarchy) but incorrect ï¬ ne class; red indicated incorrect coarse class. All green images below the dotted line indicate instances of correct ï¬ ne-grain nearest neighbor retrieval for features that were never trained on that class. # pre-training. Furthermore, for PASCAL-ACT-CLS and SUN-CLS, ï¬ netuning on CNNs pre-trained with class set sizes of 918, and 753 actually results in better performance than using all 1000 classes. This may indicate that having too many classes for pre-training works against learning good gener- alizable features. Hence, when generating a dataset, one should be attentive of the nomenclature of the classes. # 5.2. Is ï¬ | 1608.08614#20 | 1608.08614#22 | 1608.08614 | [
"1507.06550"
] |
1608.08614#22 | What makes ImageNet good for transfer learning? | ne-grain recognition necessary for learning transferable features? # 5.3. Does training with coarse classes induce fea- tures relevant for ï¬ ne-grained recognition? Earlier, we have shown that the features learned on the 127 coarse classes perform almost as well on our transfer tasks as the full set of 1000 ImageNet classes. Here we will probe this further by asking a different question: is the feature embedding induced by the coarse class classiï¬ ca- tion task capable of separating the ï¬ ne labels of ImageNet (which it never saw at training)? ImageNet challenge requires a classiï¬ er to distinguish between 1000 classes, some of which are very ï¬ ne-grained, such as different breeds of dogs and cats. Indeed, most hu- mans do not perform well on ImageNet unless speciï¬ cally trained [35], and yet are easily able to perform most every- day visual tasks. This raises the question: is ï¬ ne-grained recognition necessary for CNN models to learn good fea- ture representations, or is coarse-grained object recognition (e.g. just distinguishing cats from dogs) is sufï¬ cient? To investigate this, we used top-1 and top-5 nearest neighbors in the FC7 feature space to measure the ac- curacy of identifying ï¬ ne-grained ImageNet classes after training only on a set of coarse classes. We call this mea- sure, â induction accuracyâ . As a qualitative example, Fig- ure 5 shows nearest neighbors for a macaque (left) and a schnauzer (right) for feature embeddings trained on Ima- geNet but with different number of classes. All green- border images below the dotted line indicate instances of correct ï¬ ne-grain nearest neighbor retrieval for features that were never trained on that class. Note that the label set of 127 classes from the previous experiment contains 65 classes that are present in the origi- nal set of 1000 classes and the remainder are inner nodes of the WordNet tree. However, all these 127 classes (see sup- plementary materials) represent coarse semantic concepts. As discussed earlier, pre-training with these classes results in only a small drop in transfer performance (see Table 1). This suggests that performing ï¬ | 1608.08614#21 | 1608.08614#23 | 1608.08614 | [
"1507.06550"
] |
1608.08614#23 | What makes ImageNet good for transfer learning? | ne-grained recognition is only marginally helpful and does not appear to be critical for learning good transferable features. Quantitative results are shown in Figure 4. The results show that when 127 classes are used, ï¬ ne-grained recogni- tion k-NN performance is only about 15% lower compared to training directly for these ï¬ ne-grained classes (i.e. base- line accuracy). This is rather surprising and suggests that CNNs implicitly discover features capable of distinguish- ing between ï¬ ner classes while attempting to distinguish between relatively coarse classes. | 1608.08614#22 | 1608.08614#24 | 1608.08614 | [
"1507.06550"
] |
1608.08614#24 | What makes ImageNet good for transfer learning? | 5 mammal (17%) snake (13%) arthropod (12%) turtle (10%) tool (3%) covering (3%) fabric (2%) fungus (2%) game equipment (2%) stick (1%) mollusk (1%) boat (1%) home appliance (1%) container (8%) garment (8%) structure (7%) fruit (7%) bird (7%) Figure 6: Does the network learn to discriminate coarse seman- tic concepts by training only on ï¬ ner sub-classes? The degree to which the concept of coarse class is learnt was quantiï¬ ed by mea- suring the difference (in percentage points) between the accuracy of classifying the coarse class and the average accuracy of indi- vidually classifying all the sub-classes of this coarse class. Here, the top and bottom classes sorted by this metric are shown using the label set of size 127 with classes with at least 5 subclasses. We observe that classes whose subclasses are visually consistent (e.g. mammal) are better represented than these that are visually dissimilar (e.g. home appliance). # 5.4. Does training with ï¬ ne-grained classes induce features relevant for coarse recognition? Investigating whether the network learns features rel- evant for ï¬ ne-grained recognition by training on coarse classes raises the reverse question: does training with ï¬ ne- grained classes induce features relevant for coarse recog- nition? If this is indeed the case, then we would expect that when a CNN makes an error, it is more likely to con- fuse a sub-class (i.e. error in ï¬ ne-grained recognition) with other sub-classes of the same coarse class. This effect can be measured by computing the difference between the accu- racy of classifying the coarse class and the average accuracy of individually classifying all the sub-classes of this coarse class (please see supplementary materials for details). Figure 6 shows the results. | 1608.08614#23 | 1608.08614#25 | 1608.08614 | [
"1507.06550"
] |
1608.08614#25 | What makes ImageNet good for transfer learning? | We ï¬ nd that coarse seman- that contain tic classes such as mammal, fruit, bird, etc. visually similar sub-classes show the hypothesized effect, whereas classes such as tool and home appliance that con- tain visually dissimilar subclasses do not exhibit this effect. These results indicate that subclasses that share a common visual structure allow the CNN to learn features that are more generalizable. This might suggest a way to improve feature generalization by making class labels respect visual commonality rather than simply WordNet semantics. | 1608.08614#24 | 1608.08614#26 | 1608.08614 | [
"1507.06550"
] |
1608.08614#26 | What makes ImageNet good for transfer learning? | # 5.5. More Classes or More Examples Per Class? Results in previous sections show that it is possible to achieve good performance on transfer tasks using signiï¬ - cantly less pre-training data and fewer pre-training classes. However it is unclear what is more important â the number of classes or the number or examples per class. One ex- 6 Dataset Data size More examples/class More classes 500K 57.1 57.0 PASCAL 250K 54.8 52.5 SUN 125K 500K 250K 125K 42.2 50.6 42.3 49.8 50.6 49.7 45.7 46.7 Table 2: For a ï¬ | 1608.08614#25 | 1608.08614#27 | 1608.08614 | [
"1507.06550"
] |
1608.08614#27 | What makes ImageNet good for transfer learning? | xed budget of pre-training data, is it better to have more examples per class and fewer classes or vice-versa? The row â more examples/classâ was pretrained with subsets of Ima- geNet containing 500, 250 and 125 classes with 1000 examples each. The row â more classesâ was pretrained with 1000 classes, but 500, 250 and 125 examples each. Interestingly, the transfer performance on both PASCAL and SUN appears to be broadly similar under both scenarios. Pre-trained Dataset ImageNet Pascal removed ImageNet Places PASCAL 58.3 ± 0.3 57.8 ± 0.1 53.8 ± 0.1 Table 3: PASCAL-DET results after pre-training on entire Im- ageNet, PASCAL-removed-ImageNet and Places data sets. Re- moving PASCAL classes from ImageNet leads to an insigniï¬ cant reduction in performance. treme is to only have 1 class and all 1.2M images from this class and the other extreme is to have 1.2M classes and 1 image per class. It is clear that both ways of splitting the data will result in poor generalization, so the answer must lie somewhere in-between. To investigate this, we split the same amount of pre- training data in two ways: (1) more classes with fewer im- ages per class, and (2) fewer classes with more images per class. We use datasets of size 500K, 250K and 125K im- ages for this experiment. For 500K images, we considered two ways of constructing the training set â (1) 1000 classes with 500 images/class, and (2) 500 classes with 1000 im- ages/class. Similar splits were made for data budgets of 250K and 125K images. The 500, 250 and 125 classes for these experiments were drawn from a uniform distribution among the 1000 ImageNet classes. Similarly, the image subsets containing 500, 250 and 125 images were drawn from a uniform distribution among the images that belong to the class. | 1608.08614#26 | 1608.08614#28 | 1608.08614 | [
"1507.06550"
] |