doi: stringlengths 10–10
chunk-id: int64 0–936
chunk: stringlengths 401–2.02k
id: stringlengths 12–14
title: stringlengths 8–162
summary: stringlengths 228–1.92k
source: stringlengths 31–31
authors: stringlengths 7–6.97k
categories: stringlengths 5–107
comment: stringlengths 4–398
journal_ref: stringlengths 8–194
primary_category: stringlengths 5–17
published: stringlengths 8–8
updated: stringlengths 8–8
references: list
1705.08292
40
$\mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{SGD}}, x\rangle\right) \neq y \mid \neg\mathcal{E}\right] = 0$, while the error for AdaGrad is $\mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{ada}}, x\rangle\right) \neq y \mid \neg\mathcal{E}\right] = \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{ada}}, x\rangle\right) \neq y \mid (x,y) \in \mathcal{D}_{+}, \neg\mathcal{E}\right] \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[(x,y) \in \mathcal{D}_{+} \mid \neg\mathcal{E}\right] + \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{ada}}, x\rangle\right) \neq y \mid (x,y) \in \mathcal{D}_{-}, \neg\mathcal{E}\right] \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[(x,y) \in \mathcal{D}_{-} \mid \neg\mathcal{E}\right] \geq 0 \cdot p + (1-p)\left(1 - \tfrac{2}{n}\right)$. Otherwise, if there is a repeat, we have trivial bounds of 0 and 1 for the conditional error in each case: $0 \leq \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{SGD}}, x\rangle\right) \neq y \mid \mathcal{E}\right] \leq 1$, $0 \leq \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{ada}}, x\rangle\right) \neq y \mid \mathcal{E}\right] \leq 1$. Putting these together, we find that the unconditional error for SGD is bounded above by $\mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{SGD}}, x\rangle\right) \neq y\right]$
1705.08292#40
The Marginal Value of Adaptive Gradient Methods in Machine Learning
Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
http://arxiv.org/pdf/1705.08292
Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht
stat.ML, cs.LG
null
null
stat.ML
20170523
20180522
[ { "id": "1703.10622" }, { "id": "1702.03849" }, { "id": "1611.07004" } ]
1705.08292
41
$\mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{SGD}}, x\rangle\right) \neq y\right] = \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{SGD}}, x\rangle\right) \neq y \mid \neg\mathcal{E}\right] \mathbb{P}\left[\neg\mathcal{E}\right] + \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{SGD}}, x\rangle\right) \neq y \mid \mathcal{E}\right] \mathbb{P}\left[\mathcal{E}\right] = 0 \cdot \mathbb{P}\left[\neg\mathcal{E}\right] + \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{SGD}}, x\rangle\right) \neq y \mid \mathcal{E}\right] \mathbb{P}\left[\mathcal{E}\right] \leq 0 \cdot \mathbb{P}\left[\neg\mathcal{E}\right] + 1 \cdot \mathbb{P}\left[\mathcal{E}\right] \leq \frac{n^2}{6N}$, while the unconditional error for AdaGrad is bounded below by $\mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{ada}}, x\rangle\right) \neq y\right] = \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{ada}}, x\rangle\right) \neq y \mid \neg\mathcal{E}\right] \mathbb{P}\left[\neg\mathcal{E}\right] + \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{ada}}, x\rangle\right) \neq y \mid \mathcal{E}\right] \mathbb{P}\left[\mathcal{E}\right] \geq (1-p)\left(1 - \tfrac{2}{n}\right) \mathbb{P}\left[\neg\mathcal{E}\right] + \mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{ada}}, x\rangle\right) \neq y \mid \mathcal{E}\right] \mathbb{P}\left[\mathcal{E}\right] \geq (1-p)\left(1 - \tfrac{2}{n}\right) \mathbb{P}\left[\neg\mathcal{E}\right] + 0 \cdot \mathbb{P}\left[\mathcal{E}\right]$. Let $\varepsilon > 0$ be a tolerance. For the error of SGD to be at most $\varepsilon$, it suffices to take $N \geq \frac{n^2}{6\varepsilon}$, in which case we have
1705.08292#41
The Marginal Value of Adaptive Gradient Methods in Machine Learning
Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
http://arxiv.org/pdf/1705.08292
Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht
stat.ML, cs.LG
null
null
stat.ML
20170523
20180522
[ { "id": "1703.10622" }, { "id": "1702.03849" }, { "id": "1611.07004" } ]
1705.08292
42
Let $\varepsilon > 0$ be a tolerance. For the error of SGD to be at most $\varepsilon$, it suffices to take $N \geq \frac{n^2}{6\varepsilon}$, in which case we have $\mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{SGD}}, x\rangle\right) \neq y\right] \leq \frac{n^2}{6N} \leq \varepsilon$. For the error of AdaGrad to be at least $(1-p)(1-\varepsilon)$, it suffices to take $N \geq \frac{n^2}{3\varepsilon}$, assuming $n \geq \frac{4}{\varepsilon}$, in which case we have $\mathbb{P}_{(x,y)\sim\mathcal{D}}\left[\operatorname{sign}\left(\langle w^{\mathrm{ada}}, x\rangle\right) \neq y\right] \geq (1-p)\left(1 - \tfrac{2}{n}\right)\left(1 - \tfrac{n^2}{6N}\right) \geq (1-p)\left(1 - \tfrac{\varepsilon}{2}\right)\left(1 - \tfrac{\varepsilon}{2}\right) = (1-p)\left(1 - \varepsilon + \tfrac{\varepsilon^2}{4}\right) \geq (1-p)(1-\varepsilon)$. Both of these conditions will be satisfied by taking $N \geq \max\left\{\frac{n^2}{6\varepsilon}, \frac{n^2}{3\varepsilon}\right\}$. Since the choice of $\varepsilon$ was arbitrary, taking $\varepsilon \to 0$ drives the SGD error to 0 and the AdaGrad error to $1-p$, matching the original result in the non-i.i.d. setting. # D Step sizes used for parameter tuning # Cifar-10 • SGD: {2, 1, 0.5 (best), 0.25, 0.05, 0.01}
1705.08292#42
The Marginal Value of Adaptive Gradient Methods in Machine Learning
Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
http://arxiv.org/pdf/1705.08292
Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht
stat.ML, cs.LG
null
null
stat.ML
20170523
20180522
[ { "id": "1703.10622" }, { "id": "1702.03849" }, { "id": "1611.07004" } ]
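The derivation spread over the three chunks above (1705.08292#40–42) repeatedly applies the same law-of-total-probability bookkeeping over the repeat event E: SGD's error is upper-bounded by P[E], while AdaGrad's error is lower-bounded by its conditional error times P[¬E]. The minimal Python sketch below just reproduces that bookkeeping numerically; the function names and example numbers are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the total-probability bookkeeping used in chunks 1705.08292#40-42.
# Function names and example numbers are illustrative assumptions only.

def sgd_error_upper_bound(p_event: float) -> float:
    """Error is 0 given no repeat and at most 1 given a repeat,
    so the unconditional SGD error is at most P[E]."""
    return 0.0 * (1.0 - p_event) + 1.0 * p_event


def adagrad_error_lower_bound(cond_error_no_repeat: float, p_event: float) -> float:
    """Error is at least `cond_error_no_repeat` given no repeat
    and at least 0 given a repeat."""
    return cond_error_no_repeat * (1.0 - p_event) + 0.0 * p_event


if __name__ == "__main__":
    p_event = 1e-3    # assumed small probability of a repeated sample, P[E]
    cond_err = 0.45   # assumed conditional AdaGrad error given no repeat
    print("SGD error     <=", sgd_error_upper_bound(p_event))
    print("AdaGrad error >=", adagrad_error_lower_bound(cond_err, p_event))
```

Driving `p_event` toward 0 (by enlarging N, as in the chunks above) pushes the two bounds toward 0 and toward the conditional error respectively, which is the shape of the final result.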
1705.08292
43
# Cifar-10 • SGD: {2, 1, 0.5 (best), 0.25, 0.05, 0.01} • HB: {2, 1, 0.5 (best), 0.25, 0.05, 0.01} • AdaGrad: {0.1, 0.05, 0.01 (best, def.), 0.0075, 0.005} • RMSProp: {0.005, 0.001, 0.0005, 0.0003 (best), 0.0001} • Adam: {0.005, 0.001 (default), 0.0005, 0.0003 (best), 0.0001, 0.00005} The default Torch step sizes for SGD (0.001) , HB (0.001), and RMSProp (0.01) were outside the range we tested. # War & Peace • SGD: {2, 1 (best), 0.5, 0.25, 0.125} • HB: {2, 1 (best), 0.5, 0.25, 0.125} • AdaGrad: {0.4, 0.2, 0.1, 0.05 (best), 0.025}
1705.08292#43
The Marginal Value of Adaptive Gradient Methods in Machine Learning
Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
http://arxiv.org/pdf/1705.08292
Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht
stat.ML, cs.LG
null
null
stat.ML
20170523
20180522
[ { "id": "1703.10622" }, { "id": "1702.03849" }, { "id": "1611.07004" } ]
1705.08292
44
• AdaGrad: {0.4, 0.2, 0.1, 0.05 (best), 0.025} • RMSProp: {0.02, 0.01, 0.005, 0.0025, 0.00125, 0.000625, 0.0005 (best), 0.0001} • Adam: {0.005, 0.0025, 0.00125, 0.000625 (best), 0.0003125, 0.00015625} Under the fixed-decay scheme, we selected learning rate decay frequencies from the set {10, 20, 40, 80, 120, 160, ∞} and learning rate decay amounts from the set {0.1, 0.5, 0.8, 0.9}. # Discriminative Parsing • SGD: {1.0, 0.5, 0.2, 0.1 (best), 0.05, 0.02, 0.01} • HB: {1.0, 0.5, 0.2, 0.1, 0.05 (best), 0.02, 0.01, 0.005, 0.002}
1705.08292#44
The Marginal Value of Adaptive Gradient Methods in Machine Learning
Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
http://arxiv.org/pdf/1705.08292
Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht
stat.ML, cs.LG
null
null
stat.ML
20170523
20180522
[ { "id": "1703.10622" }, { "id": "1702.03849" }, { "id": "1611.07004" } ]
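Chunk 1705.08292#44 above mentions a fixed-decay scheme with decay frequencies {10, 20, 40, 80, 120, 160, ∞} and decay amounts {0.1, 0.5, 0.8, 0.9}. A common reading of such a scheme is a step decay, lr = base_lr · amount^(epoch // frequency), with ∞ meaning no decay; the sketch below assumes that interpretation (the chunk itself does not spell out the formula), and the base step size of 0.5 is just the CIFAR-10 SGD value quoted earlier.

```python
import itertools
import math


def step_decay_lr(base_lr: float, epoch: int, frequency: float, amount: float) -> float:
    """Assumed form of the fixed-decay schedule: multiply the base step size
    by `amount` once every `frequency` epochs; frequency = inf means no decay."""
    if math.isinf(frequency):
        return base_lr
    return base_lr * amount ** (epoch // int(frequency))


# The decay grid quoted in the chunk above.
frequencies = [10, 20, 40, 80, 120, 160, math.inf]
amounts = [0.1, 0.5, 0.8, 0.9]

for freq, amt in itertools.product(frequencies, amounts):
    # In a real sweep, each (frequency, amount) pair would be trained and
    # selected on validation accuracy; here we only evaluate the schedule.
    lr = step_decay_lr(base_lr=0.5, epoch=100, frequency=freq, amount=amt)
    print(f"freq={freq}, amount={amt}: lr at epoch 100 = {lr:.6f}")
```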
1705.08292
45
• AdaGrad: {1.0, 0.5, 0.2, 0.1, 0.05, 0.02 (best), 0.01, 0.005, 0.002, 0.001, 0.0005, 0.0002, 0.0001} • RMSProp: Not implemented in DyNet at the time of writing. • Adam: {0.01, 0.005, 0.002 (best), 0.001 (default), 0.0005, 0.0002, 0.0001} # Generative Parsing • SGD: {1.0, 0.5 (best), 0.25, 0.1, 0.05, 0.025, 0.01} • HB: {0.25, 0.1, 0.05, 0.02, 0.01 (best), 0.005, 0.002, 0.001} • AdaGrad: {5.0, 2.5, 1.0, 0.5, 0.25 (best), 0.1, 0.05, 0.02, 0.01} • RMSProp: {0.05, 0.02, 0.01, 0.005, 0.002 (best), 0.001, 0.0005, 0.0002, 0.0001}
1705.08292#45
The Marginal Value of Adaptive Gradient Methods in Machine Learning
Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
http://arxiv.org/pdf/1705.08292
Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht
stat.ML, cs.LG
null
null
stat.ML
20170523
20180522
[ { "id": "1703.10622" }, { "id": "1702.03849" }, { "id": "1611.07004" } ]
1705.08045
0
# Learning multiple visual domains with residual adapters Sylvestre-Alvise Rebuffi¹ Hakan Bilen¹,² Andrea Vedaldi¹ ¹ Visual Geometry Group, University of Oxford {srebuffi,hbilen,vedaldi}@robots.ox.ac.uk ² School of Informatics, University of Edinburgh # Abstract There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to perform well uniformly. # 1 Introduction
1705.08045#0
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
1
# Sinno Jialin Pan Nanyang Technological University, Singapore [email protected] # Abstract How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. By controlling layer-wise errors properly, one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods. Codes of our work are released at: https://github.com/csyhhu/L-OBS. # 1 Introduction
1705.07565#1
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
1
# 1 Introduction While research in machine learning is often directed at improving the performance of algorithms on specific tasks, there is a growing interest in developing methods that can tackle a large variety of different problems within a single model. In the case of perception, there are two distinct aspects of this challenge. The first one is to extract from a given image diverse information, such as image-level labels, semantic segments, object bounding boxes, object contours, occluding boundaries, vanishing points, etc. The second aspect is to model simultaneously many different visual domains, such as Internet images, characters, glyph, animal breeds, sketches, galaxies, planktons, etc (fig. 1). In this work we explore the second challenge and look at how deep learning techniques can be used to learn universal representations [5], i.e. feature extractors that can work well in several different image domains. We refer to this problem as multiple-domain learning to distinguish it from the more generic multiple-task learning.
1705.08045#1
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
2
# 1 Introduction Intuitively, deep neural networks [1] can approximate predictive functions of arbitrary complexity well when they have a huge number of parameters, i.e., many layers and neurons. In practice, the size of deep neural networks has increased tremendously, from LeNet-5 with less than 1M parameters [2] to VGG-16 with 133M parameters [3]. Such a large number of parameters not only make deep models memory intensive and computationally expensive, but also urge researchers to dig into the redundancy of deep neural networks. On one hand, in neuroscience, recent studies point out that there are significant numbers of redundant neurons in the human brain, and memory may be related to the vanishing of specific synapses [4]. On the other hand, in machine learning, both theoretical analysis and empirical experiments have shown evidence of redundancy in several deep models [5, 6]. Therefore, it is possible to compress deep neural networks with little or no loss in prediction by pruning parameters with carefully designed criteria.
1705.07565#2
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
2
Multiple-domain learning contains in turn two sub-challenges. The first one is to develop algorithms that can learn well from many domains. If domains are learned sequentially, but this is not a requirement, this is reminiscent of domain adaptation. However, there are two important differences. First, in standard domain adaptation (e.g. [9]) the content of the images (e.g. “telephone”) remains the same, and it is only the style of the images that changes (e.g. real life vs gallery image). Instead in our case a domain shift changes both style and content. Secondly, the difficulty is not just to adapt the model from one domain to another, but to do so while making sure that it still performs well on the original domain, i.e. to learn without forgetting [21]. The second challenge of multiple-domain learning, and our main concern in this paper, is to construct models that can represent compactly all the domains. Intuitively, even though images in different domains may look quite different (e.g. glyph vs. cats), low and mid-level visual primitives may still
1705.08045#2
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
3
However, finding an optimal pruning solution is NP-hard because the search space for pruning is exponential in terms of parameter size. Recent work mainly focuses on developing efficient algorithms to obtain a near-optimal pruning solution [7, 8, 9, 10, 11]. A common idea behind most existing approaches is to select parameters for pruning based on certain criteria, such as increase in training error, magnitude of the parameter values, etc. As most of the existing pruning criteria are designed heuristically, there is no guarantee that the prediction performance of a deep neural network can be preserved after pruning. Therefore, a time-consuming retraining process is usually needed to boost the performance of the trimmed neural network. Instead of consuming efforts on a whole deep network, a layer-wise pruning method, Net-Trim, was proposed to learn sparse parameters by minimizing the reconstructed error for each individual layer [6]. A theoretical analysis is provided that the overall performance drop of the deep network is bounded by the sum of reconstructed errors for each layer. In this way, the pruned deep network has a theoretical guarantee on its error. However, as Net-Trim adopts the ℓ1-norm to induce sparsity for pruning, it fails to obtain a high compression ratio compared with other methods [9, 11].
1705.07565#3
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
3
Figure 1: Visual Decathlon. We explore deep architectures that can learn simultaneously different tasks from very different visual domains. We experiment with ten representative ones: (a) Aircraft, (b) CIFAR-100, (c) Daimler Pedestrians, (d) Describable Textures, (e) German Traffic Signs, (f) ILSVRC (ImageNet) 2012, (g) VGG-Flowers, (h) OmniGlot, (i) SVHN, (j) UCF101 Dynamic Images. be largely shareable. Sharing knowledge between domains should allow us to learn compact multivalent representations. Provided that sufficient synergies between domains exist, multivalent representations may even work better than models trained individually on each domain (for a given amount of training data). The primary contribution of this paper (section 3) is to introduce a design for multivalent neural network architectures for multiple-domain learning (section 3, fig. 2). The key idea is to reconfigure a deep neural network on the fly to work on different domains as needed. Our construction is based on recent learning-to-learn methods that showed how the parameters of a deep network can be predicted from another
1705.08045#3
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
4
In this paper, we propose a new layer-wise pruning method for deep neural networks, aiming to achieve the following three goals: 1) For each layer, parameters can be highly compressed after pruning, while the reconstructed error is small. 2) There is a theoretical guarantee on the overall prediction performance of the pruned deep neural network in terms of reconstructed errors for each layer. 3) After the deep network is pruned, only a light retraining process is required to resume its original prediction performance.
1705.07565#4
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
4
to work on different domains as needed. Our construction is based on recent learning-to-learn methods that showed how the parameters of a deep network can be predicted from another [2, 16]. We show that these formulations are equivalent to packing the adaptation parameters in convolutional layers added to the network (section 3). The layers in the resulting parametric network are either domain-agnostic, hence shared between domains, or domain-specific, hence parametric. The domain-specific layers are changed based on the ground-truth domain of the input image, or based on an estimate of the latter obtained from an auxiliary network. In the latter configuration, our architecture is analogous to the learnet of [2]. Based on such general observations, we introduce in particular a residual adapter module and use it to parameterize the standard residual network architecture of [13]. The adapters contain a small fraction of the model parameters (less than 10%), enabling a high degree of parameter sharing between domains. A similar architecture was concurrently proposed in [31], which also results in the possibility of learning new domains sequentially without forgetting. However, we also show a specific advantage of the residual adapter modules: the ability to
1705.08045#4
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
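Chunk 1705.08045#4 above describes residual adapter modules that hold a small, domain-specific fraction of the parameters inside an otherwise shared network and are switched according to the input's domain. The PyTorch-style sketch below is one plausible instantiation of that idea (a per-domain 1×1 convolution and batch normalization on a residual branch); it is an illustration under those assumptions, not the authors' exact module.

```python
import torch
import torch.nn as nn


class ResidualAdapter(nn.Module):
    """Illustrative domain-specific adapter on a residual branch.

    The host network's larger convolutions stay shared across domains; only
    these small 1x1 adapters and their BN layers are domain-specific.
    """

    def __init__(self, channels: int, num_domains: int):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False)
            for _ in range(num_domains)
        )
        self.bns = nn.ModuleList(nn.BatchNorm2d(channels) for _ in range(num_domains))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # Identity skip keeps the shared representation; the adapter adds a
        # small correction specific to the selected domain.
        return x + self.bns[domain](self.convs[domain](x))


# Example: 64-channel features, ten domains as in the Visual Decathlon.
adapter = ResidualAdapter(channels=64, num_domains=10)
features = torch.randn(2, 64, 32, 32)
out = adapter(features, domain=3)
```

Because each adapter is a 1×1 convolution, its parameter count is a small fraction of the surrounding 3×3 layers, which is the kind of sharing ratio the chunk above refers to.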
1705.07565
5
To achieve our first goal, we borrow an idea from some classic pruning approaches for shallow neural networks, such as optimal brain damage (OBD) [12] and optimal brain surgeon (OBS) [13]. These classic methods approximate a change in the error function via functional Taylor Series, and identify unimportant weights based on second order derivatives. Though these approaches have proven to be effective for shallow neural networks, it remains challenging to extend them for deep neural networks because of the high computational cost on computing second order derivatives, i.e., the inverse of the Hessian matrix over all the parameters. In this work, as we restrict the computation on second order derivatives w.r.t. the parameters of each individual layer only, i.e., the Hessian matrix is only over parameters for a specific layer, the computation becomes tractable. Moreover, we utilize characteristics of back-propagation for fully-connected layers in well-trained deep networks to further reduce computational complexity of the inverse operation of the Hessian matrix.
1705.07565#5
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
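Chunk 1705.07565#5 above explains that restricting the second-order analysis to one layer at a time keeps the Hessian tractable. One concrete way to see why (stated here as an illustrative assumption about the form of the layer-wise error, since the chunk does not give the exact definition) is a quadratic reconstruction error on a fully-connected layer's pre-activations, whose Hessian involves only that layer's inputs:

```latex
% Illustrative layer-wise error for a fully-connected layer l with inputs
% X^l and pruned weights \hat{W}^l; an assumed form, for illustration only.
\begin{aligned}
E^{l}\bigl(\widehat{\mathbf{W}}^{l}\bigr)
  &= \frac{1}{2n}\,\bigl\lVert \mathbf{W}^{l\top}\mathbf{X}^{l}
     - \widehat{\mathbf{W}}^{l\top}\mathbf{X}^{l} \bigr\rVert_{F}^{2}, \\
\mathbf{H}^{l}
  &= \frac{1}{n}\,\mathbf{X}^{l}\mathbf{X}^{l\top},
\end{aligned}
```

where H^l is the Hessian with respect to any single column of the layer's weight matrix, so the matrix to be inverted is only as large as one layer's input dimension rather than the full parameter count m discussed in the OBS review below.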
1705.08045
5
which also results in the possibility of learning new domains sequentially without forgetting. However, we also show a specific advantage of the residual adapter modules: the ability to modulate adaptation based on the size of the target dataset. Our proposed architectures are thoroughly evaluated empirically (section 5). To this end, our second contribution is to introduce the visual decathlon challenge (fig. 1 and section 4), a new benchmark for multiple-domain learning in image recognition. The challenge consists in performing well simultaneously on ten very different visual classification problems, from ImageNet and SVHN to action classification and describable texture recognition. The evaluation metric, also inspired by the decathlon discipline, rewards models that perform better than strong baselines on all the domains simultaneously. A summary of our finding is contained in section 6. 2 Related Work Our work touches on multi-task learning, learning without forgetting, domain adaptation, and other areas. However, our multiple-domain setup differs in ways that make most of the existing approaches not directly applicable to our problem. Multi-task learning (MTL) looks at developing models that can address different tasks, such as detecting objects and segmenting images, while sharing information and computation among
1705.08045#5
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
6
To achieve our second goal, based on the theoretical results in [6], we provide a proof on the bound of performance drop before and after pruning in terms of the reconstructed errors for each layer. With such a layer-wise pruning framework using second-order derivatives for trimming parameters for each layer, we empirically show that after significantly pruning parameters, there is only a little drop of prediction performance compared with that before pruning. Therefore, only a light retraining process is needed to resume the performance, which achieves our third goal. The contributions of this paper are summarized as follows. 1) We propose a new layer-wise pruning method for deep neural networks, which is able to significantly trim networks and preserve the prediction performance of networks after pruning with a theoretical guarantee. In addition, with the proposed method, a time-consuming retraining process for re-boosting the performance of the pruned network is waived. 2) We conduct extensive experiments to verify the effectiveness of our proposed method compared with several state-of-the-art approaches. # 2 Related Works and Preliminary
1705.07565#6
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
6
learning (MTL) looks at developing models that can address different tasks, such as detecting objects and segmenting images, while sharing information and computation among them. Earlier examples of this paradigm have focused on kernel methods [10, 1] and deep neural network (DNN) models [6]. In DNNs, a standard approach [6] is to share earlier layers of the network, training the tasks jointly by means of back-propagation. Caruana [6] shows that sharing network parameters between tasks is beneficial also as a form of regularization, putting additional constraints on the learned representation and thus improving it. MTL in DNNs has been applied to various problems ranging from natural language processing [8, 22], speech recognition [14] to computer vision [41, 42, 4]. Collobert et al. [8] show that semi-supervised learning and multi-task learning can be combined in a DNN model to solve several language processing prediction tasks such as part-of-speech tags, chunks, named entity tags and semantic
1705.08045#6
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
7
# 2 Related Works and Preliminary Pruning methods have been widely used for model compression in early neural networks [7] and modern deep neural networks [6, 8, 9, 10, 11]. In the past, with relatively small amounts of training data, pruning was crucial to avoid overfitting. Classical methods include OBD and OBS. These methods aim to prune parameters with the least increase of error approximated by second order derivatives. However, computation of the Hessian inverse over all the parameters is expensive. In OBD, the Hessian matrix is restricted to be a diagonal matrix to make it computationally tractable. However, this approach implicitly assumes parameters have no interactions, which may hurt the pruning performance. Different from OBD, OBS makes use of the full Hessian matrix for pruning. It obtains better performance while being much more computationally expensive, even when using the Woodbury matrix identity [14], which is an iterative method to compute the Hessian inverse. For example, using OBS on VGG-16 naturally requires computing the inverse of a Hessian matrix of size 133M × 133M. Regarding pruning for modern deep models, Han et al. [9] proposed to delete unimportant parameters based on the magnitude of their absolute values, and retrain the remaining ones to recover the original prediction performance. This method achieves a considerable compression ratio in practice. However,
1705.07565#7
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
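Chunk 1705.07565#7 above contrasts second-order criteria (OBD/OBS) with the magnitude-based pruning of Han et al. [9]. For concreteness, here is a minimal sketch of that magnitude-based baseline, thresholding the smallest-magnitude weights and keeping a binary mask for the subsequent retraining; the function name and the sparsity level are illustrative, not taken from the paper.

```python
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float):
    """Zero out the `sparsity` fraction of weights with smallest |value|.

    Returns the pruned weights and a boolean mask; in magnitude-based
    pipelines the mask is kept fixed while the surviving weights are retrained.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask


w = np.random.randn(256, 128)
w_pruned, mask = magnitude_prune(w, sparsity=0.9)
print("fraction of weights kept:", mask.mean())
```

The chunk's criticism applies directly to this sketch: small-magnitude weights are not necessarily unimportant, which is why the second-order criteria reviewed next weigh each weight by curvature instead.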
1705.08045
7
roles. Huang et al. [14] propose a shared multilingual DNN which shares hidden layers across many languages. Liu et al. [22] combine multiple-domain classification and information retrieval for ranking web search with a DNN. Multi-task DNN models are also reported to achieve performance gains in computer vision problems such as object tracking [41], facial-landmark detection [42], object and part detection [4], and a collection of low-level and high-level vision tasks [18]. The main focus of these works is learning a diverse set of tasks in the same visual domain. In contrast, our paper focuses on learning a representation from a diverse set of domains. Our investigation is related to the recent paper of [5], which studied the “size” of the union of different visual domains measured in terms of the capacity of the model required to learn it. The authors propose to absorb different domains in a single neural network by tuning certain parameters in batch and instance normalization layers throughout the architecture; we show that our residual adapter modules, which include the latter as a special case, lead to far superior results.
1705.08045#7
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
8
as pointed out by pioneering research work [12, 13], parameters with low magnitude of their absolute values can be necessary for low error. Therefore, magnitude-based approaches may eliminate wrong parameters, resulting in a big prediction performance drop right after pruning, and poor robustness before retraining [15]. Though some variants have tried to find better magnitude-based criteria [16, 17], the significant drop of prediction performance after pruning still remains. To avoid pruning wrong parameters, Guo et al. [11] introduced a mask matrix to indicate the state of network connections for dynamically pruning after each gradient descent step. Jin et al. [18] proposed an iterative hard thresholding approach to re-activate the pruned parameters after each pruning phase. Besides Net-Trim, which is a layer-wise pruning method discussed in the previous section, there is some other work proposed to induce sparsity or low-rank approximation on certain layers for pruning [19, 20]. However, as the ℓ0-norm or the ℓ1-norm sparsity-inducing regularization term increases the difficulty of optimization, the pruned deep neural networks using these methods either obtain a much smaller compression ratio [6] compared with direct pruning methods or require retraining of the whole network to prevent accumulation of errors [10].
1705.07565#8
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
8
Life-long learning. A particularly important aspect of MTL is the ability to learn multiple tasks sequentially, as in Never Ending Learning [25] and Life-long Learning [38]. Sequential learning typically suffers in fact from forgetting the older tasks, a phenomenon aptly referred to as “catastrophic forgetting” in [11]. Recent work in life-long learning tries to address forgetting in two ways. The first one [37, 33] is to freeze the network parameters for the old tasks and learn a new task by adding extra parameters. The second one aims at preserving knowledge of the old tasks by retaining the response of the original network on the new task [21, 30], or by keeping the network parameters of the new task close to the original ones [17]. Our method can be considered as a hybrid of these two approaches, as it can be used to retain the knowledge of previous tasks exactly, while adding a small number of extra parameters for the new tasks. Transfer learning. Sometimes one is interested in maximizing the performance of a model on a target domain. In this case, sequential learning can be used as a form of initialization [29]. This is very common in visual recognition, where most DNNs are initialized on the ImageNet dataset and then fine-tuned on a target domain and task. Note, however, that this typically results in forgetting the original domain, a fact that we confirm in the experiments.
1705.08045#8
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
9
Optimal Brain Surgeon As our proposed layer-wise pruning method is an extension of OBS to deep neural networks, we briefly review the basics of OBS here. Consider a network in terms of parameters w trained to a local minimum in error. The functional Taylor series of the error w.r.t. w is: $\delta E = \left(\frac{\partial E}{\partial \mathbf{w}}\right)^{\top} \delta \mathbf{w} + \frac{1}{2} \delta \mathbf{w}^{\top} \mathbf{H} \delta \mathbf{w} + O\left(\|\delta \mathbf{w}\|^{3}\right)$, where $\delta$ denotes a perturbation of the corresponding variable, $\mathbf{H} = \partial^{2} E / \partial \mathbf{w}^{2} \in \mathbb{R}^{m \times m}$ is the Hessian matrix, where $m$ is the number of parameters, and $O\left(\|\delta \mathbf{w}\|^{3}\right)$ is the third and all higher order terms. For a network trained to a local minimum in error, the first term vanishes, and the term $O\left(\|\delta \mathbf{w}\|^{3}\right)$ can be ignored. In OBS, the goal is to set one of the parameters to zero, denoted by $w_{q}$ (a scalar), to minimize $\delta E$ in each pruning iteration. The resulting optimization problem is written as follows: $\min_{q} \min_{\delta \mathbf{w}} \frac{1}{2} \delta \mathbf{w}^{\top} \mathbf{H} \delta \mathbf{w}, \quad \text{s.t.} \quad \mathbf{e}_{q}^{\top} \delta \mathbf{w} + w_{q} = 0, \qquad (1)$
1705.07565#9
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
9
Domain adaptation. When domains are learned sequentially, our work can be related to domain adaptation. There is a vast literature in domain adaptation, including recent contributions in deep learning such as [12, 39] based on the idea of minimizing domain discrepancy. Long et al. [23] propose a deep network architecture for domain adaptation that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. There are two important differences with our work. First, in these cases different domains contain the same objects and it is only the visual style that changes (e.g. webcam vs. DSLR), whereas in our case the objects themselves change. Secondly, domain adaptation is a form of transfer learning and, as the latter, is concerned with maximizing the performance on the target domain regardless of potential forgetting. # 3 Method Our primary goal is to develop neural network architectures that can work well in a multiple-domain setting. Modern neural networks such as residual networks (ResNet [13]) are known to have very high capacity, and are therefore good candidates to learn from diverse data sources. Furthermore, even when domains look fairly different, they may still share a significant amount of low- and mid-level visual patterns. Nevertheless, we show in the experiments (section 5) that learning a ResNet (or a similar model) directly from multiple domains may still not perform well.
1705.08045#9
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
10
$\min_{q}\ \frac{1}{2}\,\delta\mathbf{w}^{\top}\mathbf{H}\,\delta\mathbf{w}, \;\; \text{s.t.} \;\; \mathbf{e}_{q}^{\top}\delta\mathbf{w} + w_{q} = 0, \quad (1)$ where $\mathbf{e}_{q}$ is the unit selecting vector whose q-th element is 1 and otherwise 0. As shown in [21], the optimization problem (1) can be solved by the Lagrange multipliers method. Note that a computational bottleneck of OBS is to calculate and store the non-diagonal Hessian matrix and its inverse, which makes it impractical for pruning deep models, which usually have a huge number of parameters. # 3 Layer-wise Optimal Brain Surgeon # 3.1 Problem Statement
1705.07565#10
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
10
In order to address this problem, we consider a compact parametric family of neural networks $\phi_{\alpha} : \mathcal{X} \rightarrow \mathcal{V}$ indexed by parameters $\alpha$. Concretely, $\mathcal{X} \subset \mathbb{R}^{H\times W\times 3}$ can be a space of RGB images and $\mathcal{V} = \mathbb{R}^{H_{v}\times W_{v}\times C_{v}}$ a space of feature tensors. $\phi_{\alpha}$ can then be obtained by taking all but the last classification layer of a standard ResNet model. The parametric feature extractor $\phi_{\alpha}$ is then used to construct predictors for each domain d as $\Phi_{d} = \psi_{d} \circ \phi_{\alpha_{d}}$, where $\alpha_{d}$ are domain-specific parameters and $\psi_{d}(v) = \mathrm{softmax}(W_{d}v)$ is a domain-specific linear classifier $\mathcal{V} \rightarrow \mathcal{Y}_{d}$ mapping features to image labels. If $\alpha$ comprises all the parameters of the feature extractor $\phi_{\alpha}$, this approach reduces to learning independent models for each domain. On the contrary, our goal is to maximize parameter sharing, which we do below by introducing certain network parametrizations. [Figure 2 (diagram): a residual module with domain-agnostic filter banks $(w_{1}, w_{2})$, batch-normalization layers and domain-specific adapter parameters $(\alpha_{1}, \alpha_{2})$; the caption follows.]
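A minimal sketch of this decomposition (a shared feature extractor $\phi_{\alpha}$ with one linear head $\psi_{d}$ per domain) in PyTorch; the class name, the stand-in extractor and the domain names are illustrative assumptions, not part of the paper.

```python
import torch
import torch.nn as nn

class MultiDomainNet(nn.Module):
    """Shared feature extractor phi_alpha with one linear head psi_d per domain."""
    def __init__(self, feature_extractor: nn.Module, feat_dim: int, num_classes_per_domain: dict):
        super().__init__()
        self.features = feature_extractor                     # domain-agnostic phi_alpha
        self.heads = nn.ModuleDict({                          # domain-specific psi_d
            name: nn.Linear(feat_dim, c) for name, c in num_classes_per_domain.items()
        })

    def forward(self, x: torch.Tensor, domain: str) -> torch.Tensor:
        v = self.features(x)                                  # shared representation
        return self.heads[domain](v)                          # softmax is applied inside the loss

# Toy usage with a stand-in extractor (in practice this would be a ResNet trunk).
extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
model = MultiDomainNet(extractor, feat_dim=256,
                       num_classes_per_domain={"imagenet": 1000, "svhn": 10})
logits = model(torch.randn(2, 3, 64, 64), domain="svhn")      # shape (2, 10)
```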
1705.08045#10
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
11
Given a training set of n instances, $\{(\mathbf{x}_{j}, y_{j})\}_{j=1}^{n}$, and a well-trained deep neural network of L layers (excluding the input layer)¹. Denote the input and the output of the whole deep neural network by $\mathbf{X} = [\mathbf{x}_{1}, \ldots, \mathbf{x}_{n}] \in \mathbb{R}^{d\times n}$ and $\mathbf{Y} \in \mathbb{R}^{m_{L}\times n}$, respectively. For a layer l, we denote the input and output of the layer by $\mathbf{Y}^{l-1} = [\mathbf{y}_{1}^{l-1}, \ldots, \mathbf{y}_{n}^{l-1}] \in \mathbb{R}^{m_{l-1}\times n}$ and $\mathbf{Y}^{l} = [\mathbf{y}_{1}^{l}, \ldots, \mathbf{y}_{n}^{l}] \in \mathbb{R}^{m_{l}\times n}$, respectively, where $\mathbf{y}_{j}^{l}$ can be considered as a representation of $\mathbf{x}_{j}$ in layer l, and $\mathbf{Y}^{0} = \mathbf{X}$, $\mathbf{Y}^{L} = \mathbf{Y}$, and $m_{0} = d$. Using one forward-pass step, we have $\mathbf{Y}^{l} = \sigma(\mathbf{Z}^{l})$, where $\mathbf{Z}^{l} = \mathbf{W}_{l}^{\top}\mathbf{Y}^{l-1}$ with $\mathbf{W}_{l} \in \mathbb{R}^{m_{l-1}\times m_{l}}$ being the matrix of parameters for layer l, and $\sigma(\cdot)$ is the activation function. For convenience in presentation and proof, we define the activation function $\sigma(\cdot)$ as the rectified linear unit (ReLU) [22]. We further denote by $\mathbf{\Theta}_{l} \in \mathbb{R}^{m_{l-1}m_{l}\times 1}$ the vectorization of $\mathbf{W}_{l}$. For a well-trained neural network, $\mathbf{Y}^{l}$, $\mathbf{Z}^{l}$ and $\mathbf{\Theta}_{l}$ are all fixed matrices and contain most information of the neural
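The notation can be made concrete with a short NumPy forward pass that records $\mathbf{Z}^{l} = \mathbf{W}_{l}^{\top}\mathbf{Y}^{l-1}$ and $\mathbf{Y}^{l} = \sigma(\mathbf{Z}^{l})$ for every layer; the layer sizes and function names below are illustrative assumptions.

```python
import numpy as np

def forward_collect(X, weights):
    """Run a ReLU feed-forward net column-wise (samples are columns) and
    return the per-layer outputs Y^l and pre-activations Z^l."""
    relu = lambda z: np.maximum(z, 0.0)
    Y = X                                  # Y^0 = X, shape (m_0, n)
    Ys, Zs = [Y], []
    for W in weights:                      # W_l has shape (m_{l-1}, m_l)
        Z = W.T @ Y                        # Z^l = W_l^T Y^{l-1}
        Y = relu(Z)                        # Y^l = sigma(Z^l)
        Zs.append(Z)
        Ys.append(Y)
    return Ys, Zs

# Tiny example: d = 5 inputs, two layers with m_1 = 4 and m_2 = 3 units, n = 8 samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
weights = [rng.normal(size=(5, 4)), rng.normal(size=(4, 3))]
Ys, Zs = forward_collect(X, weights)
print([Y.shape for Y in Ys])               # [(5, 8), (4, 8), (3, 8)]
```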
1705.07565#11
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
11
Figure 2: Residual adapter modules. The figure shows a standard residual module with the inclusion of adapter modules (in blue). The filter coefficients $(w_{1}, w_{2})$ are domain-agnostic and contain the vast majority of the model parameters; $(\alpha_{1}, \alpha_{2})$ contain instead a small number of domain-specific parameters. # 3.1 Learning to learn and filter prediction The problem of adapting a neural network dynamically to variations of the input data is similar to the one found in recent approaches to learning to learn. A few authors [34, 16, 2], in particular, have proposed to learn neural networks that predict, in a data-dependent manner, the parameters of another. Formally, we can write $\alpha_{d_{x}} = A e_{d_{x}}$, where $e_{d_{x}}$ is the indicator vector of the domain $d_{x}$ of image x and A is a matrix whose columns are the parameter vectors $\alpha_{d}$. As shown later, it is often easy to construct an auxiliary network that can predict d from x, so that the parameter $\alpha(x)$ can also be expressed as the output of a neural network. If d is known, then $\alpha(x, d) = \alpha_{d}$ as before, and if not, $\alpha$ can be constructed as suggested above or from scratch as done in [2].
1705.08045#11
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.08045
12
The result of this construction is a network $\phi_{\alpha(x)}(x)$ whose parameters are predicted by a second network $\alpha(x)$. As noted in [2], while this construction is conceptually simple, its implementation is more subtle. Recall that the parameters w of a deep convolutional neural network consist primarily of the coefficients of the linear filters in the convolutional layers. If $w = \alpha$, then $\alpha = \alpha(x)$ would need to predict millions of parameters (or to learn independent models when d is observed). The solution of [2] is to use a low-rank decomposition of the filters, where $w = w(w_{0}, \theta)$ is a function of a filter basis $w_{0}$ and $\theta$ is a small set of tunable parameters.
1705.08045#12
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
13
# 3.2 Layer-Wise Error During layer-wise pruning in layer l, the input $\mathbf{Y}^{l-1}$ is fixed as the same as in the well-trained network. Suppose we set the q-th element of $\mathbf{\Theta}_{l}$, denoted by $\mathbf{\Theta}_{l}[q]$, to be zero, and get a new parameter vector, denoted by $\hat{\mathbf{\Theta}}_{l}$. With $\mathbf{Y}^{l-1}$, we obtain a new output for layer l, denoted by $\hat{\mathbf{Y}}^{l}$. Consider the root of mean square error between $\hat{\mathbf{Y}}^{l}$ and $\mathbf{Y}^{l}$ over the whole training data as the layer-wise error: $\varepsilon^{l} = \sqrt{\frac{1}{n}\sum_{j=1}^{n}(\hat{\mathbf{y}}_{j}^{l}-\mathbf{y}_{j}^{l})^{\top}(\hat{\mathbf{y}}_{j}^{l}-\mathbf{y}_{j}^{l})} = \frac{1}{\sqrt{n}}\|\hat{\mathbf{Y}}^{l}-\mathbf{Y}^{l}\|_{F}, \quad (2)$ ¹For simplicity in presentation, we suppose the neural network is a feed-forward (fully-connected) network. In Section 3.4, we will show how to extend our method to filter layers in Convolutional Neural Networks.
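Given the pre- and post-pruning layer outputs, the layer-wise error of (2) is straightforward to compute; a NumPy sketch (the magnitude-based pruning used only to build the example, and the variable names, are our assumptions).

```python
import numpy as np

def layerwise_error(W_l, W_l_pruned, Y_prev):
    """epsilon^l = (1/sqrt(n)) * ||Yhat^l - Y^l||_F, with Y^l = relu(W_l^T Y^{l-1})."""
    n = Y_prev.shape[1]
    Y_l = np.maximum(W_l.T @ Y_prev, 0.0)           # original layer output
    Y_hat = np.maximum(W_l_pruned.T @ Y_prev, 0.0)  # output after pruning this layer only
    return np.linalg.norm(Y_hat - Y_l, "fro") / np.sqrt(n)

rng = np.random.default_rng(1)
Y_prev = rng.normal(size=(16, 100))                 # m_{l-1} = 16 units, n = 100 samples
W = rng.normal(size=(16, 8))                        # m_l = 8 output units
W_pruned = W.copy()
W_pruned[np.abs(W_pruned) < 0.1] = 0.0              # naive magnitude pruning, for illustration only
print(layerwise_error(W, W_pruned, Y_prev))
```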
1705.07565#13
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
13
Here we build on the same idea, with some important extensions. First, we note that linearly parametrizing a filter bank is the same as introducing a new, intermediate convolutional layer in the network. Specifically, let $F_{k} \in \mathbb{R}^{H_{f}\times W_{f}\times C_{f}}$ be a basis of K filters of size $H_{f}\times W_{f}$ operating on $C_{f}$ input feature channels. Given parameters $[\alpha_{tk}] \in \mathbb{R}^{T\times K}$, we can express a bank of T filters as linear combinations $G_{t} = \sum_{k=1}^{K}\alpha_{tk}F_{k}$. Applying the bank to a tensor x and using associativity and linearity of convolution results in $G_{t} * x = \sum_{k=1}^{K}\alpha_{tk}(F_{k} * x)$, i.e. $G * x = \alpha * F * x$, where we interpret $\alpha$ as a $1\times 1\times T\times K$ filter bank. While [2] used a slightly different low-rank filter decomposition, their parametrization can also be seen as introducing additional filtering layers in the network. An advantage of this parametrization is that it results in a useful decomposition, where part of the convolutional layers contain the domain-agnostic parameters F and the others contain the domain-specific ones $\alpha_{d}$. As discussed in section 5, this is particularly useful to address the forgetting problem. In the next section we refine these ideas to obtain an effective parametrization of residual networks. # 3.2 Residual adapter modules
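The equivalence between a linearly parametrized filter bank and an extra 1 x 1 convolution can be checked numerically; a PyTorch sketch with no padding and no biases (the tensor shapes are assumptions made for the example).

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
K, T, C_f, h = 6, 4, 3, 3                  # basis size, output filters, input channels, kernel size
basis = torch.randn(K, C_f, h, h)          # K domain-agnostic basis filters F_k
alpha = torch.randn(T, K)                  # domain-specific coefficients [alpha_tk]
x = torch.randn(2, C_f, 16, 16)

# Option 1: build the combined bank G_t = sum_k alpha_tk F_k and convolve once.
G = torch.einsum("tk,kchw->tchw", alpha, basis)
out_combined = F.conv2d(x, G)                                         # G * x

# Option 2: convolve with the basis F, then apply alpha as a 1x1xTxK convolution.
out_factored = F.conv2d(F.conv2d(x, basis), alpha.view(T, K, 1, 1))   # alpha * (F * x)

print(torch.allclose(out_combined, out_factored, atol=1e-5))          # True
```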
1705.08045#13
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
14
$\varepsilon^{l} = \sqrt{\frac{1}{n}\sum_{j=1}^{n}(\hat{\mathbf{y}}_{j}^{l}-\mathbf{y}_{j}^{l})^{\top}(\hat{\mathbf{y}}_{j}^{l}-\mathbf{y}_{j}^{l})} = \frac{1}{\sqrt{n}}\|\hat{\mathbf{Y}}^{l}-\mathbf{Y}^{l}\|_{F}, \quad (2)$ where $\|\cdot\|_{F}$ is the Frobenius norm. Note that for any single parameter pruning, one can compute its error $\varepsilon_{q}^{l}$, $1 \le q \le m_{l-1}m_{l}$, and use it as a pruning criterion. This idea has been adopted by some existing methods [15]. However, in this way, for each parameter at each layer, one has to pass the whole training data once to compute its error measure, which is very computationally expensive. A more efficient approach is to make use of the second order derivatives of the error function to help identify the importance of each parameter. We first define an error function $E(\cdot)$ as $E(\hat{\mathbf{Z}}^{l}) = \frac{1}{n}\|\hat{\mathbf{Z}}^{l}-\mathbf{Z}^{l}\|_{F}^{2}, \quad (3)$ where $\mathbf{Z}^{l}$ is the outcome of the weighted sum operation right before performing the activation function $\sigma(\cdot)$ at layer l of the well-trained neural network, and $\hat{\mathbf{Z}}^{l}$ is the outcome of the weighted sum operation after pruning at layer l. Note that $\mathbf{Z}^{l}$ is considered as the desired output of layer l before activation. The following lemma shows that the layer-wise error is bounded by the error defined in (3). Lemma 3.1. With the error function (3) and $\mathbf{Y}^{l} = \sigma(\mathbf{Z}^{l})$, the following holds: $\varepsilon^{l} \le \sqrt{E(\hat{\mathbf{Z}}^{l})}$.
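One way to see why the lemma should hold, assuming $E(\hat{\mathbf{Z}}^{l})$ is the mean squared pre-activation error as reconstructed in (3): the ReLU is 1-Lipschitz, so the post-activation gap can never exceed the pre-activation gap.

```latex
% ReLU is 1-Lipschitz: |\sigma(a)-\sigma(b)| \le |a-b| elementwise, hence
\varepsilon^{l}
 = \tfrac{1}{\sqrt{n}}\,\big\|\sigma(\hat{\mathbf{Z}}^{l})-\sigma(\mathbf{Z}^{l})\big\|_F
 \;\le\; \tfrac{1}{\sqrt{n}}\,\big\|\hat{\mathbf{Z}}^{l}-\mathbf{Z}^{l}\big\|_F
 \;=\; \sqrt{E(\hat{\mathbf{Z}}^{l})}.
```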
1705.07565#14
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
14
# 3.2 Residual adapter modules As an example of parametric network, we propose to modify a standard residual network. Recall that a ResNet is a chain $g_{M}\circ\cdots\circ g_{1}$ of residual modules $g_{i}$. In the simplest variant of the model, each residual module g takes as input a tensor $x \in \mathbb{R}^{H\times W\times C}$ and produces as output a tensor of the same size using $g(x; w) = x + ((w_{2}*\cdot)\circ[\cdot]_{+}\circ(w_{1}*\cdot))(x)$. Here $w_{1}$ and $w_{2}$ are the coefficients of banks of small linear filters, $[z]_{+} = \max\{0, z\}$ is the ReLU operator, $w * z$ is the convolution of z by the filter bank w, and $\circ$ denotes function composition. Note that, for the addition to make sense, the filters must be configured such that the dimensions of the output of the last bank are the same as those of the input x. Our goal is to parametrize the ResNet module. As suggested in the previous section, rather than changing the filter coefficients directly, we introduce additional parametric convolutional layers. In fact, we go one step beyond and make them small residual modules in their own right and call them residual adapter modules (blue blocks in fig. 2). These modules have the form: $g(x; \alpha) = x + \alpha * x$.
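A minimal PyTorch sketch of the adapter $g(x;\alpha) = x + \alpha * x$ with a bank of 1 x 1 filters; it omits the batch normalization discussed below, and the zero initialization (making the adapter start as the identity) is our illustrative choice, not a statement of the paper's exact recipe.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """g(x; alpha) = x + alpha * x, where alpha is a bank of 1x1 filters."""
    def __init__(self, channels: int):
        super().__init__()
        self.alpha = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.alpha.weight)   # zero coefficients => adapter is the identity

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.alpha(x)

adapter = ResidualAdapter(channels=64)
x = torch.randn(2, 64, 16, 16)
print(torch.allclose(adapter(x), x))        # True at initialization (identity mapping)
```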
1705.08045#14
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
15
Lemma 3.1. With the error function (3) and $\mathbf{Y}^{l} = \sigma(\mathbf{Z}^{l})$, the following holds: $\varepsilon^{l} \le \sqrt{E(\hat{\mathbf{Z}}^{l})}$. Therefore, finding parameters whose deletion (setting to zero) minimizes (2) can be translated to finding parameters whose deletion minimizes the error function (3). Following [12, 13], the error function can be approximated by a functional Taylor series as follows, $E(\hat{\mathbf{Z}}^{l}) - E(\mathbf{Z}^{l}) = \delta E^{l} = \left(\frac{\partial E^{l}}{\partial\mathbf{\Theta}_{l}}\right)^{\top}\delta\mathbf{\Theta}_{l} + \frac{1}{2}\,\delta\mathbf{\Theta}_{l}^{\top}\mathbf{H}_{l}\,\delta\mathbf{\Theta}_{l} + O(\|\delta\mathbf{\Theta}_{l}\|^{3}), \quad (4)$ where $\delta$ denotes a perturbation of a corresponding variable, $\mathbf{H}_{l} = \partial^{2}E^{l}/\partial\mathbf{\Theta}_{l}^{2}$ is the Hessian matrix w.r.t. $\mathbf{\Theta}_{l}$, and $O(\|\delta\mathbf{\Theta}_{l}\|^{3})$ is the third- and all higher-order terms. It can be proven that, with the error function defined in (3), the first (linear) term $\frac{\partial E^{l}}{\partial\mathbf{\Theta}_{l}}\big|_{\hat{\mathbf{\Theta}}_{l}=\mathbf{\Theta}_{l}}$ and $O(\|\delta\mathbf{\Theta}_{l}\|^{3})$ are equal to 0.
1705.07565#15
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
15
residual adapter modules (blue blocks in fig. 2). These modules have the form: $g(x; \alpha) = x + \alpha * x$. In order to limit the number of domain-specific parameters, $\alpha$ is selected to be a bank of $1\times 1$ filters. A major advantage of adopting a residual architecture for the adapter modules is that the adapters reduce to the identity function when their coefficients are zero. When learning the adapters on small domains, this provides a simple way of controlling over-fitting, resulting in substantially improved performance in some cases. Batch normalization and scaling. Batch Normalization (BN) [15] is an important part of very deep neural networks. This module is usually inserted after convolutional layers in order to normalize their outputs and facilitate learning (fig. 2). The normalization operation is followed by rescaling and shift operations $s \odot x + b$, where (s, b) are learnable parameters. In our architecture, we incorporate the BN layers into the adapter modules (fig. 2). Furthermore, we add a BN module right before the adapter convolution layer.¹ Note that the BN scale and bias parameters are also dataset-dependent; as noted in the experiments, this alone provides a certain degree of model adaptation.
1705.08045#15
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
16
Suppose every time one aims to find a parameter $\mathbf{\Theta}_{l}[q]$ to set to zero such that the change $\delta E^{l}$ is minimal. Similar to OBS, we can formulate it as the following optimization problem: $\min_{q}\ \frac{1}{2}\,\delta\mathbf{\Theta}_{l}^{\top}\mathbf{H}_{l}\,\delta\mathbf{\Theta}_{l}, \;\; \text{s.t.} \;\; \mathbf{e}_{q}^{\top}\delta\mathbf{\Theta}_{l} + \mathbf{\Theta}_{l}[q] = 0, \quad (5)$ where $\mathbf{e}_{q}$ is the unit selecting vector whose q-th element is 1 and otherwise 0. By using the Lagrange multipliers method as suggested in [21], we obtain the closed-form solutions of the optimal parameter pruning and the resultant minimal change in the error function as follows, $\delta\mathbf{\Theta}_{l} = -\frac{\mathbf{\Theta}_{l}[q]}{[\mathbf{H}_{l}^{-1}]_{qq}}\,\mathbf{H}_{l}^{-1}\mathbf{e}_{q}, \qquad L_{q} = \delta E^{l} = \frac{(\mathbf{\Theta}_{l}[q])^{2}}{2\,[\mathbf{H}_{l}^{-1}]_{qq}}. \quad (6)$
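A NumPy sketch of one layer-wise pruning step implementing (5)-(6); the damping added before inverting the Hessian is our addition for numerical stability, not part of the formulation, and the toy Hessian is made up.

```python
import numpy as np

def prune_one_weight(theta, H, damping=1e-6):
    """One layer-wise OBS step: pick the weight with the smallest sensitivity
    L_q = theta_q^2 / (2 [H^-1]_qq) and apply the closed-form update (6)."""
    H_inv = np.linalg.inv(H + damping * np.eye(H.shape[0]))
    sensitivity = theta ** 2 / (2.0 * np.diag(H_inv))
    q = int(np.argmin(sensitivity))
    delta = -(theta[q] / H_inv[q, q]) * H_inv[:, q]   # delta Theta_l
    theta_new = theta + delta
    theta_new[q] = 0.0                                # enforce the constraint exactly
    return theta_new, q, float(sensitivity[q])

# Toy example with a random positive semi-definite Hessian.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10))
H = A @ A.T / 10.0
theta = rng.normal(size=10)
theta_pruned, q, L_q = prune_one_weight(theta, H)
print(q, L_q, theta_pruned[q])                        # pruned index, its sensitivity, 0.0
```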
1705.07565#16
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
16
Domain-agnostic vs domain-specific parameters. If the residual module of fig. 2 is configured to process an input tensor with C feature channels, and if the domain-agnostic filters $w_{1}, w_{2}$ are of size $h\times h\times C$, then the model has $2(h^{2}C^{2} + hC)$ domain-agnostic parameters (including biases in the convolutional layers) and $2(C^{2} + 5C)$ domain-specific parameters.² Hence, there are approximately $h^{2}$ times more domain-agnostic parameters than domain-specific ones (usually $h^{2} = 9$). # 3.3 Sequential learning and avoiding forgetting While in this paper we are not concerned with sequential learning, we have found it to be a good strategy to bootstrap a model when a large number of domains have to be learned. However, the most popular approach to sequential learning, fine-tuning (section 2), is often a poor choice for learning shared representations as it tends to quickly forget the original tasks.
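Plugging concrete channel counts into these formulas shows the ratio approaching $h^{2}$; a quick check with $h = 3$ and the channel widths used in the experiments (the per-term breakdown in the comments is our reading of the counts above).

```python
def adapter_param_counts(C, h=3):
    agnostic = 2 * (h * h * C * C + h * C)   # two h x h x C filter banks plus their biases
    specific = 2 * (C * C + 5 * C)           # two 1x1 adapters plus bias/scale vectors
    return agnostic, specific

for C in (64, 128, 256):
    a, s = adapter_param_counts(C)
    print(C, a, s, round(a / s, 2))          # ratio approaches h^2 = 9 as C grows
```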
1705.08045#16
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
17
Here $L_{q}$ is referred to as the sensitivity of parameter $\mathbf{\Theta}_{l}[q]$. Then we select parameters to prune based on their sensitivity scores instead of their magnitudes. As mentioned in Section 2, magnitude-based criteria, which merely consider the numerator in (6), are a poor estimation of the sensitivity of parameters. Moreover, in (6), as the inverse Hessian matrix over the training data is involved, it is able to capture the data distribution when measuring sensitivities of parameters. After pruning the parameter $\mathbf{\Theta}_{l}[q]$ with the smallest sensitivity, the parameter vector is updated via $\hat{\mathbf{\Theta}}_{l} = \mathbf{\Theta}_{l} + \delta\mathbf{\Theta}_{l}$. With Lemma 3.1 and (6), we have that the layer-wise error for layer l is bounded by $\varepsilon^{l} \le \sqrt{E(\hat{\mathbf{Z}}^{l})} = \sqrt{E(\hat{\mathbf{Z}}^{l}) - E(\mathbf{Z}^{l})} = \sqrt{\delta E^{l}} = \frac{|\mathbf{\Theta}_{l}[q]|}{\sqrt{2\,[\mathbf{H}_{l}^{-1}]_{qq}}}. \quad (7)$ Note that the first equality is obtained because of the fact that $E(\mathbf{Z}^{l}) = 0$. It is worth mentioning that, although we merely focus on layer l, the Hessian matrix is still a square matrix of size $m_{l-1}m_{l}$ for each layer; we show how to reduce the cost of computing and inverting it in Section 3.4.
1705.07565#17
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
17
The challenge in learning without forgetting is to maintain information about older tasks as new ones are learned (section 2). With respect to forgetting, our adapter modules are similar to the tower model [33] as they preserve the original model exactly: one can pre-train the domain-agnostic parameters w on a large domain such as ImageNet, and then fine-tune only the domain-specific parameters $\alpha_{d}$ for each new domain. Like the tower method, this preserves the original task exactly, but it is far less expensive as it does not require introducing new feature channels for each new domain (a quadratic cost). Furthermore, the residual modules naturally reduce to the identity function when sufficient shrinking regularization is applied to the adapter weights $\alpha$. This allows the adapter to be tuned depending on the availability of data for a target domain, sometimes significantly reducing overfitting. # 4 Visual decathlon In this section we introduce a new benchmark, called visual decathlon, to evaluate the performance of algorithms in multiple-domain learning. The goal of the benchmark is to assess whether a method can successfully learn to perform well in several different domains at the same time. We do so by choosing ten representative visual domains, from Internet images to characters, as well as by selecting an evaluation metric that rewards performing well on all tasks.
1705.08045#17
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
18
# 3.3 Layer-Wise Error Propagation and Accumulation So far, we have shown how to prune parameters for each layer and estimate their introduced errors independently. However, our aim is to control the consistency of the network's final output $\mathbf{Y}^{L}$ before and after pruning. To do this, in the following we show how the layer-wise errors propagate to the final output layer, and that the accumulated error over multiple layers will not explode. Theorem 3.2. Given a deep network pruned via the layer-wise pruning introduced in Section 3.2, where each layer has its own layer-wise error $\varepsilon^{l}$ for $1 \le l \le L$, the accumulated error of the ultimate network output, $\tilde{\varepsilon}^{L} = \frac{1}{\sqrt{n}}\|\tilde{\mathbf{Y}}^{L}-\mathbf{Y}^{L}\|_{F}$, obeys: $\tilde{\varepsilon}^{L} \le \sum_{k=1}^{L-1}\Big(\prod_{l=k+1}^{L}\|\tilde{\mathbf{\Theta}}_{l}\|_{F}\Big)\sqrt{\delta E^{k}} + \sqrt{\delta E^{L}}, \quad (8)$ where $\tilde{\mathbf{Y}}^{l} = \sigma(\tilde{\mathbf{W}}_{l}^{\top}\tilde{\mathbf{Y}}^{l-1})$, for $2 \le l \le L$, denotes the 'accumulated pruned output' of layer l, and $\tilde{\mathbf{Y}}^{1} = \sigma(\tilde{\mathbf{W}}_{1}^{\top}\mathbf{X})$.
1705.07565#18
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
18
Datasets. The decathlon challenge combines ten well-known datasets from multiple visual domains: FGVC-Aircraft Benchmark [24] contains 10,000 images of aircraft, with 100 images for each of 100 different aircraft model variants such as Boeing 737-400 and Airbus A310. CIFAR100 [19] contains 60,000 32 × 32 colour images for 100 object categories. Daimler Mono Pedestrian Classification Benchmark (DPed) [26] consists of 50,000 grayscale pedestrian and non-pedestrian images, cropped and resized to 18 × 36 pixels. Describable Texture Dataset (DTD) [7] is a texture database, consisting of 5640 images, organized according to a list of 47 terms (categories) such as bubbly, cracked, ¹While the bias and scale parameters of the latter can be incorporated in the following filter bank, we found it easier to leave them separated from the latter. ²Including all bias and scaling vectors; $2(C^{2} + 3C)$ if these are absorbed in the filter banks when possible.
1705.08045#18
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
19
Theorem 3.2 shows that: 1) the layer-wise error for a layer l will be scaled by the continued multiplication of the parameters' Frobenius norms over the following layers when it propagates to the final output, i.e., over the L − l layers after the l-th layer; 2) the final error of the ultimate network output is bounded by a weighted sum of the layer-wise errors. The proof of Theorem 3.2 can be found in the Appendix. Consider a general case with (7) and (8): the parameter $\mathbf{\Theta}_{l}[q]$ that has the smallest sensitivity in layer l is pruned by the i-th pruning operation, and this finally adds $\big(\prod_{k=l+1}^{L}\|\tilde{\mathbf{\Theta}}_{k}\|_{F}\big)\sqrt{\delta E^{l}}$ to the ultimate network output error. It is worth mentioning that, although it seems that the layer-wise error is scaled by a quite large product factor, $S_{l} = \prod_{k=l+1}^{L}\|\tilde{\mathbf{\Theta}}_{k}\|_{F}$, when it propagates to the final layer, this scaling is still tractable in practice because the ultimate network output is also scaled by the same product factor compared with the output of layer l. For example, we can easily estimate the norm of the ultimate network output via $\|\mathbf{Y}^{L}\|_{F} \approx S_{l}\|\hat{\mathbf{Y}}^{l}\|_{F}$. If one pruning operation in the 1st layer causes the layer-wise error $\sqrt{\delta E^{1}}$, then the relative ultimate output error is
1705.07565#19
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
19
²Including all bias and scaling vectors; $2(C^{2} + 3C)$ if these are absorbed in the filter banks when possible. marbled. The German Traffic Sign Recognition (GTSR) Benchmark [36] contains cropped images for 43 common traffic sign categories in different image resolutions. Flowers102 [28] is a fine-grained classification task which contains 102 flower categories from the UK, each consisting of between 40 and 258 images. ILSVRC12 (ImNet) [32], the largest dataset in our benchmark, contains 1000 categories and 1.2 million images. Omniglot [20] consists of 1623 different handwritten characters from 50 different alphabets. Although the dataset is designed for one-shot learning, we use the dataset for a standard multi-class classification task and include all the character categories in the train and test splits. The Street View House Numbers (SVHN) [27] is a real-world digit recognition dataset with around 70,000 32 × 32 images. UCF101 [35] is an action recognition dataset of realistic human action videos, collected from YouTube. It contains 13,320 videos for 101 action categories. In order to make this dataset compatible with our benchmark, we convert the videos into images by using the Dynamic Image encoding of [3], which summarizes each video into an image based on a ranking principle.
1705.08045#19
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
20
$\frac{\|\tilde{\mathbf{Y}}^{L}-\mathbf{Y}^{L}\|_{F}}{\|\mathbf{Y}^{L}\|_{F}} \approx \frac{\sqrt{\delta E^{1}}}{\|\hat{\mathbf{Y}}^{1}\|_{F}}.$ Thus, we can see that even if $S_{1}$ is quite large, the relative ultimate output error would still be about $\sqrt{\delta E^{1}}/\|\hat{\mathbf{Y}}^{1}\|_{F}$, which is controllable in practice, especially when most modern deep networks adopt a maxout layer as the ultimate output. Actually, $S_{0}$ is called the network gain, representing the ratio of the magnitude of the network output to the magnitude of the network input. # 3.4 The Proposed Algorithm # 3.4.1 Pruning on Fully-Connected Layers To selectively prune parameters, our approach needs to compute the inverse Hessian matrix at each layer to measure the sensitivities of the parameters of that layer, which is still computationally expensive though tractable. In this section, we present an efficient algorithm that can reduce the size of the Hessian matrix and thus speed up the computation of its inverse.
1705.07565#20
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
20
Challenge and evaluation. Each dataset $D_{d}$, $d = 1, \ldots, 10$, is formed of pairs $(x, y) \in D_{d}$ where x is an image and $y \in \{1, \ldots, C_{d}\} = \mathcal{Y}_{d}$ is a label. For each dataset, we specify training, validation and test subsets. The goal is to train the best possible model to address all ten classification tasks using only the provided training and validation data (no external data is allowed). A model $\Phi$ is evaluated on the test data, where, given an image x and its ground-truth domain $d_{x}$, it has to predict the corresponding label $y = \Phi(x, d_{x}) \in \mathcal{Y}_{d_{x}}$. Performance is measured in terms of a single scalar score S determined as in the decathlon discipline. Performing well at this metric requires algorithms to perform well in all tasks, compared to a minimum level of baseline performance for each. In detail, S is computed as follows: $S = \sum_{d=1}^{10}\alpha_{d}\max\{0,\, E_{d}^{\max} - E_{d}\}^{\gamma_{d}}, \qquad E_{d} = \frac{1}{|D_{d}^{\text{test}}|}\sum_{(x,y)\in D_{d}^{\text{test}}}\mathbf{1}\{y \ne \Phi(x, d)\}. \quad (1)$
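A direct transcription of the scoring rule into Python, using the coefficient choice $\alpha_{d} = 1000\,(E_{d}^{\max})^{-\gamma_{d}}$ and $\gamma_{d} = 2$ described in the evaluation protocol; the example errors are made up.

```python
def decathlon_score(test_errors, baseline_errors, gamma=2.0):
    """S = sum_d alpha_d * max(0, E_d^max - E_d)^gamma_d, with
    alpha_d = 1000 / (E_d^max)^gamma_d so that a perfect domain scores 1000 points."""
    score = 0.0
    for E_d, E_max in zip(test_errors, baseline_errors):
        alpha_d = 1000.0 / (E_max ** gamma)
        score += alpha_d * max(0.0, E_max - E_d) ** gamma
    return score

# Example with two hypothetical domains; errors are fractions in [0, 1].
print(decathlon_score(test_errors=[0.30, 0.10], baseline_errors=[0.60, 0.40]))
```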
1705.08045#20
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
21
For each layer l, according to the definition of the error function used in Lemma 3.1, the first derivative of the error function with respect to $\hat{\mathbf{\Theta}}_{l}$ is $\frac{\partial E^{l}}{\partial\hat{\mathbf{\Theta}}_{l}} = \frac{1}{n}\sum_{j=1}^{n}\frac{\partial\hat{\mathbf{z}}_{j}^{l}}{\partial\hat{\mathbf{\Theta}}_{l}}(\hat{\mathbf{z}}_{j}^{l}-\mathbf{z}_{j}^{l})$, where $\hat{\mathbf{z}}_{j}^{l}$ and $\mathbf{z}_{j}^{l}$ are the j-th columns of the matrices $\hat{\mathbf{Z}}^{l}$ and $\mathbf{Z}^{l}$, respectively, and the Hessian matrix is defined as $\mathbf{H}_{l} \equiv \frac{\partial^{2}E^{l}}{\partial\hat{\mathbf{\Theta}}_{l}^{2}} = \frac{1}{n}\sum_{j=1}^{n}\Big[\frac{\partial\hat{\mathbf{z}}_{j}^{l}}{\partial\hat{\mathbf{\Theta}}_{l}}\Big(\frac{\partial\hat{\mathbf{z}}_{j}^{l}}{\partial\hat{\mathbf{\Theta}}_{l}}\Big)^{\top} + \frac{\partial^{2}\hat{\mathbf{z}}_{j}^{l}}{\partial\hat{\mathbf{\Theta}}_{l}^{2}}(\hat{\mathbf{z}}_{j}^{l}-\mathbf{z}_{j}^{l})\Big]$. Note that for most cases $\hat{\mathbf{z}}_{j}^{l}$ is quite close to $\mathbf{z}_{j}^{l}$, so we simply ignore the term containing $\hat{\mathbf{z}}_{j}^{l}-\mathbf{z}_{j}^{l}$. Even in the late stage of pruning when this difference is not small, we can still ignore the corresponding term [13]. For layer l that has $m_{l}$ output units, the Hessian matrix is then approximated by $\mathbf{H}_{l} = \frac{1}{n}\sum_{j=1}^{n}\frac{\partial\hat{\mathbf{z}}_{j}^{l}}{\partial\hat{\mathbf{\Theta}}_{l}}\Big(\frac{\partial\hat{\mathbf{z}}_{j}^{l}}{\partial\hat{\mathbf{\Theta}}_{l}}\Big)^{\top}, \quad (9)$ whose block structure is illustrated in Figure 1.
1705.07565#21
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
21
where $E_{d}$ is the average test error for each domain and $E_{d}^{\max}$ is the baseline error (section 5), above which no points are scored. The exponent $\gamma_{d} > 1$ rewards more reductions of the classification error as this becomes close to zero, and is set to $\gamma_{d} = 2$ for all domains. The coefficient $\alpha_{d}$ is set to $1{,}000\,(E_{d}^{\max})^{-\gamma_{d}}$ so that a perfect result receives a score of 1,000 (10,000 in total). Data preprocessing. Different domains contain a different set of image classes as well as a different number of images. In order to reduce the computational burden, all images have been resized isotropically to have a shorter side of 72 pixels. For some datasets such as ImageNet, this is a substantial reduction in resolution which makes training models much faster (but still sufficient to obtain excellent classification results with baseline models). For the datasets for which there exist training, validation, and test subsets, we keep the original splits. For the rest, we use 60%, 20% and 20% of the data for training, validation, and test respectively. For ILSVRC12, since the test labels are not available, we use the original validation subset as the test subset and randomly sample a new validation set from the training split. We are planning to make the data and an evaluation server public soon. # 5 Experiments
1705.08045#21
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
22
$\mathbf{H}_{l} = \frac{1}{n}\sum_{j=1}^{n}\frac{\partial\hat{\mathbf{z}}_{j}^{l}}{\partial\hat{\mathbf{\Theta}}_{l}}\Big(\frac{\partial\hat{\mathbf{z}}_{j}^{l}}{\partial\hat{\mathbf{\Theta}}_{l}}\Big)^{\top}, \quad (9)$ with $\mathbf{H} \in \mathbb{R}^{12\times 12}$ and $\mathbf{H}_{11}, \mathbf{H}_{22}, \mathbf{H}_{33} \in \mathbb{R}^{4\times 4}$ in the example of Figure 1. Figure 1: Illustration of the shape of the Hessian. For feed-forward neural networks, unit $z_{i}$ gets its activation via forward propagation: $\mathbf{z} = \mathbf{W}^{\top}\mathbf{y}$, where $\mathbf{W} \in \mathbb{R}^{4\times 3}$, $\mathbf{y} = [y_{1}, y_{2}, y_{3}, y_{4}]^{\top} \in \mathbb{R}^{4\times 1}$, and $\mathbf{z} = [z_{1}, z_{2}, z_{3}]^{\top} \in \mathbb{R}^{3\times 1}$. Then the Hessian matrix of $z_{1}$ w.r.t. all parameters is denoted by $\mathbf{H}^{[1]}$. As illustrated in the figure, $\mathbf{H}^{[1]}$'s elements are zero except for those corresponding to $\mathbf{W}_{\cdot 1}$ (the 1st column of $\mathbf{W}$), which is denoted by $\mathbf{H}_{11}$. $\mathbf{H}^{[2]}$ and $\mathbf{H}^{[3]}$ are similar. More importantly, $\mathbf{H}^{-1} = \mathrm{diag}(\mathbf{H}_{11}^{-1}, \mathbf{H}_{22}^{-1}, \mathbf{H}_{33}^{-1})$, and $\mathbf{H}_{11} = \mathbf{H}_{22} = \mathbf{H}_{33}$. As a result, one only needs to compute $\mathbf{H}_{11}$ to obtain $\mathbf{H}^{-1}$, which significantly reduces the computational complexity.
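Because $\hat{\mathbf{z}}^{l}_{j} = \mathbf{W}_{l}^{\top}\mathbf{y}^{l-1}_{j}$, the gradient of output unit q with respect to its own column $\mathbf{W}_{\cdot q}$ is just $\mathbf{y}^{l-1}_{j}$, so under the approximation (9) every diagonal block of $\mathbf{H}_{l}$ equals $\frac{1}{n}\sum_{j}\mathbf{y}^{l-1}_{j}(\mathbf{y}^{l-1}_{j})^{\top}$. The NumPy sketch below uses this to obtain all sensitivities from a single $m_{l-1}\times m_{l-1}$ inverse; the damping term is our addition for numerical stability.

```python
import numpy as np

def layer_sensitivities(W_l, Y_prev, damping=1e-6):
    """Sensitivities L_q = w_q^2 / (2 [H^-1]_qq) for every weight of one layer,
    using the shared diagonal block H_blk = (1/n) * Y^{l-1} Y^{l-1 T}."""
    m_prev, n = Y_prev.shape
    H_blk = Y_prev @ Y_prev.T / n                      # (m_{l-1}, m_{l-1}) instead of (m_{l-1} m_l)^2
    H_blk_inv = np.linalg.inv(H_blk + damping * np.eye(m_prev))
    inv_diag = np.diag(H_blk_inv)[:, None]             # [H^-1]_qq depends only on the input index
    return W_l ** 2 / (2.0 * inv_diag)                 # same shape as W_l

rng = np.random.default_rng(0)
Y_prev = rng.normal(size=(16, 200))                    # m_{l-1} = 16 units, n = 200 samples
W_l = rng.normal(size=(16, 8))                         # m_l = 8 output units
L = layer_sensitivities(W_l, Y_prev)
print(L.shape, np.unravel_index(np.argmin(L), L.shape))  # the weight most eligible for pruning
```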
1705.07565#22
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
22
# 5 Experiments

In this section we evaluate our method quantitatively against several baselines (section 5.1) and investigate the ability of the proposed techniques to learn models for ten very diverse visual domains.

Implementation details. In all experiments we choose to use the powerful ResNets [13] as base architectures due to their remarkable performance. In particular, as a compromise between accuracy and speed, we chose the ResNet28 model [40], which consists of three blocks of four residual units. Each residual unit contains 3 × 3 convolutional, BN and ReLU modules (fig. 2). The network accepts 64 × 64 images as input, downscales the spatial dimensions by two at each block, and ends with a global average pooling and a classifier layer followed by a softmax. We set the number of filters to 64, 128, 256 for these blocks respectively. Each network is optimized to minimize its cross-entropy loss with stochastic gradient descent. The network is run for 80 epochs, and the initial learning rate of 0.1 is gradually lowered to 0.01 and then 0.001.

Model | #par. | ImNet Airc. C100 DPed DTD GTSR Flwr OGlt SVHN UCF | mean | S
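For concreteness, a rough PyTorch sketch of a network of this shape follows the implementation details above: three blocks of four residual units with 64/128/256 filters, downscaling by two at each block, global average pooling and a linear classifier trained with SGD. It is only an approximation, not the authors' exact ResNet28: the exact unit layout of fig. 2 and the adapters are not reproduced, and the momentum value and learning-rate milestones are assumptions.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """A simplified residual unit: 3x3 conv -> BN -> ReLU with a skip connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.skip = (nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
                     if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)) + self.skip(x))

class SmallResNet(nn.Module):
    """Three blocks of four residual units (64/128/256 filters) for 64x64 inputs."""
    def __init__(self, num_classes, widths=(64, 128, 256), units_per_block=4):
        super().__init__()
        units, in_ch = [nn.Conv2d(3, widths[0], 3, padding=1, bias=False)], widths[0]
        for w in widths:
            for u in range(units_per_block):
                units.append(ResidualUnit(in_ch, w, stride=2 if u == 0 else 1))
                in_ch = w
        self.features = nn.Sequential(*units)
        self.classifier = nn.Linear(in_ch, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).mean(dim=(2, 3)))  # global average pooling

model = SmallResNet(num_classes=100)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
# Learning rate 0.1 -> 0.01 -> 0.001 over 80 epochs; the milestone epochs are assumptions.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40, 60], gamma=0.1)
```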
1705.08045#22
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
23
where the Hessian matrix for a single instance j at layer l, H_l^j, is a block diagonal square matrix of the size m_{l-1} m_l. Specifically, the gradient of the first output unit z_{l1}^j w.r.t. Θ_l is ∂z_{l1}^j/∂Θ_l = [∂z_{l1}^j/∂w_1, ..., ∂z_{l1}^j/∂w_{m_l}], where w_i is the i-th column of W_l. As z_{l1}^j is the layer output before the activation function, its gradient is simple to calculate, and more importantly all output units' gradients are equal to the layer input: ∂z_{lk}^j/∂w_i = y_{l-1}^j if k = i, otherwise ∂z_{lk}^j/∂w_i = 0. An illustrated example is shown in Figure 1, where we ignore the scripts j and l for simplicity in presentation. It can be shown that the block diagonal square matrix H_l^j's diagonal blocks are all equal to y_{l-1}^j (y_{l-1}^j)^⊤, and the inverse Hessian matrix H_l^{-1} is also a block diagonal square matrix with its diagonal blocks being ((1/n) Σ_j y_{l-1}^j (y_{l-1}^j)^⊤)^{-1}, where Ψ_l = (1/n) Σ_j y_{l-1}^j (y_{l-1}^j)^⊤ is normally degenerate and its pseudo-inverse can be calculated recursively via the Woodbury matrix identity [13]:
1705.07565#23
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
23
Model             | #par. | ImNet Airc. C100  DPed  DTD   GTSR  Flwr  OGlt  SVHN  UCF   | mean  | S
# images          |       | 1.3m  7k    50k   30k   4k    40k   2k    26k   70k   9k    |       |
Scratch           | 10x   | 59.87 57.10 75.73 91.20 37.77 96.55 56.30 88.74 96.63 43.27 | 70.32 | 1625
Scratch+          | 11x   | 59.67 59.59 76.08 92.45 39.63 96.90 56.66 88.74 96.78 44.17 | 71.07 | 1826
Feature extractor | 1x    | 59.67 23.31 63.11 80.33 45.37 68.16 73.69 58.79 43.54 26.80 | 54.28 | 544
Finetune          | 10x   | 59.87 60.34 82.12 92.82 55.53 97.53 81.41 87.69 96.55 51.20 | 76.51 | 2500
LwF [21]          | 10x   | 59.87 61.15 82.23 92.34 58.83 97.57 83.05 88.08 96.10 50.04 | 76.93 | 2515
BN adapt. [5]     | ~1x   | 59.87 43.05 78.62 92.07 51.60 95.82 74.14 84.83 94.10 43.51 | 71.76 | 1363
Res. adapt.       | 2x    | 59.67 …
1705.08045#23
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
24
It can be shown that the block diagonal square matrix H_l^j's diagonal blocks H_{l,ii}^j ∈ R^{m_{l-1}×m_{l-1}}, where 1 ≤ i ≤ m_l, are all equal to Ψ_l^j = y_{l-1}^j (y_{l-1}^j)^⊤, and the inverse Hessian matrix H_l^{-1} is also a block diagonal square matrix with its diagonal blocks being ((1/n) Σ_j y_{l-1}^j (y_{l-1}^j)^⊤)^{-1}. In addition, normally Ψ_l = (1/n) Σ_j y_{l-1}^j (y_{l-1}^j)^⊤ is degenerate and its pseudo-inverse can be calculated recursively via the Woodbury matrix identity [13]:

(Ψ_l^{j+1})^{-1} = (Ψ_l^j)^{-1} − (Ψ_l^j)^{-1} y_{l-1}^{j+1} (y_{l-1}^{j+1})^⊤ (Ψ_l^j)^{-1} / ( n + (y_{l-1}^{j+1})^⊤ (Ψ_l^j)^{-1} y_{l-1}^{j+1} ),

where Ψ_l^j = (1/n) Σ_{t=1}^{j} y_{l-1}^t (y_{l-1}^t)^⊤, with (Ψ_l^0)^{-1} = αI, α ∈ [10^4, 10^5], and (Ψ_l^n)^{-1} = Ψ_l^{-1}. The size of Ψ_l is then reduced to m_{l-1}, and the computational complexity of calculating H_l^{-1} is O(n m_{l-1}^2).
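To make the recursion concrete, here is a small NumPy sketch (names illustrative, α chosen from the range quoted above) that builds Ψ_l^{-1} with the rank-one updates and compares it against the batch pseudo-inverse.

```python
import numpy as np

def recursive_psi_inverse(layer_inputs, alpha=1e4):
    """Estimate Psi_l^{-1} with rank-one (Sherman-Morrison/Woodbury) updates,
    starting from (Psi_l^0)^{-1} = alpha * I."""
    n, m = layer_inputs.shape
    inv = alpha * np.eye(m)
    for j in range(n):
        y = layer_inputs[j:j + 1].T                            # column vector y_{l-1}^{j+1}
        iy = inv @ y                                           # Psi^{-1} y (inv is symmetric)
        inv = inv - (iy @ iy.T) / (n + (y.T @ iy).item())      # rank-one correction
    return inv

Y = np.random.randn(500, 64)                                   # 500 instances feeding a 64-unit layer
approx = recursive_psi_inverse(Y)
direct = np.linalg.pinv(Y.T @ Y / Y.shape[0])                  # batch pseudo-inverse, for comparison
```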
1705.07565#24
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
24
Model                    | #par. | ImNet Airc. C100  DPed  DTD   GTSR  Flwr  OGlt  SVHN  UCF   | mean  | S   (Table 1, continued)
BN adapt. [5] (cont.)    | …     | …     …     …     92.07 51.60 95.82 74.14 84.83 94.10 43.51 | 71.76 | 1363
Res. adapt.              | 2x    | 59.67 56.68 81.20 93.88 50.85 97.05 66.24 89.62 96.13 47.45 | 73.88 | 2118
Res. adapt. decay        | 2x    | 59.67 61.87 81.20 93.88 57.13 97.57 81.67 89.62 96.13 50.12 | 76.89 | 2621
Res. adapt. finetune all | 2x    | 59.23 63.73 81.31 93.30 57.02 97.47 83.43 89.82 96.17 50.28 | 77.17 | 2643
Res. adapt. dom-pred     | 2.5x  | 59.18 63.52 81.12 93.29 54.93 97.20 82.29 89.82 95.99 50.10 | 76.74 | 2503
Res. adapt. (large)      | ~12x  | 67.00 67.69 84.69 94.28 59.41 97.43 84.86 89.92 96.59 52.39 | 79.43 | 3131
1705.08045#24
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
25
To make the estimated minimal change of the error function in (6) optimal, the layer-wise Hessian matrices need to be exact. Since the layer-wise Hessian matrices only depend on the corresponding layer inputs, they remain exact even after several pruning operations. The only parameter we need to control is the layer-wise error ε_l. Note that there may be a "pruning inflection point" after which the layer-wise error increases dramatically. In practice, users can incrementally increase the number of pruned parameters based on the sensitivity L_q, and trade off the pruning ratio against the performance drop to set a proper tolerable error threshold or pruning ratio. The procedure of our pruning algorithm for a fully-connected layer l is summarized as follows.

Step 1: Get the layer input y_{l-1} from a well-trained deep network.

Step 2: Calculate the Hessian matrix H_{l,ii}, for i = 1, ..., m_l, and its pseudo-inverse over the dataset, and get the whole pseudo-inverse of the Hessian matrix.

Step 3: Compute the optimal parameter change δΘ_q and the sensitivity L_q for each parameter at layer l. Set a tolerable error threshold ε.
1705.07565#25
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
25
Table 1: Multiple-domain networks. The table reports the (top-1) classification accuracy (%) of different models on the decathlon tasks and the final decathlon score (S). ImageNet is used to prime the network in every case, except for the networks trained from scratch. The model size is the number of parameters w.r.t. the baseline ResNet. The fully-finetuned model, shown in blue, is used as a baseline to compute the decathlon score.

Model | Airc. C100 DPed DTD GTSR Flwr OGlt SVHN UCF
1705.08045#25
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
26
Step 3: Compute the optimal parameter change δΘ_q and the sensitivity L_q for each parameter at layer l. Set a tolerable error threshold ε.

Step 4: Pick up the parameters Θ_l(q)'s with the smallest sensitivity scores.

Step 5: If √L_q ≤ ε, prune the parameters Θ_l(q)'s and get the new parameter values via Θ_l ← Θ_l + δΘ_q, then repeat Step 4; otherwise stop pruning.

# 3.4.2 Pruning on Convolutional Layers
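Before turning to convolutional layers, here is a compact NumPy sketch of the fully-connected pruning loop in Steps 1–5 above, using the standard OBS quantities (sensitivity L_q = w_q² / (2 [H^{-1}]_qq) and a compensating update restricted to the block of the pruned weight). The function and variable names, and the greedy one-weight-at-a-time stopping rule, are illustrative rather than the authors' implementation.

```python
import numpy as np

def prune_fc_layer(W, psi_inv, tol):
    """Greedily prune a fully-connected layer with weights W (m_prev x m_out),
    given the shared inverse-Hessian block psi_inv (m_prev x m_prev) and a
    threshold `tol` on sqrt(L_q). Returns the pruned weights and a keep-mask."""
    W, mask = W.copy(), np.ones_like(W, dtype=bool)
    diag = np.diag(psi_inv)                       # [H^{-1}]_qq of weight (i, k) is psi_inv[i, i]
    while mask.any():
        # Sensitivity L_q = w_q^2 / (2 [H^{-1}]_qq) for every remaining weight.
        L = np.where(mask, W ** 2 / (2.0 * diag[:, None]), np.inf)
        i, k = np.unravel_index(np.argmin(L), L.shape)
        if np.sqrt(L[i, k]) > tol:
            break
        # Compensating update only touches column k because H^{-1} is block diagonal.
        W[:, k] -= (W[i, k] / psi_inv[i, i]) * psi_inv[:, i]
        mask[i, k] = False
        W[~mask] = 0.0                            # keep already-pruned weights at exactly zero
    return W, mask

# Example usage with W and psi_inv obtained from the previous steps:
# W_pruned, kept = prune_fc_layer(W, psi_inv, tol=0.1)
```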
1705.07565#26
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
26
Model                    | Airc.       | C100        | DPed        | DTD         | GTSR        | Flwr        | OGlt        | SVHN        | UCF
Finetune                 | 1.1 / 60.3  | 3.6 / 63.1  | 0.6 / 80.3  | 0.7 / 45.3  | 1.4 / 68.1  | 27.2 / 73.6 | 13.4 / 87.7 | 0.2 / 96.6  | 5.4 / 51.2
LwF [21] high lr         | 4.1 / 61.1  | 21.0 / 82.2 | 23.8 / 92.3 | 36.7 / 58.8 | 11.5 / 97.6 | 34.2 / 83.1 | 3.0 / 88.1  | 0.2 / 96.1  | 18.6 / 50.0
LwF [21] low lr          | 38.0 / 50.6 | 33.0 / 80.7 | 53.3 / 92.2 | 47.0 / 57.2 | 23.7 / 96.6 | 45.7 / 75.7 | 21.0 / 86.0 | 13.3 / 94.8 | 29.0 / 44.6
Res. adapt. finetune all | 59.2 / 63.7 | 59.2 / 81.3 | 59.2 / 93.3 | 59.2 / 57.0 | 59.2 / 97.5 | 59.2 / 83.4 | 59.2 / 89.8 | 59.2 / 96.1 | 59.2 / 50.3
1705.08045#26
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
27
then repeat Step 4; otherwise stop pruning.

# 3.4.2 Pruning on Convolutional Layers

It is straightforward to generalize our method to a convolutional layer and its variants if we vectorize the filters of each channel and consider them as a special fully-connected layer that has multiple inputs (patches) from a single instance. Consider a vectorized filter w_i of channel i, 1 ≤ i ≤ m_l; it acts similarly to the parameters that are connected to the same output unit in a fully-connected layer. However, the difference is that, for a single input instance j, each step of the filter's sliding window extracts a patch c_{jn} from the input volume. Similarly, each pixel z_{l,in}^j in the 2-dimensional activation map, which gives the response to patch n, corresponds to one output unit in a fully-connected layer. Hence, for convolutional layers, (9) is generalized as H_l = (1/n) Σ_j Σ_n (∂z_{l,n}^j / ∂[w_1, ..., w_{m_l}])^⊤ (∂z_{l,n}^j / ∂[w_1, ..., w_{m_l}]), where H_l is a block diagonal square matrix whose diagonal blocks are all the same. Then, we can slightly revise the computation of the Hessian matrix, and extend the algorithm for fully-connected layers to convolutional layers.
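A hedged sketch of the patch-based view described above: the filters are treated as a fully-connected layer over im2col patches, so the shared Hessian block is just the patch correlation matrix. The stride-1, unpadded patch extraction and the normalization by the total number of patches are simplifying assumptions, not the paper's exact prescription.

```python
import numpy as np

def conv_hessian_block(feature_maps, k):
    """Shared Hessian block for a conv layer, built from im2col patches.
    feature_maps: (n, C, H, W) layer inputs; k: spatial filter size."""
    n, C, H, W = feature_maps.shape
    patches = []
    for j in range(n):
        for r in range(H - k + 1):                 # stride 1, no padding, for simplicity
            for c in range(W - k + 1):
                patches.append(feature_maps[j, :, r:r + k, c:c + k].reshape(-1))
    P = np.stack(patches)                          # (total number of patches, C*k*k)
    return P.T @ P / P.shape[0]                    # normalization choice is an assumption

X = np.random.randn(8, 3, 10, 10)
psi = conv_hessian_block(X, k=3)                   # (27, 27); identical for every output channel
```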
1705.07565#27
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
27
Table 2: Pairwise forgetting. Each pair of numbers reports the top-1 accuracy (%) on the old task (ImageNet) and on a new target task after the network is fully finetuned on the latter. We also show the performance of LwF when it is finetuned on the new task with a high and a low learning rate, trading off forgetting ImageNet against improving the results on the target domain. For comparison, we show the performance of tuning only the residual adapters, which by construction does not result in any performance loss on ImageNet while still achieving very good performance on each target task. # 5.1 Results There are two possible extremes. The first one is to learn ten independent models, one for each dataset, and the second one is to learn a single model where all feature extractor parameters are shared between the ten domains. Next, we evaluate different approaches to learn such models. Pairwise learning. In the first experiment (table 1), we start by learning a ResNet model on ImageNet, and then use different techniques to extend it to the remaining nine tasks, one at a time. Depending on the method, this may produce an overall model comprising ten ResNet architectures, or just one ResNet with a few domain-specific parameters; thus we also report the total number of parameters used, where 1x is the size of a single ResNet (excluding the last classification layer, which can never be shared).
1705.08045#27
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
28
Note that the accumulated error of the ultimate network output can be linearly bounded by the layer-wise errors as long as the model is feed-forward. Thus, L-OBS is a general pruning method that is compatible with most feed-forward neural networks whose layer-wise Hessians can be computed efficiently with slight modifications. However, if a model has sizable layers, as in ResNet-101, L-OBS may not be economical because of the computational cost of the Hessian, which will be studied in our future work. # 4 Experiments
1705.07565#28
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
28
As baselines, we evaluate four cases: i) learning an individual ResNet model from scratch for each task, ii) freezing all the parameters of the pre-trained network, using the network as a feature extractor and only learning a linear classifier, iii) standard finetuning, and iv) applying a reimplementation of the LwF technique of [21] that encourages the fine-tuned network to retain the responses of the original ImageNet model while learning the new task. In terms of accuracy, learning from scratch performs poorly on small target datasets and, by learning 10 independent models, requires 10x the parameters in total. Freezing the ImageNet feature extractor is very efficient in terms of parameter sharing (1x parameters in total) and preserves the original domain exactly, but generally performs very poorly on the target domain. Full fine-tuning leads to accurate results both for large and small datasets; however, it also forgets the ImageNet domain substantially (table 2), so it still requires learning 10 complete ResNet models for good overall performance.
1705.08045#28
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
29
In this section, we verify the effectiveness of our proposed Layer-wise OBS (L-OBS) using various architectures of deep neural networks in terms of compression ratio (CR), the error rate before retraining, and the number of iterations required for retraining to resume satisfactory performance. CR is defined as the ratio of the number of preserved parameters to that of the original parameters; the lower, the better. We compare L-OBS with the following pruning approaches: 1) Random pruning, 2) OBD [12], 3) LWC [9], 4) DNS [11], and 5) Net-Trim [6]. The deep architectures used for the experiments include LeNet-300-100 [2] and LeNet-5 [2] on the MNIST dataset, CIFAR-Net² [24] on the CIFAR-10 dataset, and AlexNet [25] and VGG-16 [3] on the ImageNet ILSVRC-2012 dataset. For the experiments, we first fully train the networks, and then apply the various pruning approaches to the networks to evaluate their performance. The retraining batch size, crop method and other hyper-parameters follow the same settings as used in LWC.
1705.07565#29
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
29
When LwF is run as intended by the original authors [21], it still leads to a noticeable performance drop on the original task, even when learning just two domains (table 2), particularly if the target domain is very different from ImageNet (e.g. Omniglot and SVHN). Still, if one chooses a different trade-off point and allows the method to forget ImageNet more, it can function as a good regularizer that slightly outperforms vanilla fine-tuning overall (but still results in a 10x model). Next, we evaluate the effect of sharing the majority of parameters between tasks, while still allowing a small number of domain-specific parameters to change. First, we consider specializing only the BN layer scaling and bias parameters, which is equivalent to the approach of [5]. In this case, less than 0.1% of the model parameters are domain-specific (for the ten domains, this results in a model with 1.01x parameters overall). Hence the model is very similar to the one with the frozen feature extractor; nevertheless, the performance increases very substantially in most cases (e.g. 23.31% → 43.05% accuracy on Aircraft).
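A minimal PyTorch sketch of the BN-adaptation idea discussed above: all backbone weights stay frozen and only the BatchNorm scale/bias parameters (plus a fresh domain-specific classifier) remain trainable. The attribute name `model.classifier` and the optimizer settings are assumptions for illustration, not the authors' exact code.

```python
import torch.nn as nn

def bn_only_adaptation(model, num_classes):
    """Freeze everything, then re-enable only BatchNorm affine parameters and
    attach a fresh domain-specific classifier (assumed to be `model.classifier`)."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.affine:
            m.weight.requires_grad = True
            m.bias.requires_grad = True
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)  # trainable by default
    return [p for p in model.parameters() if p.requires_grad]

# trainable = bn_only_adaptation(model, num_classes=100)
# optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
```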
1705.08045#29
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
30
apply the various pruning approaches to the networks to evaluate their performance. The retraining batch size, crop method and other hyper-parameters follow the same settings as used in LWC. Note that, to make the comparisons fair, we do not adopt any other pruning-related techniques such as Dropout or sparse regularizers on MNIST. In practice, L-OBS works well alongside these techniques, as shown on CIFAR-10 and ImageNet.
1705.07565#30
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
30
As the next step, we introduce the residual adapter modules, which increase the number of parameters by 11% per domain, resulting in a 2x model. In the pre-training phase, we first pretrain the network with the added modules on ImageNet. Then, we freeze the task-agnostic parameters and train the task-specific parameters on the different datasets. Unlike vanilla fine-tuning, there is no forgetting in this setting. While most of the parameters are shared, our method is either close to or better than full fine-tuning. As a further control, we also train 10 models from scratch with the added parameters (denoted as Scratch+), but do not observe any noticeable performance gain on average, demonstrating that parameter sharing is highly beneficial. We also contrast learning the adapter modules with two values of weight decay (0.002 and 0.005) higher than the default 0.0005. These parameters are obtained after a coarse grid search using cross-validation for each dataset. Using the higher decay significantly improves the performance on smaller datasets such as Flowers, whereas the smaller decay is best for larger datasets. This shows both the importance and the utility of controlling overfitting in the adaptation process. In practice, there is an almost direct correspondence between the size of the data and which one of these values to use. The optimal decay can be selected via validation, but a rough choice can be made by simply looking at the dataset size.
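A minimal sketch of an adapter in the spirit described here: a domain-specific 1×1 convolution plus batch normalization, added back to its input through a skip connection, so that a near-zero adapter leaves the shared 3×3 filters effectively untouched. The exact placement inside each residual unit follows the paper's figure, which is not reproduced here, so this module is only indicative.

```python
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Domain-specific 1x1 convolution + BN wrapped in a residual (skip) connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return x + self.bn(self.conv(x))

# One set of adapters per domain; only these (plus BNs and the classifier) are trained
# for a new domain, while the shared 3x3 filters stay frozen.
adapters = nn.ModuleDict({d: ResidualAdapter(64) for d in ["aircraft", "dtd", "svhn"]})
```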
1705.08045#30
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
31
# 4.1 Overall Comparison Results

The overall comparison results are shown in Table 1. In the first set of experiments, we prune each layer of the well-trained LeNet-300-100 with compression ratios 6.7%, 20% and 65%, achieving a slightly better overall compression ratio (7%) than LWC (8%). Under a comparable compression ratio, L-OBS has a much smaller drop in performance (before retraining) and much lighter retraining compared with LWC, whose performance is almost ruined by pruning. The classic pruning approach OBD is also compared, though we observe that the Hessian matrices of most modern deep models are strongly non-diagonal in practice. Besides the relatively heavy cost of obtaining the second derivatives via the chain rule, OBD suffers from a drastic drop in performance when it is directly applied to modern deep models. To properly prune each layer of LeNet-5, we increase the tolerable error threshold ε from a relatively small initial value to incrementally prune more parameters, monitor the model performance, and stop pruning and fix ε when we encounter the "pruning inflection point" mentioned above. In practice, we prune each layer of LeNet-5 with compression ratios 54%, 43%, 6% and 25% and retrain the pruned model with

Footnote 2: A revised AlexNet for CIFAR-10 containing three convolutional layers and two fully connected layers.
1705.07565#31
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
31
We also compare to another baseline where we only finetune the last two convolutional layers and freeze the others, which might be expected to be generic. This amounts to having a network with twice the total number of parameters of a vanilla ResNet, which is equal to that of our proposed architecture. This model obtains 64.7% mean accuracy over the ten datasets, which is significantly lower than our 73.9%, likely due to overfitting (controlling overfitting is one of the advantages of our technique). Furthermore, we also assess the quality of our adapters without residual connections, which corresponds to the low-rank filter parametrization of section 3.1; this approach achieves an accuracy of 70.3%, which is worse than our 73.9%. We also observe that this configuration requires notably more iterations to converge. Hence, the residual architecture for the adapters results in better performance, better control of overfitting, and faster convergence.
1705.08045#31
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.08045
32
End-to-end learning. So far, we have shown that our method, by learning only the adapter modules for each new domain, does not suffer from forgetting. However, for us sequential learning is just a scalable learning strategy. Here, we also show (table 1) that we can further improve the results by fine-tuning all the parameters of the network end-to-end on the ten tasks. We do so by sampling a batch from each dataset in a round-robin fashion, allowing each domain to contribute to the shared parameters. A final pass is done on the adapter modules to take into account the change in the shared parameters. Domain prediction. Up to now we have assumed that the domain of each image is given at test time for all the methods. If this is unavailable, it can be predicted on the fly by means of a small neural-network predictor. We train a light ResNet, composed of three stacks of two residual units and half as deep as the original network, obtaining 99.8% accuracy in domain prediction and resulting in a barely noticeable drop in the overall multiple-domain challenge (see Res. adapt. dom-pred in table 1). Note that a similar performance drop would be observed for the other baselines.
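A small sketch of the round-robin schedule described above: one batch per domain per step, so every domain contributes gradients to the shared parameters. The data loaders, model interface and loss are placeholders, not part of the paper.

```python
import itertools

def round_robin_batches(loaders):
    """Yield (domain, batch) pairs indefinitely, cycling over the per-domain loaders."""
    iterators = {d: iter(dl) for d, dl in loaders.items()}
    for d in itertools.cycle(list(loaders)):
        try:
            batch = next(iterators[d])
        except StopIteration:                   # restart an exhausted domain and keep cycling
            iterators[d] = iter(loaders[d])
            batch = next(iterators[d])
        yield d, batch

# Hypothetical training loop (model(images, domain=...) selects adapters/classifier):
# for step, (domain, (images, labels)) in enumerate(round_robin_batches(loaders)):
#     loss = criterion(model(images, domain=domain), labels)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
#     if step == max_steps: break
```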
1705.08045#32
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
33
Method            | Networks      | Original error | CR   | Err. after pruning | Re-Error | #Re-Iters.
Random            | LeNet-300-100 | 1.76% | 8%   | 85.72% | 2.25% | 3.50 × 10^5
OBD               | LeNet-300-100 | 1.76% | 8%   | 86.72% | 1.96% | 8.10 × 10^4
LWC               | LeNet-300-100 | 1.76% | 8%   | 81.32% | 1.95% | 1.40 × 10^5
DNS               | LeNet-300-100 | 1.76% | 1.8% | -      | 1.99% | 3.40 × 10^4
L-OBS             | LeNet-300-100 | 1.76% | 7%   | 3.10%  | 1.82% | 510
L-OBS (iterative) | LeNet-300-100 | 1.76% | 1.5% | 2.43%  | 1.96% | 643
OBD               | LeNet-5       | 1.27% | 8%   | 86.72% | 2.65% | 2.90 × 10^5
LWC               | LeNet-5       | 1.27% | 8%   | 89.55% | 1.36% | …
DNS               | LeNet-5       | 1.27% | 0.9% | -      | 1.36% | …
L-OBS             | LeNet-5       | 1.27% | 7%   | 3.21%  | 1.27% | …
L-OBS (iterative) | LeNet-5       | 1.27% | 0.9% | 2.04%  | 1.66% | …
1705.07565#33
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
33
Decathlon evaluation: overall performance. While so far we have looked at results on individual domains, the decathlon score of eq. (1) can be used to compare performance overall. As baseline error rates in eq. (1), we double the error rates of the fully finetuned networks on each task. In this manner, this 10x model achieves a score of 2,500 points (out of the 10,000 possible ones, see eq. (1)). The last column of table 1 reports the scores achieved by the other architectures. As intended, the decathlon score favors the methods that perform well overall, emphasizing their consistency rather than just their average accuracy. For instance, although the Res. adapt. model (trained with a single decay coefficient for all domains) performs well in terms of average accuracy (73.88%), its decathlon score (2118) is relatively low because the model performs poorly on DTD and Flowers. This also shows that, once the weight decays are configured properly, our model achieves superior performance (2643 points) to all the baselines using only 2x the capacity of a single ResNet.
1705.08045#33
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
34
(Table continued from the previous chunk.)
Method            | Networks  | Original error | CR | Err. after pruning | Re-Error | #Re-Iters.
OBD               | LeNet-5   | … | … | …      | 2.65% | 2.90 × 10^5
LWC               | LeNet-5   | … | … | 89.55% | 1.36% | 9.60 × 10^4
DNS               | LeNet-5   | … | … | -      | 1.36% | 4.70 × 10^4
L-OBS             | LeNet-5   | … | … | 3.21%  | 1.27% | 740
L-OBS (iterative) | LeNet-5   | … | … | 2.04%  | 1.66% | 841
LWC               | CIFAR-Net | 18.57% | 9% | 87.65% | 19.36% | 1.62 × 10^5
L-OBS             | CIFAR-Net | 18.57% | 9% | 21.32% | 18.76% | 1020
DNS               | AlexNet (Top-1 / Top-5 err.) | 43.30 / 20.08% | 5.7% | -              | 43.91 / 20.72% | 7.30 × 10^5
LWC               | AlexNet (Top-1 / Top-5 err.) | 43.30 / 20.08% | 11%  | 76.14 / 57.68% | 44.06 / 20.64% | 5.04 × 10^6
L-OBS             | AlexNet (Top-1 / Top-5 err.) | 43.30 / 20.08% | 11%  | 50.04 / 26.87% | 43.11 / 20.01% | 1.81 × 10^4
DNS               | VGG-16 (Top-1 / Top-5 err.)  | 31.66 / 10.12% | …
LWC               | VGG-16 (Top-1 / Top-5 err.)  | 31.66 / …
L-OBS (iterative) | VGG-16 (Top-1 / Top-5 err.)  | …
1705.07565#34
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
34
Finally, we show that using a higher-capacity ResNet28 (12x, Res. adapt. (large) in table 1), which is comparable to 10 independent networks, significantly improves our results and outperforms the finetuning baseline by about 600 points in decathlon score. As a matter of fact, this model outperforms the state of the art [40] (81.2%) by 3.5 points on CIFAR100. In the other cases, our performance is in general in line with current state-of-the-art methods. When this is not the case, it is due to the reduced image resolution (ImageNet, Flowers) or to the choice of a specific video representation for UCF (dynamic images). # 6 Conclusions As machine learning applications become more advanced and pervasive, building data representations that work well for multiple problems will become increasingly important. In this paper, we have introduced a simple architectural element, the residual adapter module, that allows compressing many visual domains into relatively small residual networks, with substantial parameter sharing between them. We have also shown that the adapters allow addressing the forgetting problem, as well as adapting to target domains for which different amounts of training data are available. Finally, we have introduced a new multi-domain learning challenge, the Visual Decathlon, to allow a systematic comparison of algorithms for multiple-domain learning.
1705.08045#34
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.08045
35
Acknowledgments: This work acknowledges the support of Mathworks/DTA DFR02620 and ERC 677195-IDIU.

# References

[1] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In Proc. NIPS, volume 19, page 41. MIT Press, 2007.
[2] L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. Learning feed-forward one-shot learners. In Proc. NIPS, pages 523–531, 2016.
[3] H. Bilen, B. Fernando, E. Gavves, A. Vedaldi, and S. Gould. Dynamic image networks for action recognition. In Proc. CVPR, 2016.
[4] H. Bilen and A. Vedaldi. Integrated perception with recurrent multi-task neural networks. In Proc. NIPS, 2016.
[5] H. Bilen and A. Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275, 2017.
[6] R. Caruana. Multitask learning. Machine Learning, 28, 1997.
1705.08045#35
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
36
far fewer iterations compared with other methods (around 1 : 1000). As DNS retrains the pruned network after every pruning operation, we are not able to report the error rate of its pruned network before retraining. However, as can be seen, similar to LWC, the total number of iterations used by DNS for rebooting the network is very large compared with L-OBS. Results on the retraining iterations of DNS are reported from [11], and the other experiments are implemented based on TensorFlow [26]. In addition, when a high pruning ratio is required, L-OBS can be flexibly adapted into an iterative version, which alternates pruning and light retraining to obtain a higher pruning ratio at a relatively higher pruning cost. With two iterations of pruning and retraining, L-OBS is able to achieve the same pruning ratio as DNS with much lighter total retraining: 643 iterations on LeNet-300-100 and 841 iterations on LeNet-5.
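The iterative prune-and-retrain variant mentioned above can be summarized in a few lines. The sketch below is schematic: `prune_one_round` and `light_retrain` are hypothetical placeholders for a layer-wise pruning step and a short retraining pass, not the authors' actual code, and the per-round schedule is an assumption.

```python
def iterative_lobs(model, target_ratio, rounds=2,
                   prune_one_round=None, light_retrain=None):
    """Alternate layer-wise pruning and light retraining.

    Each round keeps a fraction of the remaining weights so that after
    `rounds` rounds the overall kept fraction equals `target_ratio`.
    """
    per_round_keep = target_ratio ** (1.0 / rounds)   # e.g. sqrt for 2 rounds
    for _ in range(rounds):
        model = prune_one_round(model, keep_fraction=per_round_keep)
        model = light_retrain(model, max_iters=500)    # a few hundred iterations
    return model

# Example wiring (real implementations would prune by sensitivity and run SGD):
# pruned = iterative_lobs(model, target_ratio=0.02, rounds=2,
#                         prune_one_round=my_prune, light_retrain=my_retrain)
```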
1705.07565#36
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
36
[6] R. Caruana. Multitask learning. Machine Learning, 28, 1997. [7] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In Proc. CVPR, 2014. [8] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. ICML, pages 160-167. ACM, 2008. [9] H. Daumé III. Frustratingly easy domain adaptation. ACL 2007, page 256, 2007. [10] T. Evgeniou and M. Pontil. Regularized multi-task learning. In SIGKDD, pages 109-117. ACM, 2004. [11] R. M. French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128-135, 1999. [12] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In Proc. ICML, 2015. [13] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In Proc. ECCV, pages 630-645. Springer, 2016.
1705.08045#36
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
37
Regarding the comparison experiments on CIFAR-Net, we first train it well to achieve a testing error of 18.57% with Dropout and Batch Normalization. We then prune the well-trained network with LWC and L-OBS, and obtain results similar to those on the other network architectures. We also observe that LWC and other retraining-required methods always require a much smaller learning rate during retraining. This is because the representation capability of the pruned networks, which have far fewer parameters, is damaged during pruning; the number of parameters is an important factor for representation capability. However, L-OBS can still adopt the original learning rate to retrain the pruned networks. Under this consideration, L-OBS not only ensures a warm start for retraining, but also finds the important connections (parameters) and preserves the representation capability of the pruned network instead of ruining the model through pruning.
1705.07565#37
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
37
[14] J. T. Huang, J. Li, D. Yu, L. Deng, and Y. Gong. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In ICASSP, pages 7304-7308, 2013. [15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, 2015. [16] X. Jia, B. De Brabandere, T. Tuytelaars, and L. Van Gool. Dynamic filter networks. In Proc. NIPS, pages 667-675, 2016. [17] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017. [18] I. Kokkinos. UberNet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In Proc. CVPR, 2017. [19] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
1705.08045#37
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
38
Regarding AlexNet, L-OBS achieves an overall compression ratio of 11% without loss of accuracy, taking 2.9 hours on 48 Intel Xeon(R) CPU E5-1650 to compute the Hessians and 3.1 hours on an NVIDIA Titan X GPU to retrain the pruned model (i.e., 18.1K iterations). The computation cost of the Hessian inverse in L-OBS is negligible compared with that of the heavy retraining in other methods. This claim can also be supported by an analysis of time complexity. As mentioned in Section 3.4, the time complexity of calculating $\mathbf{H}_l^{-1}$ is $O(n m_{l-1}^2)$. Assume that neural networks are retrained via SGD; then the approximate time complexity of retraining is $O(IdM)$, where $d$ is the size of the mini-batch, and $M$ and $I$ are the total numbers of parameters and iterations, respectively. By considering that $M = \sum_l m_{l-1} m_l$, and that retraining in other methods always requires millions of iterations ($Id \gg n$) as shown in the experiments, the complexity of calculating the Hessian (inverse) in L-OBS is quite economical. More interestingly, there is a trade-off between compression ratio and pruning (including retraining)
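To see the scale difference concretely, the short calculation below plugs in illustrative numbers (LeNet-300-100-like layer widths, a mini-batch of 64, and one million retraining iterations). These figures are assumptions chosen for illustration, not measurements from the paper.

```python
# Back-of-envelope comparison of the two costs discussed above.
layers = [(784, 300), (300, 100), (100, 10)]  # (m_{l-1}, m_l) per layer
n = 1000          # samples used to estimate each layer-wise Hessian
I, d = 10**6, 64  # assumed retraining iterations and mini-batch size

hessian_cost = sum(n * m_in**2 for m_in, _ in layers)   # O(n m_{l-1}^2) per layer
M = sum(m_in * m_out for m_in, m_out in layers)         # total parameters
retrain_cost = I * d * M                                # O(I d M)

print(f"Hessian inverses ~ {hessian_cost:.2e} ops")     # ~7.1e8
print(f"SGD retraining  ~ {retrain_cost:.2e} ops")      # ~1.7e13
print(f"ratio ~ {retrain_cost / hessian_cost:.0f}x")
```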
1705.07565#38
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
38
[19] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009. [20] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015. [21] Z. Li and D. Hoiem. Learning without forgetting. In Proc. ECCV, pages 614-629, 2016. [22] X. Liu, J. Gao, X. He, L. Deng, K. Duh, and Y. Wang. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In HLT-NAACL, pages 912-921, 2015. [23] M. Long, H. Zhu, J. Wang, and M. I. Jordan. Unsupervised domain adaptation with residual transfer networks. In Proc. NIPS, pages 136-144, 2016. [24] S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013. [25] T. Mitchell. Never-ending learning. Technical report, DTIC Document, 2010.
1705.08045#38
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
39
the Hessian (inverse) in L-OBS is quite economical. More interestingly, there is a trade-off between compression ratio and pruning (including retraining) cost. Compared with other methods, L-OBS is able to provide fast compression: it prunes AlexNet to 16% of its original size without substantially impacting accuracy (pruned top-5 error 20.98%), even without any retraining. We further apply L-OBS to VGG-16, which has 138M parameters. To achieve a more promising compression ratio, we perform pruning and retraining alternately twice. As can be seen from the table, L-OBS achieves an overall compression ratio of 7.5% without loss
1705.07565#39
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
39
[25] T. Mitchell. Never-ending learning. Technical report, DTIC Document, 2010. [26] S. Munder and D. M. Gavrila. An experimental study on pedestrian classification. PAMI, 28(11):1863-1868, 2006. [27] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. [28] M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In ICVGIP, Dec 2008. [29] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In CVPR DeepVision Workshop, 2014. [30] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert. iCaRL: Incremental classifier and representation learning. In Proc. CVPR, 2017. [31] A. Rosenfeld and J. K. Tsotsos. Incremental learning through deep adaptation. arXiv preprint arXiv:1705.04228, 2017.
1705.08045#39
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
40
[Figure plots omitted; (a) x-axis: Compression Rate; (b) x-axis: Number of data samples, legend: Net-Trim, L-OBS.]

(a) Top-5 test accuracy of L-OBS on ResNet-50 under different compression ratios. (b) Memory comparison between L-OBS and Net-Trim on MNIST.

Table 2: Comparison of Net-Trim and Layer-wise OBS on the second layer of LeNet-300-100.

Method   | ξ^2_r | Pruned Error | CR
Net-Trim | 0.13  | 13.24%       | 19%
L-OBS    | 0.70  | 11.34%       | 3.4%
L-OBS    | 0.71  | 10.83%       | 3.8%
Net-Trim | 0.62  | 28.45%       | 7.4%
L-OBS    | 0.37  | 4.56%        | 7.4%
Net-Trim | 0.71  | 47.69%       | 4.2%
1705.07565#40
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
40
[31] A. Rosenfeld and J. K. Tsotsos. Incremental learning through deep adaptation. arXiv preprint arXiv:1705.04228, 2017. [32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge, 2014. [33] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016. [34] J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139, 1992. [35] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
1705.08045#40
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
41
of accuracy, taking 10.2 hours in total on 48 Intel Xeon(R) CPU E5-1650 to compute the Hessian inverses and 86.3K iterations to retrain the pruned model. We also apply L-OBS to ResNet-50 [27]. To the best of our knowledge, this is the first work to perform pruning on ResNet. We perform pruning on all the layers: all layers share the same compression ratio, and we change this compression ratio in each experiment. The results are shown in Figure 2(a). As we can see, L-OBS is able to maintain ResNet's accuracy (above 85%) when the compression ratio is larger than or equal to 45%. # 4.2 Comparison between L-OBS and Net-Trim
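The uniform-ratio pruning setup used here can be expressed compactly. The snippet below is a generic sketch that keeps the top-scoring fraction of weights in every layer independently; weight magnitude is used as a stand-in score for brevity, whereas L-OBS would use its second-order sensitivity.

```python
import numpy as np

def prune_uniform(weights, keep_ratio, score_fn=np.abs):
    """Zero out the lowest-scoring weights in every layer independently,
    keeping the same fraction `keep_ratio` in each layer."""
    pruned = []
    for W in weights:
        scores = score_fn(W).ravel()
        k = max(1, int(round(keep_ratio * scores.size)))
        threshold = np.partition(scores, -k)[-k]   # k-th largest score
        mask = score_fn(W) >= threshold
        pruned.append(W * mask)
    return pruned

layers = [np.random.randn(256, 64), np.random.randn(512, 256)]
pruned = prune_uniform(layers, keep_ratio=0.45)
print([float((W != 0).mean()) for W in pruned])    # ~0.45 nonzeros per layer
```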
1705.07565#41
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
41
[36] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 32(0):323-332, 2012. [37] A. V. Terekhov, G. Montone, and J. K. O'Regan. Knowledge transfer in deep block-modular neural networks. In Biomimetic and Biohybrid Systems, pages 268-279, 2015. [38] S. Thrun. Lifelong learning algorithms. In Learning to Learn, pages 181-209. Springer, 1998. [39] E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In Proc. CVPR, pages 4068-4076, 2015. [40] S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. [41] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja. Robust visual tracking via structured multi-task sparse learning. IJCV, 101(2):367-383, 2013.
1705.08045#41
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
42
As our proposed L-OBS is inspired by Net-Trim, which adopts the $\ell_1$-norm to induce sparsity, we conduct comparison experiments between these two methods. In Net-Trim, networks are pruned by formulating layer-wise pruning as an optimization problem: $\min_{\mathbf{W}^l} \|\mathbf{W}^l\|_1$ s.t. $\|\sigma(\mathbf{W}^{l\top}\mathbf{Y}^{l-1}) - \mathbf{Y}^{l}\|_F \le \xi^l$, where $\xi^l$ corresponds to $\varepsilon^l\|\mathbf{Y}^l\|_F$ in L-OBS. Due to the memory limitation of Net-Trim, we only prune the middle layer of LeNet-300-100 with L-OBS and Net-Trim under the same setting. As shown in Table 2, under the same pruned error rate, the CR achieved by L-OBS outperforms that of Net-Trim by about six times. In addition, Net-Trim encounters an explosion of memory and time on large-scale datasets and large parameter sizes. Specifically, the space complexity of the positive semidefinite matrix Q in the quadratic constraints used in Net-Trim for optimization grows with both the number of samples n and the layer dimensions; for example, Q requires about 65.7 Gb for 1,000 samples on MNIST, as illustrated in Figure 2(b). Moreover, Net-Trim is designed for multi-layer perceptrons, and it is not clear how to deploy it on convolutional layers. # 5 Conclusion
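For intuition about the $\ell_1$ approach, a penalized layer reconstruction can be solved with a few proximal-gradient (ISTA) steps. The sketch below is a generic stand-in for Net-Trim's constrained convex program: it uses a penalized, linear variant without the ReLU and a fixed step size, both of which are simplifying assumptions.

```python
import numpy as np

def l1_layer_reconstruction(Y_in, Y_out, lam=0.1, steps=200):
    """Sparse re-fit of one layer: minimize ||W^T Y_in - Y_out||_F^2 + lam*||W||_1."""
    m_in, _ = Y_in.shape
    m_out = Y_out.shape[0]
    W = np.zeros((m_in, m_out))
    step = 1.0 / (2.0 * np.linalg.norm(Y_in, 2) ** 2 + 1e-12)  # 1/L of the quadratic part
    for _ in range(steps):
        grad = 2.0 * Y_in @ (W.T @ Y_in - Y_out).T              # gradient of the quadratic term
        W = W - step * grad
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)  # soft-thresholding
    return W

Y_in = np.random.randn(300, 1000)
W_true = np.random.randn(300, 100) * (np.random.rand(300, 100) < 0.1)
Y_out = W_true.T @ Y_in
W_hat = l1_layer_reconstruction(Y_in, Y_out)
print(float((np.abs(W_hat) > 1e-6).mean()))  # fraction of nonzero weights
```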
1705.07565#42
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.07565
43
# 5 Conclusion We have proposed a novel L-OBS pruning framework that prunes parameters based on second-order derivative information of the layer-wise error function, and we have provided a theoretical guarantee on the overall error in terms of the reconstructed errors at each layer. Our proposed L-OBS can prune a considerable number of parameters with a tiny drop in performance and can reduce or even omit retraining. More importantly, compared with previous methods it identifies and preserves the truly important parts of the network when pruning, which may help to better understand the nature of neural networks. # Acknowledgements This work is supported by NTU Singapore Nanyang Assistant Professorship (NAP) grant M4081532.020, Singapore MOE AcRF Tier-2 grant MOE2016-T2-2-060, and Singapore MOE AcRF Tier-1 grant 2016-T1-001-159. # References [1] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015. [2] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
1705.07565#43
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
43
# A Decathlon scores

Model | #par. | ImNet | Airc. | C100 | DPed | DTD | GTSR | Flwr | OGlt | SVHN | UCF | Decathlon score
Scratch | 10x | 250 | 211 | 103 | 150 | 90 | 91 | 0 | 294 | 261 | 175 | 1625
Scratch+ | 11x | 247 | 241 | 110 | 226 | 103 | 138 | 0 | 294 | 284 | 183 | 1826
Feature extractor | 1x | 247 | 1 | 0 | 0 | 149 | 0 | 85 | 0 | 0 | 62 | 544
Finetune | 10x | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 2500
LwF [1] | 10x | 250 | 260 | 253 | 218 | 288 | 258 | 296 | 266 | 188 | 238 | 2515
BN adapt. | ~1x | 250 | 80 | 162 | 201 | 208 | 24 | 93 | 147 | 21 | 177 | 1363
Res. adapt. | 2x | 247 | 206 | 225 | 329 | 200 | 163 | 8 | 335 | 192 | 213 | 2118
Res. adapt. decay | 2x | 247 | 270 | 225 | 330 | 268 | 258 | 257 | 335 | 192 | 239 | 2621
Res. adapt. finetune all | 2x | 242 | 295 | 228 | 285 | 267 | 237 | 307 | 344 | 197 | 241 | 2643
Res. adapt. dom-pred | 2.5x | 241 | 292 | 223 | 284 | 243 | 188 | 274 | 344 | 175 | 239 | 2503
Res. adapt. (large) | ~12x | 347 | 351 | 327 | 362 | 296 | 231 | 351 | 349 | 255 | 262 | 3131
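As a quick consistency check on the table above, the per-domain scores in each row sum exactly to the reported decathlon score (the fully-finetuned baseline earns 250 per domain, hence 2500 in total). The few lines below verify this for two rows, with the numbers copied from the table.

```python
rows = {
    "LwF [1]":             [250, 260, 253, 218, 288, 258, 296, 266, 188, 238],
    "Res. adapt. (large)": [347, 351, 327, 362, 296, 231, 351, 349, 255, 262],
}
reported = {"LwF [1]": 2515, "Res. adapt. (large)": 3131}

for name, scores in rows.items():
    assert sum(scores) == reported[name]   # per-domain scores add up to the total
    print(name, "->", sum(scores))
```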
1705.08045#43
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
44
[3] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. [4] Luisa de Vivo, Michele Bellesi, William Marshall, Eric A Bushong, Mark H Ellisman, Giulio Tononi, and Chiara Cirelli. Ultrastructural evidence for synaptic scaling across the wake/sleep cycle. Science, 355(6324):507–510, 2017. [5] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013. [6] A. Aghasi, N. Nguyen, and J. Romberg. Net-Trim: A layer-wise convex pruning of deep neural networks. Journal of Machine Learning Research, 2016. [7] Russell Reed. Pruning algorithms - a survey. IEEE Transactions on Neural Networks, 4(5):740–747, 1993. [8] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
1705.07565#44
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.08045
44
Table 1: Multiple-domain networks. The table reports the decathlon score of the different models on the multiple tasks. ImageNet is used to prime the network in every case, except for the networks trained from scratch. The model size is the number of parameters w.r.t. the baseline ResNet. The fully-finetuned model, shown in blue, is used as a baseline to compute the decathlon score. # References [1] Z. Li and D. Hoiem. Learning without forgetting. In Proc. ECCV, pages 614-629, 2016.
1705.08045#44
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.
http://arxiv.org/pdf/1705.08045
Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi
cs.CV, stat.ML
null
null
cs.CV
20170522
20171127
[]
1705.07565
45
[9] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015. [10] Yi Sun, Xiaogang Wang, and Xiaoou Tang. Sparsifying neural network connections for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4856–4864, 2016. [11] Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In Advances In Neural Information Processing Systems, pages 1379–1387, 2016. [12] Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPs, volume 2, pages 598–605, 1989.
1705.07565#45
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.07565
46
[13] Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. Advances in neural information processing systems, pages 164–164, 1993. [14] Thomas Kailath. Linear systems, volume 156. Prentice-Hall Englewood Cliffs, NJ, 1980. [15] Nikolas Wolfe, Aditya Sharma, Lukas Drude, and Bhiksha Raj. The incredible shrinking neural network: New perspectives on learning representations through the lens of pruning. arXiv preprint arXiv:1701.04465, 2017. [16] Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250, 2016. [17] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
1705.07565#46
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.07565
47
[18] Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, and Shuicheng Yan. Training skinny deep neural networks with iterative hard thresholding methods. arXiv preprint arXiv:1607.05423, 2016. [19] Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015. [20] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 806–814, 2015. [21] R Tyrrell Rockafellar. Convex analysis. Princeton Landmarks in Mathematics, 1997. [22] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS, volume 15, page 275, 2011. [23] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. Maxout networks. ICML (3), 28:1319–1327, 2013.
1705.07565#47
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.07565
48
[24] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. [25] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. [26] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. [27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. # APPENDIX # Proof of Theorem 3.2 We prove Theorem 3.2 via induction. First, for l = 1, (8) holds as a special case of (2). Then suppose that Theorem 3.2 holds up to layer l:
1705.07565#48
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.07565
49
$$\tilde{\varepsilon}^{l} \le \sum_{h=1}^{l-1}\left(\prod_{k=h+1}^{l}\|\widehat{\mathbf{W}}^{k}\|_F\,\sqrt{\varepsilon^{h}}\right) + \sqrt{\varepsilon^{l}}. \qquad (10)$$

In order to show that (10) holds for layer $l+1$ as well, we refer to $\widehat{\mathbf{Y}}^{l+1} = \sigma(\widehat{\mathbf{W}}^{l+1\top}\mathbf{Y}^{l})$ as the 'layer-wise pruned output', where the input $\mathbf{Y}^{l}$ is fixed to be the same as in the originally well-trained network rather than the accumulated input $\widetilde{\mathbf{Y}}^{l}$, and have the following theorem.

Theorem 5.1. Consider layer $l+1$ in a pruned deep network; the difference between its accumulated pruned output, $\widetilde{\mathbf{Y}}^{l+1}$, and its layer-wise pruned output, $\widehat{\mathbf{Y}}^{l+1}$, is bounded by:

$$\|\widetilde{\mathbf{Y}}^{l+1} - \widehat{\mathbf{Y}}^{l+1}\|_F \le \sqrt{n}\,\|\widehat{\mathbf{W}}^{l+1}\|_F\,\tilde{\varepsilon}^{l}. \qquad (11)$$

Proof sketch: Consider one arbitrary element $\hat{y}^{l+1}_{ij}$ of the layer-wise pruned output:

$$\hat{y}^{l+1}_{ij} = \sigma\big(\widehat{\mathbf{W}}_i^{\top}\tilde{\mathbf{y}}^{l}_{j} + \widehat{\mathbf{W}}_i^{\top}(\mathbf{y}^{l}_{j} - \tilde{\mathbf{y}}^{l}_{j})\big) \le \tilde{y}^{l+1}_{ij} + \sigma\big(\widehat{\mathbf{W}}_i^{\top}(\mathbf{y}^{l}_{j} - \tilde{\mathbf{y}}^{l}_{j})\big) \le \tilde{y}^{l+1}_{ij} + \big|\widehat{\mathbf{W}}_i^{\top}(\mathbf{y}^{l}_{j} - \tilde{\mathbf{y}}^{l}_{j})\big|,$$
1705.07565#49
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.07565
50
$$\hat{y}^{l+1}_{ij} \le \tilde{y}^{l+1}_{ij} + \big|\widehat{\mathbf{W}}_i^{\top}(\mathbf{y}^{l}_{j} - \tilde{\mathbf{y}}^{l}_{j})\big|,$$

where $\widehat{\mathbf{W}}_i$ is the $i$-th column of $\widehat{\mathbf{W}}^{l+1}$. The first inequality is obtained because we suppose the activation function $\sigma(\cdot)$ is ReLU. Similarly, it holds for the accumulated pruned output:

$$\tilde{y}^{l+1}_{ij} \le \hat{y}^{l+1}_{ij} + \big|\widehat{\mathbf{W}}_i^{\top}(\mathbf{y}^{l}_{j} - \tilde{\mathbf{y}}^{l}_{j})\big|.$$

By combining the above two inequalities, we have $|\tilde{y}^{l+1}_{ij} - \hat{y}^{l+1}_{ij}| \le |\widehat{\mathbf{W}}_i^{\top}(\mathbf{y}^{l}_{j} - \tilde{\mathbf{y}}^{l}_{j})|$, and thus have the following inequality in matrix form:

$$\|\widetilde{\mathbf{Y}}^{l+1} - \widehat{\mathbf{Y}}^{l+1}\|_F \le \|\widehat{\mathbf{W}}^{l+1}(\mathbf{Y}^{l} - \widetilde{\mathbf{Y}}^{l})\|_F \le \|\widehat{\mathbf{W}}^{l+1}\|_F\,\|\mathbf{Y}^{l} - \widetilde{\mathbf{Y}}^{l}\|_F.$$

As $\tilde{\varepsilon}^{l}$ is defined as $\tilde{\varepsilon}^{l} = \frac{1}{\sqrt{n}}\|\widetilde{\mathbf{Y}}^{l} - \mathbf{Y}^{l}\|_F$, we have

$$\|\widetilde{\mathbf{Y}}^{l+1} - \widehat{\mathbf{Y}}^{l+1}\|_F \le \sqrt{n}\,\|\widehat{\mathbf{W}}^{l+1}\|_F\,\tilde{\varepsilon}^{l}.$$

This completes the proof of Theorem 5.1. By using (2), (11) and the triangle inequality, we are now able to extend (10) to layer $l+1$:
1705.07565#50
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.07565
51
This completes the proof of Theorem 5.1. By using (2), (11) and the triangle inequality, we are now able to extend (10) to layer $l+1$:

$$\tilde{\varepsilon}^{l+1} = \frac{1}{\sqrt{n}}\|\widetilde{\mathbf{Y}}^{l+1} - \mathbf{Y}^{l+1}\|_F \le \frac{1}{\sqrt{n}}\|\widetilde{\mathbf{Y}}^{l+1} - \widehat{\mathbf{Y}}^{l+1}\|_F + \frac{1}{\sqrt{n}}\|\widehat{\mathbf{Y}}^{l+1} - \mathbf{Y}^{l+1}\|_F \le \sum_{h=1}^{l}\left(\prod_{k=h+1}^{l+1}\|\widehat{\mathbf{W}}^{k}\|_F\,\sqrt{\varepsilon^{h}}\right) + \sqrt{\varepsilon^{l+1}}.$$

Finally, we conclude that (10) holds for all layers, and Theorem 3.2 is the special case with $l = L$.

# Extensive Experiments and Details

# Redundancy of Networks

LeNet-300-100 is a classical feed-forward network, which has three fully connected layers, with 267K learnable parameters. LeNet-5 is a convolutional neural network that has two convolutional

[Figure 2 plot omitted; x-axis: Pruning Ratio (%), curves: Random, LWC, ApoZ, Ours.]

Figure 2: Test accuracy on MNIST using LeNet-300-100 when continually pruning the first layer until the pruning ratio reaches 100%. Comparison of the ability to preserve prediction between LWC, ApoZ and our proposed L-OBS.
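The two elementary facts used in this proof, the ReLU sub-additivity $\sigma(a+b) \le \sigma(a) + \sigma(b)$ and the Frobenius-norm bound $\|\widehat{\mathbf{W}}(\mathbf{Y} - \widetilde{\mathbf{Y}})\|_F \le \|\widehat{\mathbf{W}}\|_F\|\mathbf{Y} - \widetilde{\mathbf{Y}}\|_F$, are easy to sanity-check numerically. The snippet below does so on random data; it is a numerical illustration only, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

W_hat = rng.standard_normal((100, 300))         # pruned weights of layer l+1 (rows = units)
Y     = rng.standard_normal((300, 50))          # original layer-l output, n = 50 samples
Y_til = Y + 0.1 * rng.standard_normal(Y.shape)  # accumulated (perturbed) layer-l output

# ReLU sub-additivity: sigma(a + b) <= sigma(a) + sigma(b), element-wise.
A, B = W_hat @ Y, W_hat @ (Y_til - Y)
assert np.all(relu(A + B) <= relu(A) + relu(B) + 1e-12)

# Matrix bound behind (11): ||sigma(W Y~) - sigma(W Y)||_F <= ||W||_F ||Y~ - Y||_F.
lhs = np.linalg.norm(relu(W_hat @ Y_til) - relu(W_hat @ Y))
rhs = np.linalg.norm(W_hat) * np.linalg.norm(Y_til - Y)
print(lhs <= rhs, lhs, rhs)   # True
```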
1705.07565#51
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.07565
52
[Figure 3 plot omitted; histogram of parameter sensitivity values (x-axis in units of $10^{-4}$, y-axis: number of parameters on a log scale).]

Figure 3: Distribution of sensitivity of parameters in LeNet-300-100's first layer. More than 90% of parameters' sensitivity scores are smaller than 0.001.

layers and two fully connected layers, with 431K learnable parameters. CIFAR-Net is a revised AlexNet for CIFAR-10 containing three convolutional layers and two fully connected layers.
1705.07565#52
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]
1705.07565
53
We first validate the redundancy of networks and the ability of our proposed Layer-wise OBS to find the parameters with the smallest sensitivity scores, using LeNet-300-100 on MNIST. In all cases, we first obtain a well-trained network without dropout or regularization terms. Then, we use four kinds of pruning criteria: Random, LWC [9], ApoZW, and Layer-wise OBS to prune parameters, and evaluate the performance of the whole network after every 100 pruning operations. Here, LWC is a magnitude-based criterion proposed in [9], which prunes the parameters with the smallest absolute values. ApoZW is a revised version of ApoZ [16], which measures the importance of each parameter $\mathbf{W}^l_{ij}$ in layer $l$ via a score $\tau^l_{ij}$ that combines the weight with the average of its input activation $y^{l-1}_j$ over the $n$ samples $p = 1, \dots, n$. In this way, both the magnitude of the parameter and its inputs are taken into consideration. The originally well-trained LeNet-300-100 model achieves a 1.8% error rate on MNIST without dropout. The four pruning criteria are each applied to the well-trained model's first layer, which has 235K parameters, while fixing the parameters of the other two layers, and the test accuracy of the whole network is recorded every 100 pruning operations without any retraining. The overall comparison results are summarized in Figure 2.
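For reference, the three non-random criteria can be written as per-weight scores. The sketch below computes the magnitude score (LWC), an input-weighted magnitude as one concrete reading of ApoZW, and the classical OBS sensitivity $w_q^2 / (2[\mathbf{H}^{-1}]_{qq})$ with a layer-wise Hessian estimated from the inputs. The exact ApoZW formula and the Hessian damping term are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def pruning_scores(W, Y_in, damp=1e-4):
    """Per-weight scores for a fully connected layer.

    W    : (m_in, m_out) weight matrix
    Y_in : (m_in, n) layer inputs over n samples
    """
    n = Y_in.shape[1]
    lwc = np.abs(W)                                        # magnitude criterion (LWC)
    apozw = np.abs(W) * Y_in.mean(axis=1, keepdims=True)   # weight magnitude x mean input
    H = (Y_in @ Y_in.T) / n + damp * np.eye(W.shape[0])    # layer-wise Hessian estimate
    H_inv_diag = np.diag(np.linalg.inv(H))
    lobs = W**2 / (2.0 * H_inv_diag[:, None])              # OBS-style sensitivity per weight
    return lwc, apozw, lobs

W = 0.05 * np.random.randn(784, 300)
Y_in = np.abs(np.random.randn(784, 1000))                  # non-negative inputs (e.g. pixels)
lwc, apozw, lobs = pruning_scores(W, Y_in)
print(lwc.shape, apozw.shape, lobs.shape)                  # all (784, 300)
```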
1705.07565#53
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
http://arxiv.org/pdf/1705.07565
Xin Dong, Shangyu Chen, Sinno Jialin Pan
cs.NE, cs.CV, cs.LG
null
null
cs.NE
20170522
20171109
[ { "id": "1607.03250" }, { "id": "1608.08710" }, { "id": "1511.06067" }, { "id": "1701.04465" }, { "id": "1603.04467" }, { "id": "1607.05423" } ]