Work in XNOR-NET (Rastegari et al., 2016), binary neural networks (Courbariaux & Bengio, 2016), DoReFa (Zhou et al., 2016) and trained ternary quantization (TTQ) (Zhu et al., 2016) targets the training pipeline. While TTQ targets weight quantization, most works that also target activation quantization show that quantizing activations always hurts accuracy: the XNOR-NET approach degrades Top-1 accuracy by 12% and DoReFa by 8% when quantizing both weights and activations to 1-bit (for AlexNet on ImageNet). Work by Gupta et al. (2015) advocates low-precision fixed-point numbers for training and shows 16 bits to be sufficient for training on the CIFAR-10 dataset. Work by Seide et al. (2014) quantizes gradients in a distributed computing system.
Knowledge distillation methods: The general technique in distillation-based methods is a teacher-student strategy, where a large deep network trained for a given task teaches shallower student network(s) on the same task. The core concepts behind knowledge distillation (or transfer) have been around for a while. Buciluǎ et al. (2006) show that one can compress the information in an ensemble into a single network. Ba & Caruana (2013) extend this approach to shallow, but wide, fully connected topologies that mimic deep neural networks. To facilitate learning, the authors introduce the concept of learning on logits rather than on the probability distribution.
Hinton et al. (2015) propose a framework to transfer knowledge by introducing the concept of temperature. The key idea is to divide the logits by a temperature factor before applying the Softmax function; a higher temperature boosts the activations of the incorrect classes, which lets more information flow to the model parameters during back-propagation. FitNets (Romero et al., 2014) extend this work by using intermediate hidden-layer outputs as target values for training a deeper, but thinner, student model. Net2Net (Chen et al., 2015a) also uses a teacher-student network system, with a function-preserving transformation to initialize the parameters of the student network; the goal in Net2Net is to accelerate the training of a larger student network. Zagoruyko & Komodakis (2016) use attention as a mechanism for transferring knowledge from one network to another. In a similar theme, Yim et al. (2017) propose an information metric with which a teacher DNN can transfer distilled knowledge to other student DNNs. In N2N learning, Ashok et al. (2017) propose a reinforcement-learning-based approach for compressing a teacher network into an equally capable student network, achieving a compression factor of 10x for ResNet-34 on CIFAR datasets.
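To make the temperature mechanism concrete, the short sketch below (not from the paper; a minimal PyTorch-style illustration with made-up logits) shows how dividing the logits by a temperature τ softens the output distribution: as τ grows, the probabilities assigned to the incorrect classes are boosted.

```python
import torch
import torch.nn.functional as F

def softened_probs(logits: torch.Tensor, tau: float) -> torch.Tensor:
    """Softmax of the logits divided by a temperature factor tau."""
    return F.softmax(logits / tau, dim=-1)

logits = torch.tensor([6.0, 2.0, 1.0])    # hypothetical teacher logits for 3 classes
print(softened_probs(logits, tau=1.0))     # sharply peaked on class 0
print(softened_probs(logits, tau=5.0))     # softer: incorrect classes get boosted
```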
Sparsity and hashing: A few other popular techniques for model compression are pruning (LeCun et al., 1990; Han et al., 2015a; Wen et al., 2016; Han et al., 2015b), hashing (Weinberger et al., 2009) and weight sharing (Chen et al., 2015b; Denil et al., 2013). Pruning removes neurons entirely from the final trained model, making the model a sparse structure. With hashing and weight-sharing schemes, a hash function aliases several weight parameters into a few hash buckets, effectively lowering the parameter memory footprint. To realize the benefits of sparsity and hashing schemes at runtime, efficient hardware support is required (e.g. support for irregular memory accesses (Han et al., 2016; Venkatesh et al., 2016; Parashar et al., 2017)).
# 4 KNOWLEDGE DISTILLATION
We introduce the concept of knowledge distillation in this section. Buciluǎ et al. (2006), Hinton et al. (2015) and Urban et al. (2016) analyze this topic in great detail.
Figure 2 shows the schematic of the knowledge distillation setup. Given an input image x, a teacher DNN maps this image to predictions $p_T$. The C class predictions are obtained by applying the Softmax function on the un-normalized log-probability values z (the logits), i.e. $p_T = e^{z_T} / \sum_j e^{z_T^{(j)}}$. The same image is fed to the student network, which predicts $p_A = e^{z_A} / \sum_j e^{z_A^{(j)}}$. The cost function, L, is given as:
$$\mathcal{L}(x; W_T, W_A) = \alpha \mathcal{H}(y, p_T) + \beta \mathcal{H}(y, p_A) + \gamma \mathcal{H}(z_T, p_A) \qquad (1)$$
where $W_T$ and $W_A$ are the parameters of the teacher and the student (apprentice) network, respectively, y is the ground truth, $\mathcal{H}(\cdot)$ denotes a loss function, and α, β and γ are weighting factors that prioritize the output of one loss term over the others.
Figure 2: Schematic of the knowledge distillation setup. The teacher network is a high-precision network and the apprentice network is a low-precision network.
In equation 1, lowering the first term of the cost function gives a better teacher network and lowering the second term gives a better student network. The third term is the knowledge distillation term, whereby the student network attempts to mimic the knowledge in the teacher network. In Hinton et al. (2015), the logits of the teacher network are divided by a temperature factor τ; using a higher value of τ produces a softer probability distribution when taking the Softmax of the logits. In our studies, we use the cross-entropy function for H(·), set α = 1, β = 0.5 and γ = 0.5, and perform the transfer-learning process using the logits (the inputs to the Softmax function) of the teacher network. In our experiments we study the effect of varying the depth of the teacher and the student network, and the precision of the neurons in the student network.
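The sketch below renders equation 1 in code. It is an illustrative PyTorch-style approximation rather than the paper's TensorFlow implementation: H is the cross-entropy function and the weights are the values stated above (α = 1, β = 0.5, γ = 0.5), while the distillation term is realized by treating the Softmax of the teacher logits (τ = 1) as the soft target for the student; the exact form of H(z_T, p_A) used by the authors may differ.

```python
import torch
import torch.nn.functional as F

def apprentice_loss(z_teacher, z_student, y, alpha=1.0, beta=0.5, gamma=0.5):
    """Joint cost of equation 1: alpha*H(y, p_T) + beta*H(y, p_A) + gamma*H(z_T, p_A)."""
    loss_teacher = F.cross_entropy(z_teacher, y)   # H(y, p_T): teacher vs. ground truth
    loss_student = F.cross_entropy(z_student, y)   # H(y, p_A): student vs. ground truth
    # Distillation term: cross-entropy of the student's predictions against soft targets
    # derived from the teacher logits (assumption: softmax at temperature 1, detached).
    soft_targets = F.softmax(z_teacher, dim=-1).detach()
    loss_distill = -(soft_targets * F.log_softmax(z_student, dim=-1)).sum(dim=-1).mean()
    return alpha * loss_teacher + beta * loss_student + gamma * loss_distill
```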
# 5 OUR APPROACH - APPRENTICE NETWORK
Low-precision DNNs target the storage and compute efficiency aspects of the network. Model compression targets the same efficiency parameters from the point of view of network architecture. With Apprentice we combine both techniques to improve the accuracy as well as the runtime efficiency of DNNs. Using the teacher-student setup described in the last section, we investigate three schemes with which one can obtain a low-precision model for the student network. The first scheme (scheme-A) jointly trains both networks - a full-precision teacher and a low-precision student network. The second scheme (scheme-B) trains only the low-precision student network but distills knowledge from a trained full-precision teacher network throughout the training process. The third scheme (scheme-C) starts with a trained full-precision teacher and a full-precision student network but fine-tunes the student network after lowering its precision. Before we get into the details of each of these schemes, we discuss the accuracy numbers obtained using low-precision schemes described in the literature. These accuracy figures serve as the baseline for comparative analysis.
# 5.1 TOP-1 ERROR WITH PRIOR PROPOSALS FOR LOW-PRECISION NETWORKS
We focus on sub-8-bit precision for inference deployments, specifically ternary and 4-bit precision. We found the TTQ (Zhu et al., 2016) scheme to achieve the state-of-the-art accuracy with ternary precision for weights and full-precision (32-bit floating-point) activations. On ImageNet-1K (Russakovsky et al., 2015), TTQ achieves a 33.4% Top-1 error rate with a ResNet-18 model. We implemented the TTQ scheme for ResNet-34 and ResNet-50 models trained on ImageNet-1K and achieved 28.3% and 25.6% Top-1 error rates, respectively. This scheme is our baseline for 2-bit weights and full-precision activations. For 2-bit weights and 8-bit activations, we find the work by Mellempudi et al. (2017) to achieve the best accuracies reported in the literature. For ResNet-50, Mellempudi et al. (2017) obtain 29.24% Top-1 error. We consider this work to be our baseline for 2-bit weight and 8-bit activation models.
For 4-bit precision, we find the WRPN scheme (Mishra et al., 2017) to report the highest accuracy. We implemented this scheme for 4-bit weights and 8-bit activations. For ResNet-34 and ResNet-50 models trained on ImageNet-1K, we achieve 29.7% and 28.4% Top-1 error rates, respectively.
# 5.2 SCHEME-A: JOINT TRAINING OF TEACHER-STUDENT NETWORKS
In the first scheme that we investigate, a full-precision teacher network is jointly trained with a low-precision student network. Figure 2 shows the overall training framework. We use the ResNet topology for both the teacher and the student network. When using a certain depth for the student network, we pick the teacher network to have either the same or a larger depth.
In Buciluǎ et al. (2006) and Hinton et al. (2015), only the student network trains while distilling knowledge from the teacher network. In our case, we jointly train with the rationale that the teacher network would continuously guide the student network not only with the final trained logits, but also on the path the teacher takes towards generating those final higher-accuracy logits.
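For concreteness, a minimal sketch of one scheme-A training step is given below. It is illustrative only (PyTorch-style, whereas the paper's implementation is in TensorFlow); `teacher` and `student` are assumed model placeholders, and `apprentice_loss` refers to the equation 1 sketch shown earlier.

```python
import torch

def joint_train_step(teacher, student, optimizer, images, labels):
    """One scheme-A step: teacher and student are optimized together on the joint cost."""
    z_teacher = teacher(images)     # full-precision teacher logits
    z_student = student(images)     # low-precision student logits
    loss = apprentice_loss(z_teacher, z_student, labels)  # equation 1 (see earlier sketch)
    optimizer.zero_grad()
    loss.backward()                 # the alpha term updates the teacher, beta/gamma the student
    optimizer.step()
    return loss.item()

# Hypothetical usage: a single optimizer over the parameters of both networks.
# optimizer = torch.optim.SGD(list(teacher.parameters()) + list(student.parameters()),
#                             lr=0.1, momentum=0.9)
```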
We implement the pre-activation version of ResNet (He et al., 2016) in TensorFlow. The training process closely follows the recipe of the Torch implementation of ResNet - we use a batch size of 256 and no hyper-parameters are changed from what is mentioned in that recipe. For the teacher network, we experiment with ResNet-34, ResNet-50 and ResNet-101. For the student network, we experiment with low-precision variants of ResNet-18, ResNet-34 and ResNet-50.
For low-precision numerics, when using ternary precision we use the ternary weight network (TWN) scheme (Li & Liu, 2016), where the weight tensors are quantized into {-1, 0, 1} with a per-layer scaling coefficient computed from the mean of the positive terms in the weight tensor. We use the WRPN scheme (Mishra et al., 2017) to quantize weights and activations to 4 bits or 8 bits. We do not lower the precision of the first layer and the final layer in the apprentice network; this is based on the observation in almost all prior works that lowering the precision of these layers degrades accuracy dramatically. During training and fine-tuning, the gradients are still maintained at full precision.
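The sketch below illustrates the two quantizers described above. It is a rough PyTorch-style approximation, not the authors' code: the ternary threshold of 0.7 times the mean absolute weight follows the TWN paper, and the k-bit quantizers follow the uniform scheme described in WRPN; the exact scaling and clipping details in the paper's implementation may differ.

```python
import torch

def ternary_quantize(w: torch.Tensor):
    """TWN-style ternarization: {-1, 0, +1} times a per-layer scale."""
    delta = 0.7 * w.abs().mean()                 # threshold (TWN heuristic)
    mask = (w.abs() > delta).float()
    scale = (w.abs() * mask).sum() / mask.sum()  # mean magnitude of the kept weights
    return scale * torch.sign(w) * mask

def kbit_quantize_weights(w: torch.Tensor, k: int):
    """WRPN-style uniform k-bit weight quantization: clip to [-1, 1], one bit for sign."""
    levels = 2 ** (k - 1) - 1
    w = torch.clamp(w, -1.0, 1.0)
    return torch.round(w * levels) / levels

def kbit_quantize_activations(a: torch.Tensor, k: int):
    """WRPN-style k-bit activation quantization: clip to [0, 1] (post-ReLU range)."""
    levels = 2 ** k - 1
    a = torch.clamp(a, 0.0, 1.0)
    return torch.round(a * levels) / levels
```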
Table 1: Top-1 validation set error rate (%) on ImageNet-1K for a ResNet-18 student network as the precision of activations (A) and weights (W) changes. The last three columns show the error rate when the student ResNet-18 is paired with ResNet-34, ResNet-50 and ResNet-101 teachers.
Results with ResNet-18: Table 1 shows the effect of lowering precision on the accuracy (Top-1 error) of ResNet-18 with the baseline (no teacher) and with ResNet-34, ResNet-50 and ResNet-101 as teachers. In the table, A denotes the precision of the activation maps (in bits) and W denotes the precision of the weights. The baseline Top-1 error for full-precision ResNet-18 is 30.4%. Lowering the precision without any help from a teacher network drops accuracy by 3.5% when using ternary and 4-bit precision (the column corresponding to "Res-18 Baseline" in the table). With the distillation-based technique, the accuracy of the low-precision configurations improves significantly. In fact, the accuracy of the full-precision ResNet-18 also improves when paired with a larger full-precision ResNet model (the row corresponding to "32A, 32W" in Table 1).
The best full-precision accuracy was achieved with a ResNet-18 student and ResNet-101 as the teacher (an improvement of 0.35% over the baseline). The gap between full-precision ResNet-18 and the best ternary-weight ResNet-18 is only 1% (an improvement of 2% over the previous best). With "8A, 4W", we find that the accuracy of the student ResNet-18 model beats the baseline accuracy. We hypothesize regularization from low precision (and distillation) to be the reason for this. "8A, 4W" improving accuracy beyond the baseline figure is seen only for ResNet-18.
Figure 3 shows the difference in Top-1 error rate achieved by our best low-precision student networks when trained under the guidance of a teacher network versus without any help from a teacher network. For this figure, the difference in Top-1 error of the best low-precision student network is calculated relative to the baseline full-precision network (i.e. ResNet-18 with 30.4% Top-1 error); that is, we want to see how close a low-precision student network can come to a full-precision baseline model. We find that our low-precision networks significantly close the gap to full-precision accuracy (and for some configurations even beat the baseline accuracy).
Hinton et al. (2015) mention improving the baseline full-precision accuracy when a student network is paired with a teacher network, for a small model on the MNIST dataset. We show the efficacy of distillation-based techniques on a much bigger model (ResNet) with a much larger dataset (ImageNet).
Table 2: Top-1 validation set error rate (%) on ImageNet-1K for a ResNet-34 student network as the precision of activations (A) and weights (W) changes. The last three columns show the error rate when the student ResNet-34 is paired with ResNet-34, ResNet-50 and ResNet-101 teachers.
| Precision (A, W) | ResNet-34 Baseline | ResNet-34 with ResNet-34 | ResNet-34 with ResNet-50 | ResNet-34 with ResNet-101 |
|---|---|---|---|---|
| 32A, 32W | 26.4 | 26.3 | 26.1 | 26.1 |
| 32A, 2W | 28.3 | 27.6 | 27.2 | 27.2 |
| 8A, 4W | 29.7 | 27.0 | 26.9 | 26.9 |
| 8A, 2W | 30.8 | 28.8 | 28.8 | 28.5 |
Table 3: Top-1 validation set error rate (%) on ImageNet-1K for a ResNet-50 student network as the precision of activations (A) and weights (W) changes. The final two columns show the error rate when the student ResNet-50 is paired with ResNet-50 and ResNet-101 teachers.
| Precision (A, W) | ResNet-50 Baseline | ResNet-50 with ResNet-50 | ResNet-50 with ResNet-101 |
|---|---|---|---|
| 32A, 32W | 23.8 | 23.7 | 23.5 |
| 32A, 2W | 26.1 | 25.4 | 25.3 |
| 8A, 4W | 28.5 | 25.5 | 25.3 |
| 8A, 2W | 29.2 | 27.3 | 27.2 |
(a) Apprentice versus baseline accuracy for ResNet-34. (b) Apprentice versus baseline accuracy for ResNet-50.
Figure 4: Difference in Top-1 error rate for low-precision variants of ResNet-34 and ResNet-50 with (blue bars) and without (red bars) the distillation scheme. The difference is calculated from the accuracy of the baseline network (ResNet-34 for (a) and ResNet-50 for (b)) operating at full precision. A higher % difference denotes a better network configuration.
Results with ResNet-34 and ResNet-50: Table 2 and Table 3 show the effect of lowering precision on the accuracy of ResNet-34 and ResNet-50, respectively, with the distillation-based technique. With a student ResNet-34 network, we use ResNet-34, ResNet-50 and ResNet-101 as teachers. With a student ResNet-50 network, we use ResNet-50 and ResNet-101 as teachers. The Top-1 error for full-precision ResNet-34 is 26.4%. Our best 4-bit weight and 8-bit activation ResNet-34 is within 0.5% of this number (26.9% with a ResNet-34 student and a ResNet-50 teacher), which significantly improves upon the previously reported error rate of 29.7%. 4-bit weights and 8-bit activations for ResNet-50 give us a model that is within 1.5% of the full-precision model accuracy (25.3% vs. 23.8%). Figure 4a and Figure 4b show the difference in Top-1 error achieved by our best low-precision ResNet-34 and ResNet-50 student networks, respectively, and compare with results obtained using methods proposed in the literature. Our Apprentice scheme
Discussion: In scheme-A, we use a teacher network that always has at least as many parameters as the student network. We experimented with a ternary ResNet-34 student network paired with a full-precision ResNet-18 teacher. The ternary ResNet-34 model is about 8.5x smaller in size than the full-precision ResNet-18 model. The final trained accuracy of the ternary ResNet-34 with this setup is 2.7% worse than that obtained by pairing the ternary ResNet-34 with a ResNet-50 teacher. This suggests that the distillation scheme works only when the teacher network is higher in accuracy than the student network (and not necessarily bigger in capacity). Further, the benefit from using a larger teacher network saturates at some point. This can be seen by picking a precision point, say "32A, 2W", and looking at the error rates along the corresponding row in Tables 1, 2 and 3.
One concern we had in the early stages of our investigation with joint training of a low-precision small network and a high-precision large network was the influence of the small network's accuracy on the accuracy of the large network. When using the joint cost function, the smaller network's probability scores are matched with the predictions from the teacher network, and the joint cost is added as a term to the total loss function (equation 1). This led us to posit that the larger network's learning capability would be affected by the inherent impairment in the smaller low-precision network. Further, since the smaller student network learns from the larger teacher network, a vicious cycle might form where the student network's accuracy drops further because the teacher network's learning capability is being impeded. In practice, however, we did not see this phenomenon: in each case where the teacher network was jointly trained with a student network, the accuracy of the teacher network was always within 0.1% to 0.2% of the accuracy of the same teacher network trained without jointly supervising a student network. This could be because of our choice of α, β and γ values.
In Section 4, we mentioned the temperature τ for the Softmax function and the hyper-parameters α = 1, β = 0.5 and γ = 0.5. Since we train directly on the logits of the teacher network, we did not have to experiment with the appropriate value of τ; τ is required when training on the soft targets produced by the teacher network. Although we did not do extensive studies on training with soft targets as opposed to logits, we did find that τ = 1 gives us the best results when training on soft targets. Hinton et al. (2015) mention that when the student network is significantly smaller than the teacher network, small values of τ are more effective than large values. For a few of the low-precision configurations, we experimented with α = β = γ = 1, and with α = 0.9, β = 1 and γ = 0.1 or 0.3. Each of these configurations yielded a lower-performing model compared to our original choice for these parameters.
For the third term in equation 1, we experimented with a mean-squared-error loss function and also with a loss function on the logits of both the student and the teacher network (i.e. H(z_T, z_A)). We did not find any improvement in accuracy compared to our original choice of cost function. A thorough investigation of the behavior of the networks with other values of the hyper-parameters and different loss functions is an agenda for our future work.
Overall, we find the distillation process to be quite effective in producing high-accuracy low-precision models. All our low-precision models surpass previously reported low-precision accuracy figures. For example, the TTQ scheme achieves a 33.4% Top-1 error rate for ResNet-18 with 2-bit weights; our best ResNet-18 model using scheme-A with 2-bit weights achieves a ~31.5% error rate, improving the model accuracy by ~2% over TTQ. Similarly, the scheme in Mellempudi et al. (2017) achieves 29.2% Top-1 error with 2-bit weights and 8-bit activations; the best-performing Apprentice network at this precision achieves 27.2% Top-1 error. For scheme-B and scheme-C, which we describe next, scheme-A serves as the new baseline.
# 5.3 SCHEME-B: DISTILLING KNOWLEDGE FROM A TEACHER
In this scheme, we start with a trained teacher network. Referring back to Figure 2, the input image is passed to both the teacher and the student network, except that learning with back-propagation happens only in the low-precision student network, which is trained from scratch. This is the scheme used by Buciluǎ et al. (2006) and Hinton et al. (2015) for training their student networks. In this scheme, the first term in equation 1 zeroes out and only the last two terms contribute to the loss function.
With scheme-B, one can pre-compute and store the logit values for the input images on disk and access them while training the student network. This saves the forward-pass computations in the teacher network.
[Figure 5 plots: a ResNet-34 student with a ResNet-50 teacher (2W/32A and 4W/8A) and a ResNet-50 student with a ResNet-101 teacher (2W/32A and 4W/8A), each comparing scheme-A against scheme-B; Top-1 error versus training epochs.]
Figure 5: Top-1 error rate versus epochs of four student networks using scheme-A and scheme-B.
Scheme-B might also help the scenario where a student network attempts to learn the "dark knowledge" from a teacher network that has already been trained on some private or sensitive data (in addition to the data the student network is interested in training on).
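A minimal sketch of the pre-computation idea is shown below (illustrative PyTorch-style code, not the authors' pipeline; `teacher`, the data loader, and the cache path are hypothetical). The teacher's logits are computed once and stored, so student training only needs the cheap lookup plus the student's own forward and backward passes.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def cache_teacher_logits(teacher, loader, path="teacher_logits.pt"):
    """Run the trained teacher once and store its logits (assumes a non-shuffled loader)."""
    teacher.eval()
    logits = [teacher(images) for images, _ in loader]
    torch.save(torch.cat(logits), path)

def scheme_b_step(student, optimizer, images, labels, cached_logits, beta=0.5, gamma=0.5):
    """Scheme-B step: only the student trains; the first term of equation 1 drops out."""
    z_student = student(images)
    soft_targets = F.softmax(cached_logits, dim=-1)   # teacher soft targets (tau = 1 assumed)
    loss = beta * F.cross_entropy(z_student, labels) \
           - gamma * (soft_targets * F.log_softmax(z_student, dim=-1)).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```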
With scheme-A, we had hypothesized that the student network would be influenced not only by the "dark knowledge" in the teacher network but also by the path the teacher adopts to learn that knowledge. With scheme-B, we find that the student network gets to similar accuracy numbers as the teacher network, albeit in fewer epochs.
With this scheme, the training accuracies are similar to those reported in Tables 1, 2 and 3; the low-precision student networks, however, learn in fewer epochs. Figure 5 plots the Top-1 error rates for a few of the configurations from our experiment suite. In each of these plots, the student network under scheme-B converges around the 80th-85th epoch, compared to about 105 epochs under scheme-A. In general, we find that the student networks with scheme-B learn in about 10%-20% fewer epochs than the student networks trained using scheme-A.
5.4 SCHEME-C: FINE-TUNING THE STUDENT MODEL
Scheme-C is very similar to scheme-B, except that the student network is primed with full-precision trained weights before the start of the training process. At the beginning of the training process, the weights and activations are lowered and the student network is, in effect, fine-tuned on the dataset. As in scheme-B, only the final two terms in equation 1 comprise the loss function, and the low-precision student network is trained with the back-propagation algorithm. Since the network starts from a good initial point, a comparatively low learning rate is used throughout the training process. There is no clear recipe for learning rates (and for how to change the learning rate with epochs) that works across all configurations. In general, we found training with a learning rate of 1e-3 for 10 to 15 epochs, followed by 1e-4 for another 5 to 10 epochs, followed by 1e-5 for another 5 epochs, to give the best accuracy. Some configurations run for about 40 to 50 epochs before stabilizing. For these configurations, we found training using scheme-B with a warm start (train the student network at full precision for about 25-30 epochs before lowering the precision) to be equally good.
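As one concrete reading of the schedule described above, the sketch below steps the learning rate at fixed epoch boundaries; the exact boundaries (15 and 25 epochs here) are one choice within the quoted ranges, not a prescribed recipe from the paper.

```python
# Minimal step schedule for scheme-C style fine-tuning (assumed boundaries).
def finetune_lr(epoch):
    if epoch < 15:    # roughly the first 10-15 epochs at 1e-3
        return 1e-3
    if epoch < 25:    # the next 5-10 epochs at 1e-4
        return 1e-4
    return 1e-5       # a final ~5 epochs at 1e-5

# Example use with a generic optimizer: at the start of each epoch,
#   for group in optimizer.param_groups: group["lr"] = finetune_lr(epoch)
```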
We found the final accuracy of the models obtained using this scheme to be (marginally) better than those obtained using scheme-A or scheme-B. Table 4 shows error rates of a few configurations of the low-precision student network obtained using scheme-A (or scheme-B) and scheme-C. For the ResNet-50 student network, the accuracy with ternary weights improves by a further 0.6% compared to that obtained using scheme-A. Note that the performance of the ternary networks obtained using scheme-A is already state-of-the-art. Hence, for ternary ResNet-50, a 24.7% Top-1 error rate is the new state-of-the-art. With this, ternary ResNet-50 is within 0.9% of the baseline accuracy (23.8% vs. 24.7%). Similarly, with 4-bit weights and 8-bit activations, the ResNet-50 model obtained using scheme-C is 0.4% better than that obtained with scheme-A (closing the gap to within 1.3% of the full-precision ResNet-50 accuracy).
Table 4: Top-1 ImageNet-1K validation set error rate (%) with scheme-A and scheme-C for ResNet-34 and ResNet-50 student networks with ternary and 4-bit precision.
| Student (teacher) | 2W, 32A: scheme-A or B | 2W, 32A: scheme-C | 4W, 8A: scheme-A or B | 4W, 8A: scheme-C |
|---|---|---|---|---|
| ResNet-34 student with ResNet-50 teacher | 27.2 | 26.9 | 26.9 | 26.8 |
| ResNet-50 student with ResNet-101 teacher | 25.3 | 24.7 | 25.5 | 25.1 |
Scheme-C is useful when one already has a trained network that can be fine-tuned using knowledge distillation schemes to produce a low-precision variant of the trained network.
5.5 DISCUSSION - TERNARY PRECISION VERSUS SPARSITY
As mentioned earlier, low precision is a form of model compression. There are many works which target network sparsification and pruning techniques to compress a model. With ternary precision models, the model size reduces by a factor of 2/32 compared to full-precision models. With Apprentice, we show how one can get a performant model with ternary precision. Many works targeting network pruning and sparsification apply their scheme to a full-precision model. To be comparable in model size to ternary networks, a full-precision model needs to be sparsified by 93.75%. Further, to be effective, a sparse model needs to store a key for every non-zero value denoting the position of that value in the weight tensor. This adds storage overhead, and a sparse model needs to be about 95% sparse to be at par in memory size with a 2-bit model. Note that ternary precision also has inherent sparsity (zero is a term in the ternary symbol dictionary);
we find our ternary models to be about 50% sparse. In work by Wen et al. (2016) and Han et al. (2015b), sparsification of full-precision networks is proposed, but the sparsity achieved is less than 93.75%. Further, the network accuracy using the techniques in both of these works degrades more than that of our ternary models. Overall, we believe our ternary precision models are state-of-the-art not only in accuracy (we improve on the accuracy of prior ternary precision models) but also when one considers the size of the model at the accuracy level achieved by low-precision or sparse networks.
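The storage comparison above can be checked with simple arithmetic. The sketch below assumes 32-bit full-precision values, 2-bit ternary values, and an 8-bit position key per surviving weight in the sparse model; the index width is an assumption used only to illustrate the overhead argument.

```python
def dense_bits(n_weights, bits_per_weight):
    return n_weights * bits_per_weight

def sparse_bits(n_weights, sparsity, value_bits=32, index_bits=8):
    nonzeros = n_weights * (1.0 - sparsity)
    return nonzeros * (value_bits + index_bits)

n = 25_000_000  # an illustrative ResNet-50-sized weight count
print(dense_bits(n, 2) / dense_bits(n, 32))        # ternary: 2/32 = 0.0625
print(sparse_bits(n, 0.9375) / dense_bits(n, 32))  # ~0.078: 93.75% sparsity is not enough once keys are counted
print(sparse_bits(n, 0.95) / dense_bits(n, 32))    # 0.0625: ~95% sparsity matches a 2-bit model
```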
# 6 CONCLUSIONS
We present three schemes based on the knowledge distillation concept to improve the accuracy of low-precision networks. Each of the three schemes improves the accuracy of the low-precision network configuration compared to prior proposals. We motivate the need for a smaller model size in low-batch, real-time and resource-constrained inference deployment systems. We envision the low-precision models produced by our schemes simplifying the inference deployment process on resource-constrained systems and on cloud-based deployment systems where low latency is a critical requirement.
# REFERENCES
A. Ashok, N. Rhinehart, F. Beainy, and K. M. Kitani. N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning. ArXiv e-prints, September 2017.

Lei Jimmy Ba and Rich Caurana. Do deep nets really need to be deep? CoRR, abs/1312.6184, 2013. URL http://arxiv.org/abs/1312.6184.
Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '06, pp. 535–541, New York, NY, USA, 2006. ACM. ISBN 1-59593-339-5. doi: 10.1145/1150402.1150464. URL http://doi.acm.org/10.1145/1150402.1150464.

Alfredo Canziani, Adam Paszke, and Eugenio Culurciello. An analysis of deep neural network models for practical applications. CoRR, abs/1605.07678, 2016. URL http://arxiv.org/abs/1605.07678.

Tianqi Chen, Ian J. Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. CoRR, abs/1511.05641, 2015a. URL http://arxiv.org/abs/1511.05641.
Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015b. URL http://arxiv.org/abs/1504.04788.

Matthieu Courbariaux and Yoshua Bengio. Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR, abs/1602.02830, 2016. URL http://arxiv.org/abs/1602.02830.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. CoRR, abs/1511.00363, 2015. URL http://arxiv.org/abs/1511.00363.

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. CoRR, abs/1306.0543, 2013. URL http://arxiv.org/abs/1306.0543.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. CoRR, abs/1502.02551, 2015. URL http://arxiv.org/abs/1502.02551.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. CoRR, abs/1510.00149, 2015a. URL http://arxiv.org/abs/1510.00149.

Song Han, Jeff Pool, John Tran, and William J. Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pp. 1135–1143, 2015b.
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient inference engine on compressed deep neural network. In 43rd ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2016, Seoul, South Korea, June 18-22, 2016, pp. 243–254, 2016. doi: 10.1109/ISCA.2016.30. URL https://doi.org/10.1109/ISCA.2016.30.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. CoRR, abs/1603.05027, 2016. URL http://arxiv.org/abs/1603.05027.
G. Hinton, O. Vinyals, and J. Dean. Distilling the Knowledge in a Neural Network. ArXiv e-prints, March 2015.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097–1105. Curran Associates, Inc., 2012.

Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, pp. 598–605. Morgan-Kaufmann, 1990. URL http://papers.nips.cc/paper/250-optimal-brain-damage.pdf.

Fengfu Li and Bin Liu. Ternary weight networks. CoRR, abs/1605.04711, 2016. URL http://arxiv.org/abs/1605.04711.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. CoRR, abs/1510.03009, 2015. URL http://arxiv.org/abs/1510.03009.
N. Mellempudi, A. Kundu, D. Mudigere, D. Das, B. Kaul, and P. Dubey. Ternary Neural Networks with Fine-Grained Quantization. ArXiv e-prints, May 2017.
A. Mishra, E. Nurvitadhi, J. J Cook, and D. Marr. WRPN: Wide Reduced-Precision Networks. ArXiv e-prints, September 2017.
Daisuke Miyashita, Edward H. Lee, and Boris Murmann. Convolutional neural networks using logarithmic data representation. CoRR, abs/1603.01025, 2016. URL http://arxiv.org/abs/1603.01025.
Angshuman Parashar, Minsoo Rhu, Anurag Mukkara, Antonio Puglielli, Rangharajan Venkatesan, Brucek Khailany, Joel S. Emer, Stephen W. Keckler, and William J. Dally. SCNN: An accelerator for compressed-sparse convolutional neural networks. CoRR, abs/1708.04485, 2017. URL http://arxiv.org/abs/1708.04485.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. CoRR, abs/1603.05279, 2016. URL http://arxiv.org/abs/1603.05279.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. CoRR, abs/1412.6550, 2014. URL http://arxiv.org/abs/1412.6550.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.
Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and application to data-parallel distributed training of speech dnns. In Interspeech 2014, September 2014.
Wonyong Sung, Sungho Shin, and Kyuyeon Hwang. Resiliency of deep neural networks under quantization. CoRR, abs/1511.06488, 2015. URL http://arxiv.org/abs/1511.06488.
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. CoRR, abs/1602.07261, 2016. URL http://arxiv.org/abs/1602.07261.

Torch implementation of ResNet. https://github.com/facebook/fb.resnet.torch.
Yaman Umuroglu, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Philip Heng Wai Leong, Magnus Jahre, and Kees A. Vissers. FINN: A framework for fast, scalable binarized neural network inference. CoRR, abs/1612.07119, 2016. URL http://arxiv.org/abs/1612.07119.
G. Urban, K. J. Geras, S. Ebrahimi Kahou, O. Aslan, S. Wang, R. Caruana, A. Mohamed, M. Philipose, and M. Richardson. Do Deep Convolutional Nets Really Need to be Deep and Convolutional? ArXiv e-prints, March 2016.

Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.
Ganesh Venkatesh, Eriko Nurvitadhi, and Debbie Marr. Accelerating deep convolutional networks using low-precision and sparsity. CoRR, abs/1610.00324, 2016. URL http://arxiv.org/abs/1610.00324.

Kilian Q. Weinberger, Anirban Dasgupta, Josh Attenberg, John Langford, and Alexander J. Smola. Feature hashing for large scale multitask learning. CoRR, abs/0902.2206, 2009. URL http://arxiv.org/abs/0902.2206.

Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. CoRR, abs/1608.03665, 2016. URL http://arxiv.org/abs/1608.03665.

Junho Yim, Donggyu Joo, Jihoon Bae, and Junmo Kim. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. CoRR, abs/1612.03928, 2016. URL http://arxiv.org/abs/1612.03928.

Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. CoRR, abs/1702.03044, 2017. URL http://arxiv.org/abs/1702.03044.
Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. CoRR, abs/1606.06160, 2016. URL http://arxiv.org/abs/1606.06160.
Chenzhuo Zhu, Song Han, Huizi Mao, and William J. Dally. Trained ternary quantization. CoRR, abs/1612.01064, 2016. URL http://arxiv.org/abs/1612.01064.
# 7 APPENDIX: ANALYSIS WITH RESNET ON CIFAR-10 DATASET
[Figure 6 bar charts omitted. Panel (a): Top-1 error (%) without the Apprentice scheme; panel (b): Top-1 error (%) using Apprentice scheme-A. Bars are shown for ResNet-20, 32, 44, 56 and 110 at four precision settings: 32-bit weights/32-bit activations, 2-bit weights/32-bit activations, 2-bit weights/8-bit activations, and 4-bit weights/8-bit activations.]
Figure 6: Comparison of various configurations of ResNet on CIFAR-10 with and without the Apprentice scheme.
In addition to the ImageNet dataset, we also experiment with the Apprentice scheme on the CIFAR-10 dataset. CIFAR-10 (Krizhevsky, 2009) consists of 50K training images and 10K testing images in 10 classes. We use various depths of the ResNet topology for this study. Our implementation of ResNet for CIFAR-10 closely follows the configuration in He et al. (2015). The network inputs are 32×32 images. The first layer is a 3×3 convolutional layer, followed by a stack of 6n layers with 3×3 convolutions on feature map sizes 32, 16 and 8, with 2n layers for each feature map size. The numbers of filters are 16, 32 and 64 in each set of 2n layers. This is followed by global average pooling, a 10-way fully connected layer and a softmax layer. Thus, in total there are 6n+2 weight layers.
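A small sketch of the 6n+2 layer plan just described is below; it only enumerates the layer plan (stem, three stages of 2n 3×3 convolutions, global average pooling, 10-way classifier) and is not the training code used for these experiments.

```python
def cifar_resnet_plan(n):
    # (layer type, number of filters, feature-map size)
    layers = [("conv3x3", 16, 32)]                       # first 3x3 conv layer
    for filters, fmap in ((16, 32), (32, 16), (64, 8)):  # 2n layers per feature-map size
        layers += [("conv3x3", filters, fmap)] * (2 * n)
    layers += [("global_avg_pool", 64, 1), ("fc", 10, 1)]
    weight_layers = sum(1 for kind, _, _ in layers if kind in ("conv3x3", "fc"))
    return layers, weight_layers

for n in (3, 5, 7, 9, 18):
    print(n, cifar_resnet_plan(n)[1])  # -> 20, 32, 44, 56, 110, i.e. 6n + 2
```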
Figure 6a shows the impact of lowering precision as the depth of ResNet varies. As the network becomes larger, the impact of lowering precision diminishes. For example, with ResNet-110, the full-precision Top-1 error rate is 6.19%; at the same depth, ternarizing the model gives similar accuracy (6.24%). Comparing this with ResNet-20, the gap between the full-precision and ternary model (2-bit weights and 32-bit activations) is 0.8% (7.9% vs. 8.7% Top-1 error). Overall, we find that a ternarized model closely follows the accuracy of the baseline full-precision model. However, lowering both weights and activations almost always leads to large accuracy degradation. The accuracy of the 2-bit weight, 8-bit activation network is 0.8%-1.6% worse than the full-precision model. Using the Apprentice scheme, this gap is considerably lowered.
Figure 6b shows the impact of lowering precision when a low-precision (student) network is paired with a full-precision (teacher) network. For this analysis we use scheme-A, where we jointly train both the teacher and the student network. The mix of ResNet depths used for this study is ResNet-20, 32, 44, 56, 110 and 182. The ResNet-20 student network was paired with deeper ResNets from this mix, i.e. ResNet-32, 44, 56, 110 and 182 (as five separate experiments). Similarly, the ResNet-44 student network was paired with the deeper ResNet-56 and ResNet-110 (as two separate experiments).
The ResNet-110 student network used ResNet-182 as its teacher network. For a particular ResNet depth, the figure plots the minimum error rate across each of the experiments.
We find the Apprentice scheme improves the baseline full-precision accuracy. The scheme also helps close the gap between this improved baseline accuracy and the accuracy obtained when lowering the precision of the weights and activations. The 2-bit weight, 8-bit activation network is now only 0.4%-0.8% worse than the full-precision model.
# Decoupled Weight Decay Regularization

# ABSTRACT
L2 regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate this is not the case for adaptive gradient algorithms, such as Adam. While common implementations of these algorithms employ L2 regularization (often calling it "weight decay" in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available at https://github.com/loshchil/AdamW-and-SGDW
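The SGD equivalence claimed above can be written in two lines: with learning rate `lr`, L2 coefficient `lam`, and decay factor `wd = lr * lam`, the two updates coincide. This is a minimal sketch with illustrative parameter names, not code from the paper.

```python
def sgd_l2(w, grad, lr, lam):
    # gradient step on loss + (lam / 2) * ||w||^2
    return w - lr * (grad + lam * w)

def sgd_weight_decay(w, grad, lr, wd):
    # decay applied directly to the weights
    return w - lr * grad - wd * w

# sgd_l2(w, g, lr, lam) == sgd_weight_decay(w, g, lr, lr * lam) for any w, g.
```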
# 1 INTRODUCTION
Adaptive gradient methods, such as AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), Adam (Kingma & Ba, 2014) and most recently AMSGrad (Reddi et al., 2018), have become a default method of choice for training feed-forward and recurrent neural networks (Xu et al., 2015; Radford et al., 2015). Nevertheless, state-of-the-art results for popular image classification datasets, such as CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), are still obtained by applying SGD with momentum (Gastaldi, 2017; Cubuk et al., 2018). Furthermore, Wilson et al. (2017) suggested that adaptive gradient methods do not generalize as well as SGD with momentum when tested on a diverse set of deep learning tasks, such as image classification, character-level language modeling and constituency parsing. Different hypotheses about the origins of this worse generalization have been investigated, such as the presence of sharp local minima
(Keskar et al., 2016; Dinh et al., 2017) and inherent problems of adaptive gradient methods (Wilson et al., 2017). In this paper, we investigate whether it is better to use L2 regularization or weight decay regularization to train deep neural networks with SGD and Adam. We show that a major factor of the poor generalization of the most popular adaptive gradient method, Adam, is due to the fact that L2 regularization is not nearly as effective for it as for SGD. Specifically, our analysis of Adam leads to the following observations (a short update-rule sketch after this list illustrates the difference):
1711.05101 | 4 | L2 regularization and weight decay are not identical. The two techniques can be made equivalent for SGD by a reparameterization of the weight decay factor based on the learning rate; however, as is often overlooked, this is not the case for Adam. In particular, when combined with adaptive gradients, L2 regularization leads to weights with large historic parameter and/or gradient amplitudes being regularized less than they would be when using weight decay.
L2 regularization is not effective in Adam. One possible explanation why Adam and other adaptive gradient methods might be outperformed by SGD with momentum is that common deep learning libraries only implement L2 regularization, not the original weight decay. Therefore, on tasks/datasets where the use of L2 regularization is beneficial for SGD (e.g., on many popular image classification datasets), Adam leads to worse results than SGD with momentum (for which L2 regularization behaves as expected).
Weight decay is equally effective in both SGD and Adam. For SGD, it is equivalent to L2 regularization, while for Adam it is not.
Optimal weight decay depends on the total number of batch passes/weight updates. Our empirical analysis of SGD and Adam suggests that the larger the runtime/number of batch passes to be performed, the smaller the optimal weight decay.
1711.05101 | 5 | Adam can substantially benefit from a scheduled learning rate multiplier. The fact that Adam is an adaptive gradient algorithm and as such adapts the learning rate for each parameter does not rule out the possibility to substantially improve its performance by using a global learning rate multiplier, scheduled, e.g., by cosine annealing.
The main contribution of this paper is to improve regularization in Adam by decoupling the weight decay from the gradient-based update. In a comprehensive analysis, we show that Adam generalizes substantially better with decoupled weight decay than with L2 regularization, achieving 15% relative improvement in test error (see Figures 2 and 3); this holds true for various image recognition datasets (CIFAR-10 and ImageNet32x32), training budgets (ranging from 100 to 1800 epochs), and learning rate schedules (fixed, drop-step, and cosine annealing; see Figure 1). We also demonstrate that our decoupled weight decay renders the optimal settings of the learning rate and the weight decay factor much more independent, thereby easing hyperparameter optimization (see Figure 2).
1711.05101 | 6 | The main motivation of this paper is to improve Adam to make it competitive w.r.t. SGD with momentum even for those problems where it did not use to be competitive. We hope that as a result, practitioners do not need to switch between Adam and SGD anymore, which in turn should reduce the common issue of selecting dataset/task-specific training algorithms and their hyperparameters.
# 2 DECOUPLING THE WEIGHT DECAY FROM THE GRADIENT-BASED UPDATE
In the weight decay described by Hanson & Pratt (1988), the weights θ decay exponentially as
$\theta_{t+1} = (1 - \lambda)\theta_t - \alpha \nabla f_t(\theta_t)$, (1)
where λ defines the rate of the weight decay per step and ∇ft(θt) is the t-th batch gradient to be multiplied by a learning rate α. For standard SGD, it is equivalent to standard L2 regularization: Proposition 1 (Weight decay = L2 reg for standard SGD). Standard SGD with base learning rate α executes the same steps on batch loss functions ft(θ) with weight decay λ (defined in Equation 1) as it executes without weight decay on $f_t^{reg}(\theta) = f_t(\theta) + \frac{\lambda'}{2}\|\theta\|_2^2$, with λ' = λ/α.
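To make Proposition 1 concrete, the following minimal NumPy sketch (illustrative code, not from the paper; the toy loss and all names are invented) runs plain SGD with decoupled weight decay (Equation 1) alongside SGD on the L2-regularized loss with λ' = λ/α and checks that the two trajectories coincide.

```python
import numpy as np

def grad_f(theta):
    # gradient of a toy batch loss f(theta) = 0.5 * ||theta - 1||^2
    return theta - 1.0

alpha, lam = 0.1, 0.01        # learning rate and decoupled weight decay rate
lam_prime = lam / alpha       # L2 coefficient from Proposition 1: lambda' = lambda / alpha

theta_wd = np.array([2.0, -3.0])   # SGD with decoupled weight decay (Equation 1)
theta_l2 = theta_wd.copy()         # SGD on f(theta) + 0.5 * lambda' * ||theta||^2

for _ in range(100):
    theta_wd = (1 - lam) * theta_wd - alpha * grad_f(theta_wd)
    theta_l2 = theta_l2 - alpha * (grad_f(theta_l2) + lam_prime * theta_l2)

print(np.allclose(theta_wd, theta_l2))   # True: the two formulations match for plain SGD
```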
1711.05101 | 7 | The proofs of this well-known fact, as well as our other propositions, are given in Appendix A.
Due to this equivalence, L2 regularization is very frequently referred to as weight decay, including in popular deep learning libraries. However, as we will demonstrate later in this section, this equivalence does not hold for adaptive gradient methods. One fact that is often overlooked already for the simple case of SGD is that in order for the equivalence to hold, the L2 regularizer λ' has to be set to λ/α, i.e., if there is an overall best weight decay value λ, the best value of λ' is tightly coupled with the learning rate α. In order to decouple the effects of these two hyperparameters, we advocate to decouple the weight decay step as proposed by Hanson & Pratt (1988) (Equation 1).
1711.05101 | 8 | Looking first at the case of SGD, we propose to decay the weights simultaneously with the update of θt based on gradient information in Line 9 of Algorithm 1. This yields our proposed variant of SGD with momentum using decoupled weight decay (SGDW). This simple modification explicitly decouples λ and α (although some problem-dependent implicit coupling may of course remain as for any two hyperparameters). In order to account for a possible scheduling of both α and λ, we introduce a scaling factor ηt delivered by a user-defined procedure SetScheduleMultiplier(t).
Now, let's turn to adaptive gradient algorithms like the popular optimizer Adam (Kingma & Ba, 2014), which scale gradients by their historic magnitudes. Intuitively, when Adam is run on a loss function f plus L2 regularization, weights that tend to have large gradients in f do not get regularized as much as they would with decoupled weight decay, since the gradient of the regularizer gets scaled
Algorithm 1 SGD with L2 regularization and SGD with decoupled weight decay (SGDW), both with momentum
1: given initial learning rate α ∈ IR, momentum factor β1 ∈ IR, weight decay/L2 regularization factor λ ∈ IR
1711.05101 | 9 | 2: initialize time step t ← 0, parameter vector θt=0 ∈ IR^n, first moment vector mt=0 ← 0, schedule multiplier ηt=0 ∈ IR
3: repeat
4: t ← t + 1
5: ∇ft(θt−1) ← SelectBatch(θt−1)   ▷ select batch and return the corresponding gradient
6: gt ← ∇ft(θt−1) + λθt−1   (the λθt−1 term is used only in the L2-regularization variant)
7: ηt ← SetScheduleMultiplier(t)   ▷ can be fixed, decay, or also be used for warm restarts
8: mt ← β1mt−1 + ηtαgt
9: θt ← θt−1 − mt − ηtλθt−1   (the −ηtλθt−1 term is used only in SGDW)
10: until stopping criterion is met
11: return optimized parameters θt
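The following NumPy sketch of the SGDW update is illustrative code written for this document (not the implementation used in the experiments); the batch-gradient function and the constant schedule multiplier are placeholders.

```python
import numpy as np

def sgdw_step(theta, m, grad, alpha, beta1, lam, eta):
    """One SGDW step: momentum update on the loss gradient only (line 8),
    with the weight decay -eta*lam*theta applied directly to the weights (line 9)."""
    m = beta1 * m + eta * alpha * grad
    theta = theta - m - eta * lam * theta
    return theta, m

# toy usage with a quadratic loss and a fixed schedule multiplier eta_t = 1
theta, m = np.array([5.0, -2.0]), np.zeros(2)
for t in range(1, 201):
    grad = theta - 1.0                      # placeholder for SelectBatch(theta)
    theta, m = sgdw_step(theta, m, grad, alpha=0.05, beta1=0.9, lam=1e-3, eta=1.0)
print(theta)
```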
Algorithm 2 Adam with L2 regularization and Adam with decoupled weight decay (AdamW)
1: given α = 0.001, β1 = 0.9, β2 = 0.999, ε = 10^−8, λ ∈ IR
2: initialize time step t ← 0, parameter vector θt=0 ∈ IR^n, first moment vector mt=0 ← 0, second moment vector vt=0 ← 0, schedule multiplier ηt=0 ∈ IR
1711.05101 | 10 | 3: repeat
4: t ← t + 1
5: ∇ft(θt−1) ← SelectBatch(θt−1)   ▷ select batch and return the corresponding gradient
6: gt ← ∇ft(θt−1) + λθt−1   (the λθt−1 term is used only in the L2-regularization variant)
7: mt ← β1mt−1 + (1 − β1)gt   ▷ here and below all operations are element-wise
8: vt ← β2vt−1 + (1 − β2)gt²
9: m̂t ← mt/(1 − β1^t)   ▷ β1 is taken to the power of t
10: v̂t ← vt/(1 − β2^t)   ▷ β2 is taken to the power of t
11: ηt ← SetScheduleMultiplier(t)   ▷ can be fixed, decay, or also be used for warm restarts
12: θt ← θt−1 − ηt (αm̂t/(√v̂t + ε) + λθt−1)   (the λθt−1 term is used only in AdamW)
13: until stopping criterion is met
14: return optimized parameters θt
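As a compact illustration of lines 6–12, here is a NumPy sketch (our own illustrative code, not a reference implementation) of one step of Adam with either L2 regularization or decoupled weight decay (AdamW), selected by a flag.

```python
import numpy as np

def adam_step(theta, m, v, t, grad, alpha=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, lam=1e-2, eta=1.0, decoupled=True):
    """One Adam step with L2 regularization (decoupled=False) or AdamW (decoupled=True)."""
    g = grad if decoupled else grad + lam * theta   # L2 variant adds lam*theta to the gradient
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    adaptive_step = alpha * m_hat / (np.sqrt(v_hat) + eps)
    decay = lam * theta if decoupled else 0.0       # AdamW applies the decay outside the adaptation
    theta = theta - eta * (adaptive_step + decay)
    return theta, m, v

theta, m, v = np.array([1.0, -1.0]), np.zeros(2), np.zeros(2)
for t in range(1, 11):
    grad = 2 * theta                                # toy loss ||theta||^2
    theta, m, v = adam_step(theta, m, v, t, grad)
print(theta)
```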
along with the gradient of f. This leads to an inequivalence of L2 and decoupled weight decay regularization for adaptive gradient algorithms:
1711.05101 | 11 | Proposition 2 (Weight decay ≠ L2 reg for adaptive gradients). Let O denote an optimizer that has iterates θt+1 ← θt − αMt∇ft(θt) when run on batch loss function ft(θ) without weight decay, and θt+1 ← (1 − λ)θt − αMt∇ft(θt) when run on ft(θ) with weight decay, respectively, with Mt ≠ kI (where k ∈ IR). Then, for O there exists no L2 coefficient λ' such that running O on batch loss $f_t^{reg}(\theta) = f_t(\theta) + \frac{\lambda'}{2}\|\theta\|_2^2$ without weight decay is equivalent to running O on ft(θ) with decay λ ∈ IR+.
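The following small NumPy calculation (ours, for illustration only) shows the obstruction in Proposition 2 for a fixed diagonal preconditioner M ≠ kI: matching the decoupled weight decay step coordinate-wise would require a per-coordinate L2 coefficient λ/(αMii), so no single λ' works for all coordinates.

```python
import numpy as np

alpha, lam = 0.1, 0.01
M = np.array([1.0, 0.01])        # fixed diagonal preconditioner with M != k*I
theta = np.array([1.0, 1.0])
grad = np.zeros(2)               # zero loss gradient isolates the regularization effect

# decoupled weight decay step: theta <- (1 - lam)*theta - alpha*M*grad
step_wd = (1 - lam) * theta - alpha * M * grad

# L2 step with a shared coefficient lam2: theta <- theta - alpha*M*(grad + lam2*theta),
# i.e. coordinate i is shrunk by (1 - alpha*M[i]*lam2); matching (1 - lam) requires:
required_lam2 = lam / (alpha * M)
print(step_wd)          # [0.99 0.99]
print(required_lam2)    # [ 0.1 10. ]  -> a different lam2 per coordinate, so no single value works
```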
We decouple weight decay and loss-based gradient updates in Adam as shown in line 12 of Algorithm 2; this gives rise to our variant of Adam with decoupled weight decay (AdamW).
1711.05101 | 12 | Having shown that L2 regularization and weight decay regularization differ for adaptive gradient algorithms raises the question of how they differ and how to interpret their effects. Their equivalence for standard SGD remains very helpful for intuition: both mechanisms push weights closer to zero, at the same rate. However, for adaptive gradient algorithms they differ: with L2 regularization, the sums of the gradient of the loss function and the gradient of the regularizer (i.e., the L2 norm of the weights) are adapted, whereas with decoupled weight decay, only the gradients of the loss function are adapted (with the weight decay step separated from the adaptive gradient mechanism). With L2 regularization both types of gradients are normalized by their typical (summed) magnitudes, and therefore weights x with large typical gradient magnitude s are regularized by a smaller relative amount than other weights. In contrast, decoupled weight decay regularizes all weights with the same rate λ, effectively regularizing weights x with large s more than standard L2 regularization
1711.05101 | 13 | does. We demonstrate this formally for a simple special case of adaptive gradient algorithm with a fixed preconditioner: Proposition 3 (Weight decay = scale-adjusted L2 reg for adaptive gradient algorithm with fixed preconditioner). Let O denote an algorithm with the same characteristics as in Proposition 2, and using a fixed preconditioner matrix Mt = diag(s)^−1 (with si > 0 for all i). Then, O with base learning rate α executes the same steps on batch loss functions ft(θ) with weight decay λ as it executes without weight decay on the scale-adjusted regularized batch loss
$f_t^{reg}(\theta) = f_t(\theta) + \frac{\lambda'}{2}\left\|\theta \odot \sqrt{s}\right\|_2^2$, (2)
where ⊙ and √· denote element-wise multiplication and square root, respectively, and λ' = λ/α.
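For intuition, this short NumPy sketch (our own, with invented values) checks Proposition 3 numerically: with a fixed preconditioner M = diag(s)^−1, the run with decoupled weight decay λ coincides with the run without weight decay on the scale-adjusted loss of Equation 2, using λ' = λ/α.

```python
import numpy as np

alpha, lam = 0.1, 0.01
s = np.array([4.0, 0.25])          # inverse preconditioner entries, M = diag(s)^-1
lam_prime = lam / alpha

def grad_f(theta):                  # gradient of a toy batch loss
    return theta - 1.0

theta_wd = np.array([2.0, -2.0])    # preconditioned steps with decoupled weight decay
theta_reg = theta_wd.copy()         # preconditioned steps on the scale-adjusted loss (Equation 2)

for _ in range(50):
    theta_wd = (1 - lam) * theta_wd - alpha * (1 / s) * grad_f(theta_wd)
    # gradient of (lam'/2) * ||theta * sqrt(s)||^2 is lam' * s * theta
    theta_reg = theta_reg - alpha * (1 / s) * (grad_f(theta_reg) + lam_prime * s * theta_reg)

print(np.allclose(theta_wd, theta_reg))   # True
```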
1711.05101 | 14 | We note that this proposition does not directly apply to practical adaptive gradient algorithms, since these change the preconditioner matrix at every step. Nevertheless, it can still provide intuition about the equivalent loss function being optimized in each step: parameters θi with a large inverse preconditioner si (which in practice would be caused by historically large gradients in dimension i) are regularized relatively more than they would be with L2 regularization; specifically, the regularization of θi is proportional to si (cf. Equation 2).
# 3 JUSTIFICATION OF DECOUPLED WEIGHT DECAY VIA A VIEW OF ADAPTIVE GRADIENT METHODS AS BAYESIAN FILTERING
1711.05101 | 15 | We now discuss a justification of decoupled weight decay in the framework of Bayesian filtering for a unified theory of adaptive gradient algorithms due to Aitchison (2018). After we posted a preliminary version of our current paper on arXiv, Aitchison noted that his theory "gives us a theoretical framework in which we can understand the superiority of this weight decay over L2 regularization, because it is weight decay, rather than L2 regularization that emerges through the straightforward application of Bayesian filtering." (Aitchison, 2018). While full credit for this theory goes to Aitchison, we summarize it here to shed some light on why weight decay may be favored over L2 regularization.
1711.05101 | 16 | Aitchison (2018) views stochastic optimization of n parameters θ1, . . . , θn as a Bayesian filtering problem with the goal of inferring a distribution over the optimal values of each of the parameters θi given the current values of the other parameters θ−i(t) at time step t. When the other parameters do not change this is an optimization problem, but when they do change it becomes one of "tracking" the optimizer using Bayesian filtering as follows. One is given a probability distribution P(θt | y1:t) of the optimizer at time step t that takes into account the data y1:t from the first t mini batches, a state transition prior P(θt+1 | θt) reflecting a (small) data-independent change in this distribution from one step to the next, and a likelihood P(yt+1 | θt+1) derived from the mini batch at step t + 1. The posterior distribution P(θt+1 | y1:t+1) of the optimizer at time step t + 1 can then be computed (as usual in Bayesian filtering) by
1711.05101 | 17 | marginalizing over θt to obtain the one-step-ahead predictions P(θt+1 | y1:t) and then applying Bayes' rule to incorporate the likelihood P(yt+1 | θt+1). Aitchison (2018) assumes a Gaussian state transition distribution P(θt+1 | θt) and an approximate conjugate likelihood P(yt+1 | θt+1), leading to the following closed-form update of the filtering distribution's mean:
1711.05101 | 18 | $\mu_{\mathrm{post}} = \mu_{\mathrm{prior}} + \Sigma_{\mathrm{post}} \times g$, (3)
where g is the gradient of the log likelihood of the mini batch at time t. This result implies a preconditioner of the gradients that is given by the posterior uncertainty Σpost of the filtering distribution: updates are larger for parameters we are more uncertain about and smaller for parameters we are more certain about. Aitchison (2018) goes on to show that popular adaptive gradient methods, such as Adam and RMSprop, as well as Kronecker-factorized methods are special cases of this framework.
Decoupled weight decay very naturally fits into this unified framework as part of the state-transition distribution: Aitchison (2018) assumes a slow change of the optimizer according to the following Gaussian:
$P(\theta_{t+1} \mid \theta_t) = N((I - A)\theta_t, Q)$, (4)
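The toy diagonal-Gaussian filter below (our own construction; the noise and likelihood-precision values are invented for illustration) shows how Equations 3 and 4 interact: the transition prior shrinks the mean by (1 − λ), which is exactly a decoupled weight decay step, while the gradient update is scaled by the posterior variance independently of that decay.

```python
import numpy as np

lam = 0.01            # decay in the transition prior, A = lam * I (Equation 4)
q = 1e-4              # transition noise variance, Q = q * I
lik_prec = 10.0       # assumed precision of the approximate Gaussian likelihood

mu = np.array([1.0, -1.0])       # filtering mean over the weights
var = np.array([0.1, 0.1])       # diagonal filtering variance

def grad_log_lik(theta):         # toy gradient of the mini-batch log likelihood
    return -(theta - 0.5)

for _ in range(100):
    # predict with P(theta_{t+1} | theta_t) = N((I - A) theta_t, Q): weight decay enters here
    mu = (1 - lam) * mu
    var = (1 - lam) ** 2 * var + q
    # update: posterior variance, then mean step scaled by it (Equation 3)
    var = 1.0 / (1.0 / var + lik_prec)
    mu = mu + var * grad_log_lik(mu)

print(mu)
```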
1711.05101 | 19 | [Figure 1 heatmap grid: Adam (top row) and AdamW (bottom row), each without cosine annealing (fixed learning rate), with step-drop learning rate decay, and with cosine annealing; x-axis: L2 regularization factor / weight decay to be multiplied by 0.001 (caption below).]
Figure 1: Adam performs better with decoupled weight decay (bottom row, AdamW) than with L2 regularization (top row, Adam). We show the final test error of a 26 2x64d ResNet on CIFAR-10 after 100 epochs of training with fixed learning rate (left column), step-drop learning rate (with drops at epoch indexes 30, 60 and 80; middle column) and cosine annealing (right column). AdamW leads to a more separable hyperparameter search space, especially when a learning rate schedule, such as step-drop or cosine annealing, is applied. Cosine annealing yields clearly superior results.
1711.05101 | 22 | where Q is the covariance of Gaussian perturbations of the weights, and A is a regularizer to avoid values growing unboundedly over time. When instantiated as A = λ × I, this regularizer A plays exactly the role of decoupled weight decay as described in Equation 1, since this leads to multiplying the current mean estimate θt by (1 − λ) at each step. Notably, this regularization is also directly applied to the prior and does not depend on the uncertainty in each of the parameters (which would be required for L2 regularization).
# 4 EXPERIMENTAL VALIDATION
1711.05101 | 23 | We now evaluate the performance of decoupled weight decay under various training budgets and learning rate schedules. Our experimental setup follows that of Gastaldi (2017), who proposed, in addition to L2 regularization, to apply the new Shake-Shake regularization to a 3-branch residual DNN that allowed to achieve new state-of-the-art results of 2.86% on the CIFAR-10 dataset (Krizhevsky, 2009). We used the same model/source code based on fb.resnet.torch¹. We always used a batch size of 128 and applied the regular data augmentation procedure for the CIFAR datasets. The base networks are a 26 2x64d ResNet (i.e. the network has a depth of 26, 2 residual branches and the first residual block has a width of 64) and a 26 2x96d ResNet with 11.6M and 25.6M parameters, respectively. For a detailed description of the network and the Shake-Shake method, we refer the interested reader to Gastaldi (2017). We also perform experiments on the ImageNet32x32 dataset (Chrabaszcz et al., 2017), a downsampled version of the original ImageNet dataset with 1.2 million 32×32 pixels images.
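As a usage-level sketch of the kind of setup compared in this section, the following PyTorch snippet trains a placeholder model with AdamW and a cosine annealing schedule; the model, data loader, and hyperparameter values are illustrative stand-ins rather than the exact 26 2x64d Shake-Shake configuration.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # stand-in for the ResNet
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)   # anneal over 100 epochs

def train_one_epoch(loader):
    for images, labels in loader:          # CIFAR-10-style batches of size 128
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# assuming a `train_loader` built elsewhere:
# for epoch in range(100):
#     train_one_epoch(train_loader)
#     scheduler.step()
```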
1711.05101 | 26 | # 1https://github.com/xgastaldi/shake-shake
[Figure 2 heatmap grid: SGD and SGDW (top row), Adam and AdamW (bottom row); y-axis: initial learning rate (to be multiplied by 0.1 for Adam/AdamW), x-axis: L2 regularization factor or weight decay to be multiplied by 0.001 (caption below).]
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 27 | sGDW 7 les le a6) Ey 1199] 5 val las nae] 11286 W512] 4 sro2a| 3 ova 16 8 4121 Weight decay to be multiplied by 0.001 Initial learning rate
âAdam 7 les le 18 a6 5s +139 5 Lal las ana ls 11286 W512 EY 3 My 2 18 V8 ie 12 1 2 4 8 6 L2 regularization factor to be multiplied by 0.001 Initial learning rate to be multiplied by 0.1
Adamw- 7 les le rc) sN6 | 55 1139 5 Lal las nae ls 11286 W512 EY âroa! 3 0 v2 v6 8 4 1 2 4 88 Weight decay to be multiplied by 0.001 Initial learning rate to be multiplied by 0.1
Figure 2: The Top-1 test error of a 26 2x64d ResNet on CIFAR-10 measured after 100 epochs. In each panel, the x-axis varies the L2 regularization factor (SGD, Adam) or the weight decay factor (SGDW, AdamW), to be multiplied by 0.001, and the y-axis varies the initial learning rate. The proposed SGDW and AdamW (right column) have a more separable hyperparameter space.
We considered three learning rate schedules: a fixed learning rate, a step-drop learning rate schedule, and a cosine annealing schedule (Loshchilov & Hutter, 2016). Since Adam already adapts its parameterwise learning rates, it is not as common to use a learning rate multiplier schedule with it as it is with SGD, but as our results show, such schedules can substantially improve Adam's performance, and we advocate not to overlook their use for adaptive gradient algorithms.
For each learning rate schedule and weight decay variant, we trained a 2x64d ResNet for 100 epochs, using different settings of the initial learning rate α and the weight decay factor λ. Figure 1 shows that decoupled weight decay outperforms L2 regularization for all learning rate schedules, with larger differences for better learning rate schedules. We also note that decoupled weight decay leads to a more separable hyperparameter search space, especially when a learning rate schedule such as step-drop or cosine annealing is applied. The figure also shows that cosine annealing clearly outperforms the other learning rate schedules; we thus used cosine annealing for the remainder of the experiments.
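To make the three schedules concrete, here is a minimal sketch (not taken from the paper's code) of the corresponding learning-rate multipliers; the step-drop interval and drop factor are illustrative assumptions, not the settings used in the experiments.

```python
import math

def lr_multiplier(schedule, epoch, total_epochs, drop_every=30, drop_factor=0.1):
    """Factor by which the initial learning rate is scaled at a given epoch."""
    if schedule == "fixed":
        return 1.0
    if schedule == "step-drop":  # drop_every / drop_factor are illustrative values
        return drop_factor ** (epoch // drop_every)
    if schedule == "cosine":     # cosine annealing from 1 down to 0 over the whole run
        return 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))
    raise ValueError(f"unknown schedule: {schedule}")

# Multipliers at epoch 50 of a 100-epoch run
print({s: lr_multiplier(s, 50, 100) for s in ("fixed", "step-drop", "cosine")})
```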
4.2 DECOUPLING THE WEIGHT DECAY AND INITIAL LEARNING RATE PARAMETERS

In order to verify our hypothesis about the coupling of α and λ, in Figure 2 we compare the performance of L2 regularization vs. decoupled weight decay in SGD (SGD vs. SGDW, top row) and in Adam (Adam vs. AdamW, bottom row). In SGD (Figure 2, top left), L2 regularization is not decoupled from the learning rate (the common way as described in Algorithm 1), and the figure clearly shows that the basin of best hyperparameter settings (depicted by color, with the top-10 hyperparameter settings marked by black circles) is not aligned with the x-axis or y-axis but lies on the diagonal. This suggests that the two hyperparameters are interdependent and need to be changed simultaneously, while changing only one of them might substantially worsen results. Consider, e.g., the setting at the top left black circle (α = 1/2, λ = 1/8 · 0.001); only changing either α or λ by itself would worsen results, while changing both of them could still yield clear improvements. We note that this coupling of initial learning rate and L2 regularization factor might have contributed to SGD's reputation of being very sensitive to its hyperparameter settings.
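The coupling can be seen directly from a single update step. The following sketch (momentum and the schedule multiplier are omitted for brevity, so this is only an illustration, not the paper's Algorithm 1) contrasts SGD with L2 regularization, where the shrinkage applied to θ is α·λ' and therefore moves whenever the learning rate moves, with SGDW, where the decay λ acts on the weights independently of α.

```python
import numpy as np

def sgd_l2_step(theta, grad, lr, l2):
    # L2 regularization: the term l2 * theta is part of the gradient,
    # so the effective shrinkage of theta is lr * l2.
    return theta - lr * (grad + l2 * theta)

def sgdw_step(theta, grad, lr, weight_decay):
    # Decoupled weight decay: shrink the weights by weight_decay, independently of lr.
    return theta - lr * grad - weight_decay * theta

theta, grad = np.array([1.0, -2.0]), np.array([0.1, 0.3])
# For plain SGD the two coincide exactly when weight_decay = lr * l2 (Proposition 1):
print(sgd_l2_step(theta, grad, lr=0.5, l2=0.002))
print(sgdw_step(theta, grad, lr=0.5, weight_decay=0.5 * 0.002))
```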
In contrast, the results for SGD with decoupled weight decay (SGDW) in Figure 2 (top right) show that weight decay and initial learning rate are decoupled. The proposed approach renders the two hyperparameters more separable: even if the learning rate is not well tuned yet (e.g., consider the value of 1/1024 in Figure 2, top right), leaving it fixed and only optimizing the weight decay factor
would yield a good value (of 1/4 · 0.001). This is not the case for SGD with L2 regularization (see Figure 2, top left).

Figure 3: Learning curves (top row) and generalization results (bottom row) obtained by a 26 2x96d ResNet trained with Adam and AdamW on CIFAR-10. See text for details. SuppFigure 4 in the Appendix shows the same qualitative results for ImageNet32x32.
The results for Adam with L2 regularization are given in Figure 2 (bottom left). Adam's best hyperparameter settings performed clearly worse than SGD's best ones (compare Figure 2, top left). While both methods used L2 regularization, Adam did not benefit from it at all: its best results obtained for non-zero L2 regularization factors were comparable to the best ones obtained without L2 regularization, i.e., when λ = 0. Similarly to the original SGD, the shape of the hyperparameter landscape suggests that the two hyperparameters are coupled.
In contrast, the results for our new variant of Adam with decoupled weight decay (AdamW) in Figure 2 (bottom right) show that AdamW largely decouples weight decay and learning rate. The results for the best hyperparameter settings were substantially better than the best ones of Adam with L2 regularization and rivaled those of SGD and SGDW.
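A minimal sketch of the difference for Adam itself: with L2 regularization the decay term enters the gradient and is therefore divided by √v̂ like everything else, whereas AdamW applies it directly to the weights after the adaptive step. Implementations differ in whether the decoupled term is additionally scaled by the learning rate or only by a schedule multiplier; the sketch below uses the learning rate, as common library implementations do, and omits the schedule multiplier.

```python
import numpy as np

def adam_like_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
                   eps=1e-8, weight_decay=0.0, decoupled=False):
    """One step; decoupled=False mimics Adam with L2, decoupled=True mimics AdamW."""
    if not decoupled:
        grad = grad + weight_decay * theta          # L2: decay gets rescaled by 1/sqrt(v_hat) below
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        theta = theta - lr * weight_decay * theta   # AdamW: decay applied directly to the weights
    return theta, state

theta = np.array([1.0, -2.0])
state = {"t": 0, "m": np.zeros_like(theta), "v": np.zeros_like(theta)}
theta, state = adam_like_step(theta, np.array([0.1, 0.3]), state,
                              weight_decay=0.01, decoupled=True)
print(theta)
```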
In summary, the results in Figure 2 support our hypothesis that the weight decay and learning rate hyperparameters can be decoupled, and that this in turn simplifies the problem of hyperparameter tuning in SGD and improves Adam's performance to be competitive w.r.t. SGD with momentum.
4.3 BETTER GENERALIZATION OF ADAMW
While the previous experiment suggested that the basin of optimal hyperparameters of AdamW is broader and deeper than that of Adam, we next investigated the results of much longer runs of 1800 epochs to compare the generalization capabilities of AdamW and Adam.
We fixed the initial learning rate to 0.001, which represents both the default learning rate for Adam and the one which showed reasonably good results in our experiments. Figure 3 shows the results for 12 settings of the L2 regularization of Adam and 7 settings of the normalized weight decay of AdamW (the normalized weight decay represents a rescaling formally defined in Appendix B.1; it amounts to a multiplicative factor which depends on the number of batch passes). Interestingly, while the dynamics of the learning curves of Adam and AdamW often coincided for the first half of the training run, AdamW often led to lower training loss and test errors (see Figure 3, top left and top right, respectively). Importantly, the use of L2 weight decay in Adam did not yield as good
results as decoupled weight decay in AdamW (see also Figure 3, bottom left). Next, we investigated whether AdamW's better results were only due to better convergence or due to better generalization. The results in Figure 3 (bottom right) for the best settings of Adam and AdamW suggest that AdamW did not only yield better training loss but also better generalization performance for similar training loss values. The results on ImageNet32x32 (see SuppFigure 4 in the Appendix) yield the same conclusion of substantially improved generalization performance.

Figure 4: Top-1 test error on CIFAR-10 (left) and Top-5 test error on ImageNet32x32 (right) for Adam, AdamW, SGDW, AdamWR, and SGDWR. For a better resolution and with training loss curves, see SuppFigure 5 and SuppFigure 6 in the supplementary material.
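The normalized weight decay used for AdamW above is defined in Appendix B.1, which is not reproduced here. As a hedged illustration only, the sketch below assumes the rescaling λ = λ_norm·√(b/(B·T)) with batch size b, number of training points B, and total number of epochs T, so that the raw decay factor shrinks as the total number of batch passes grows; the numeric values in the example are placeholders.

```python
import math

def raw_weight_decay(lambda_norm, batch_size, num_train_points, total_epochs):
    # Assumed normalization: the raw factor scales with 1/sqrt(number of batch passes).
    return lambda_norm * math.sqrt(batch_size / (num_train_points * total_epochs))

# Illustrative CIFAR-10-sized run: 50k training images, batch size 128, 1800 epochs
print(raw_weight_decay(lambda_norm=0.025, batch_size=128,
                       num_train_points=50_000, total_epochs=1800))
```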
4.4 ADAMWR WITH WARM RESTARTS FOR BETTER ANYTIME PERFORMANCE
In order to improve the anytime performance of SGDW and AdamW, we extended them with the warm restarts we introduced in Loshchilov & Hutter (2016), to obtain SGDWR and AdamWR, respectively (see Section B.2 in the Appendix). As Figure 4 shows, AdamWR greatly sped up AdamW on CIFAR-10 and ImageNet32x32, up to a factor of 10 (see the results at the first restart). For the default learning rate of 0.001, AdamW achieved a 15% relative improvement in test error compared to Adam both on CIFAR-10 (also see SuppFigure 5) and ImageNet32x32 (also see SuppFigure 6).
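For reference, a minimal sketch of the cosine-annealing-with-warm-restarts multiplier (following Loshchilov & Hutter, 2016) that turns AdamW into AdamWR; in AdamWR the same multiplier also scales the decoupled weight decay. The initial period T_0 and the period multiplier T_mult below are illustrative values, not the paper's settings.

```python
import math

def warm_restart_multiplier(epoch, t0=100, t_mult=2.0):
    """Decay from 1 to 0 within each restart period, then restart; periods grow by t_mult."""
    period, start = float(t0), 0.0
    while epoch >= start + period:   # find the restart period that contains `epoch`
        start += period
        period *= t_mult
    t_cur = epoch - start
    return 0.5 * (1.0 + math.cos(math.pi * t_cur / period))

print([round(warm_restart_multiplier(e), 3) for e in (0, 50, 99, 100, 150, 299)])
```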
AdamWR achieved the same improved results but with a much better anytime performance. These improvements closed most of the gap between Adam and SGDWR on CIFAR-10 and yielded comparable performance on ImageNet32x32.
4.5 USE OF ADAMW ON OTHER DATASETS AND ARCHITECTURES
Several other research groups have already successfully applied AdamW in citable works. For example, Wang et al. (2018) used AdamW to train a novel architecture for face detection on the standard WIDER FACE dataset (Yang et al., 2016), obtaining almost 10x faster predictions than the previous state-of-the-art algorithms while achieving comparable performance. Völker et al. (2018) employed AdamW with cosine annealing to train convolutional neural networks to classify and characterize error-related brain signals measured from intracranial electroencephalography (EEG) recordings. While their paper does not provide a comparison to Adam, they kindly provided us with a direct comparison of the two on their best-performing problem-specific network architecture Deep4Net and a variant of ResNet. AdamW with the same hyperparameter setting as Adam yielded higher test set accuracy on Deep4Net (73.68% versus 71.37%) and statistically significantly higher test set accuracy on ResNet (72.04% versus 61.34%).
Radford et al. (2018) employed AdamW to train Transformer (Vaswani et al., 2017) architectures to obtain new state-of-the-art results on a wide range of benchmarks for natural language understanding. Zhang et al. (2018) compared L2 regularization vs. weight decay for SGD, Adam, and the Kronecker-Factored Approximate Curvature (K-FAC) optimizer (Martens & Grosse, 2015) on the CIFAR datasets with ResNet and VGG architectures, reporting that decoupled weight decay consistently outperformed L2 regularization in cases where they differ.
# 5 CONCLUSION AND FUTURE WORK
Following suggestions that adaptive gradient methods such as Adam might lead to worse generalization than SGD with momentum (Wilson et al., 2017), we identified and exposed the inequivalence of L2 regularization and weight decay for Adam. We empirically showed that our version of Adam with decoupled weight decay yields substantially better generalization performance than the common implementation of Adam with L2 regularization. We also proposed to use warm restarts for Adam to improve its anytime performance.
Our results obtained on image classification datasets must be verified on a wider range of tasks, especially ones where the use of regularization is expected to be important. It would be interesting to integrate our findings on weight decay into other methods which attempt to improve Adam, e.g., normalized direction-preserving Adam (Zhang et al., 2017). While we focused our experimental analysis on Adam, we believe that similar results also hold for other adaptive gradient methods, such as AdaGrad (Duchi et al., 2011) and AMSGrad (Reddi et al., 2018).
# 6 ACKNOWLEDGMENTS
We thank Patryk Chrabaszcz for help with running experiments with ImageNet32x32; Matthias Feurer and Robin Schirrmeister for providing valuable feedback on this paper in several iterations; and Martin Völker, Robin Schirrmeister, and Tonio Ball for providing us with a comparison of AdamW and Adam on their EEG data. We also thank the following members of the deep learning community for implementing decoupled weight decay in various deep learning libraries (a minimal PyTorch usage sketch follows the list):

• Jingwei Zhang, Lei Tai, Robin Schirrmeister, and Kashif Rasul for their implementations in PyTorch (see https://github.com/pytorch/pytorch/pull/4429)
• Phil Jund for his implementation in TensorFlow described at https://www.tensorflow.org/api_docs/python/tf/contrib/opt/DecoupledWeightDecayExtension
• Sylvain Gugger, Anand Saha, Jeremy Howard and other members of fast.ai for their implementation available at https://github.com/sgugger/Adam-experiments
• Guillaume Lambard for his implementation in Keras available at https://github.com/GLambard/AdamW_Keras
• Yagami Lin for his implementation in Caffe available at https://github.com/Yagami123/Caffe-AdamW-AdamWR
This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant no. 716721, by the German Research Foundation (DFG) under the BrainLinksBrainTools Cluster of Excellence (grant number EXC 1086) and through grant no. INST 37/935-1 FUGG, and by the German state of Baden-Württemberg through bwHPC.
# REFERENCES
Laurence Aitchison. A unified theory of adaptive stochastic gradient descent as Bayesian filtering. arXiv:1507.02030, 2018.
Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of ImageNet as an alternative to the CIFAR datasets. arXiv:1707.08819, 2017.
Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. AutoAugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.
Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. arXiv:1703.04933, 2017.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
Xavier Gastaldi. Shake-Shake regularization. arXiv preprint arXiv:1705.07485, 2017.
Stephen José Hanson and Lorien Y Pratt. Comparing biases for minimal network construction with back-propagation. In Proceedings of the 1st International Conference on Neural Information Processing Systems, pp. 177–185, 1988.
Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. Snapshot ensembles: Train 1, get m for free. arXiv:1704.00109, 2017.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv:1609.04836, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
Hao Li, Zheng Xu, Gavin Taylor, and Tom Goldstein. Visualizing the loss landscape of neural nets. arXiv preprint arXiv:1712.09913, 2017.
Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with warm restarts. arXiv:1608.03983, 2016.
James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pp. 2408–2417, 2015.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.
Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. International Conference on Learning Representations, 2018.
Leslie N Smith. Cyclical learning rates for training neural networks. arXiv:1506.01186v3, 2016.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2):26–31, 2012.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.
Martin Völker, Jiří Hammer, Robin T Schirrmeister, Joos Behncke, Lukas DJ Fiederer, Andreas Schulze-Bonhage, Petr Marusič, Wolfram Burgard, and Tonio Ball. Intracranial error detection via deep learning. arXiv preprint arXiv:1805.01667, 2018.
Jianfeng Wang, Ye Yuan, Gang Yu, and Sun Jian. SFace: An efficient network for face detection in large scale variations. arXiv preprint arXiv:1804.06559, 2018.
Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. arXiv:1705.08292, 2017.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pp. 2048–2057, 2015.
Shuo Yang, Ping Luo, Chen-Change Loy, and Xiaoou Tang. WIDER FACE: A face detection benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5525–5533, 2016.
Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse. Three mechanisms of weight decay regularization. arXiv preprint arXiv:1810.12281, 2018.
Zijun Zhang, Lin Ma, Zongpeng Li, and Chuan Wu. Normalized direction-preserving Adam. arXiv:1709.04546, 2017.
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. In arXiv:1707.07012 [cs.CV], 2017.
# Appendix
# A FORMAL ANALYSIS OF WEIGHT DECAY VS L2 REGULARIZATION
Proof of Proposition 1 The proof for this well-known fact is straight-forward. SGD without weight decay has the following iterates on ft^reg(θ) = ft(θ) + (λ'/2)‖θ‖²:
θt+1 ← θt − α∇ft^reg(θt) = θt − α∇ft(θt) − αλ'θt. (5)
SGD with weight decay has the following iterates on ft(θ):
θt+1 ← (1 − λ)θt − α∇ft(θt). (6)
These iterates are identical since λ' = λ/α. | 1711.05101#46 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 47 | These iterates are identical since λ' = λ/α.
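As a quick numerical illustration of Proposition 1 (our sketch, not code from the paper; variable names and constants are ours), one SGD step on the L2-regularized loss with λ' = λ/α coincides with one decoupled-weight-decay step:

```python
# Minimal NumPy check of Proposition 1 (illustrative sketch, not the paper's code).
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(size=4)      # current parameters θ_t
grad = rng.normal(size=4)       # stands in for ∇f_t(θ_t)
alpha, lam = 0.05, 0.02
lam_prime = lam / alpha         # λ' = λ/α

step_l2 = theta - alpha * (grad + lam_prime * theta)   # SGD on f_t + (λ'/2)·||θ||², Eq. (5)
step_decay = (1 - lam) * theta - alpha * grad          # SGD with decoupled weight decay, Eq. (6)
print(np.allclose(step_l2, step_decay))                # True
```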
Proof of Proposition 2 Similarly to the proof of Proposition 1, the iterates of O without weight decay on ft^reg(θ) = ft(θ) + (λ'/2)‖θ‖² and of O with weight decay λ on ft(θ) are, respectively:
θt+1 ← θt − αλ'Mtθt − αMt∇ft(θt), (7)
θt+1 ← (1 − λ)θt − αMt∇ft(θt). (8)
The equality of these iterates for all θt would imply λθt = αλ'Mtθt. This can only hold for all θt if Mt = kI, with k ∈ R, which is not the case for O. Therefore, no L2 regularizer (λ'/2)‖θ‖² exists that makes the iterates equivalent.
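The obstruction can also be seen numerically (our sketch with illustrative values, not the paper's code): for a diagonal preconditioner that is not a multiple of the identity, the L2 coefficient that would be required differs per coordinate:

```python
# Minimal NumPy illustration of Proposition 2 (illustrative sketch):
# matching (1-λ)θ_t - αM_t∇f_t(θ_t) with an L2 term requires λθ_t = αλ'M_tθ_t,
# i.e. a different λ' for every diagonal entry of M_t unless M_t = k·I.
import numpy as np

M = np.array([0.5, 1.0, 2.0])        # diagonal of M_t; not of the form k·I
alpha, lam = 0.1, 0.01
lam_prime_per_coord = lam / (alpha * M)
print(lam_prime_per_coord)           # [0.2, 0.1, 0.05] -> no single λ' works
```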
Proof of Proposition 3 O without weight decay has the following iterates on ft^sreg(θ) = ft(θ) + (λ'/2)‖θ ⊙ √s‖²:
θt+1 ← θt − α∇ft^sreg(θt)/s | 1711.05101#47 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 48 | θt+1 ← θt − α∇ft^sreg(θt)/s (9)
= θt − α∇ft(θt)/s − αλ'(θt ⊙ s)/s (10)
= θt − α∇ft(θt)/s − αλ'θt, (11)
where the division by s is element-wise. O with weight decay has the following iterates on ft(θ):
θt+1 ← (1 − λ)θt − α∇ft(θt)/s (12)
= θt − α∇ft(θt)/s − λθt. (13)
These iterates are identical since λ' = λ/α.
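A small numerical check of this equivalence (our sketch with illustrative values, not the paper's code):

```python
# Proposition 3, numerically: with a fixed diagonal preconditioner 1/s, one
# decoupled-weight-decay step equals one step on the scale-adjusted L2 objective
# f_t(θ) + (λ'/2)·||θ ⊙ √s||² with λ' = λ/α.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=5)
grad = rng.normal(size=5)            # stands in for ∇f_t(θ_t)
s = rng.uniform(0.5, 2.0, size=5)    # fixed preconditioner diag(s)^-1
alpha, lam = 0.1, 0.01
lam_prime = lam / alpha

step_decay = (1 - lam) * theta - alpha * grad / s   # Eq. (12)-(13)
grad_sreg = grad + lam_prime * s * theta            # gradient of f_t^sreg
step_sreg = theta - alpha * grad_sreg / s           # Eq. (9)-(11)
print(np.allclose(step_decay, step_sreg))           # True
```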
# B ADDITIONAL PRACTICAL IMPROVEMENTS OF ADAM
Having discussed decoupled weight decay for improving Adam's generalization, in this section we introduce two additional components to improve Adam's performance in practice.
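For concreteness, a single Adam-style parameter update with the weight decay decoupled from the adaptive gradient step can be sketched as follows (our illustrative code in the spirit of AdamW; the function name and default values are ours, not the paper's Algorithm 2):

```python
# One Adam-style update with decoupled weight decay (illustrative sketch).
import numpy as np

def adamw_step(theta, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, lam=1e-2, eta=1.0):
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    # the decay term λ·θ is applied directly to the weights, not added to the gradient:
    theta = theta - eta * (alpha * m_hat / (np.sqrt(v_hat) + eps) + lam * theta)
    return theta, m, v
```

Here eta plays the role of the schedule multiplier ηt discussed below.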
B.1 NORMALIZED WEIGHT DECAY | 1711.05101#48 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 49 | B.1 NORMALIZED WEIGHT DECAY
Our preliminary experiments showed that different weight decay factors are optimal for different computational budgets (defined in terms of the number of batch passes). Relatedly, Li et al. (2017) demonstrated that a smaller batch size (for the same total number of epochs) leads to the shrinking effect of weight decay being more pronounced. Here, we propose to reduce this dependence by normalizing the values of weight decay. Specifically, we replace the hyperparameter λ by a new (more robust) normalized weight decay hyperparameter λnorm, and use this to set λ as λ = λnorm·√(b/(BT)), where b is the batch size, B is the total number of training points and T is the total number of epochs.2 Thus, λnorm can be interpreted as the weight decay used if only one batch pass is allowed. We emphasize that our choice of normalization is merely one possibility informed by a few experiments; a more lasting conclusion we draw is that using some normalization can substantially improve results.
2 In the context of our AdamWR variant discussed in Section B.2, T is the total number of epochs in the current restart.
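A small helper implementing this normalization (our illustrative sketch; the example numbers are ours):

```python
import math

def raw_weight_decay(lambda_norm: float, b: int, B: int, T: int) -> float:
    """λ = λnorm·sqrt(b/(B·T)); b: batch size, B: number of training points,
    T: number of epochs (for AdamWR, epochs in the current restart)."""
    return lambda_norm * math.sqrt(b / (B * T))

# e.g. λnorm = 0.05, batch size 128, 50,000 CIFAR-10 training images, 100 epochs:
print(raw_weight_decay(0.05, 128, 50_000, 100))   # ≈ 2.5e-4
```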
B.2 ADAM WITH COSINE ANNEALING AND WARM RESTARTS | 1711.05101#49 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 50 | B.2 ADAM WITH COSINE ANNEALING AND WARM RESTARTS
We now apply cosine annealing and warm restarts to Adam, following our recent work (Loshchilov & Hutter, 2016). There, we proposed Stochastic Gradient Descent with Warm Restarts (SGDR) to improve the anytime performance of SGD by quickly cooling down the learning rate according to a cosine schedule and periodically increasing it. SGDR has been successfully adopted to lead to new state-of-the-art results for popular image classification benchmarks (Huang et al., 2017; Gastaldi, 2017; Zoph et al., 2017), and we therefore already tried extending it to Adam shortly after proposing it. However, while our initial version of Adam with warm restarts had better anytime performance than Adam, it was not competitive with SGD with warm restarts, precisely because L2 regularization was not working as well as in SGD. Now, having fixed this issue by means of the original weight decay regularization (Section 2) and also having introduced normalized weight decay (Section B.1), our original work on cosine annealing and warm restarts directly carries over to Adam. | 1711.05101#50 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 51 | In the interest of keeping the presentation self-contained, we briefly describe how SGDR schedules the change of the effective learning rate in order to accelerate the training of DNNs. Here, we decouple the initial learning rate α and its multiplier ηt used to obtain the actual learning rate at iteration t (see, e.g., line 8 in Algorithm 1). In SGDR, we simulate a new warm-started run/restart of SGD once Ti epochs are performed, where i is the index of the run. Importantly, the restarts are not performed from scratch but emulated by increasing ηt while the old value of θt is used as an initial solution. The amount by which ηt is increased controls to which extent the previously acquired information (e.g., momentum) is used. Within the i-th run, the value of ηt decays according to a cosine annealing (Loshchilov & Hutter, 2016) learning rate for each batch as follows:
ηt = η(i)min + 0.5(η(i)max − η(i)min)(1 + cos(πTcur/Ti)), (14) | 1711.05101#51 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 52 | where η(i)min and η(i)max are ranges for the multiplier and Tcur accounts for how many epochs have been performed since the last restart. Tcur is updated at each batch iteration t and is thus not constrained to integer values. Adjusting (e.g., decreasing) η(i)max at every i-th restart (see also Smith (2016)) could potentially improve performance, but we do not consider that option here because it would involve additional hyperparameters. For η(i)min = 0, one can simplify Eq. (14) to
ηt = 0.5 + 0.5 cos(ÏTcur/Ti). (15)
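An illustrative implementation of this multiplier (our sketch; Eq. (15) corresponds to the default η(i)min = 0 and η(i)max = 1):

```python
import math

def cosine_multiplier(t_cur: float, t_i: float,
                      eta_min: float = 0.0, eta_max: float = 1.0) -> float:
    """Eq. (14): the multiplier ηt within the i-th run; Eq. (15) is eta_min=0, eta_max=1."""
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))

print(cosine_multiplier(0, 100), cosine_multiplier(50, 100), cosine_multiplier(100, 100))
# 1.0  0.5  0.0   (up to floating-point rounding)
```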
In order to achieve good anytime performance, one can start with an initially small Ti (e.g., from 1% to 10% of the expected total budget) and multiply it by a factor of Tmult (e.g., Tmult = 2) at every restart. The (i + 1)-th restart is triggered when Tcur = Ti by setting Tcur to 0. An example setting of the schedule multiplier is given in C. | 1711.05101#52 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 53 | Our proposed AdamWR algorithm represents AdamW (see Algorithm 2) with ηt following Eq. (15) and λ computed at each iteration using normalized weight decay described in Section B.1. We note that normalized weight decay allowed us to use a constant parameter setting across short and long runs performed within AdamWR and SGDWR (SGDW with warm restarts).
# C AN EXAMPLE SETTING OF THE SCHEDULE MULTIPLIER
An example schedule of the schedule multiplier ηt is given in SuppFigure 1 for Ti=0 = 100 and Tmult = 2. After the initial 100 epochs the learning rate will reach 0 because ηt=100 = 0. Then, since Tcur = Ti=0, we restart by resetting Tcur = 0, causing the multiplier ηt to be reset to 1 due to Eq. (15). This multiplier will then decrease again from 1 to 0, but now over the course of 200 epochs because Ti=1 = Ti=0Tmult = 200. Solutions obtained right before the restarts, when ηt = 0 (e.g., at epoch indexes 100, 300, 700 and 1500 as shown in SuppFigure 1) are recommended by the optimizer as the solutions, with more recent solutions prioritized.
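The restart bookkeeping described above can be sketched as follows (our illustrative code for Ti=0 = 100 and Tmult = 2):

```python
# Epochs at which successive runs end (and snapshots are recommended):
t_i, t_mult, epoch, restart_epochs = 100, 2, 0, []
for _ in range(4):
    epoch += t_i                   # run the current budget until ηt reaches 0
    restart_epochs.append(epoch)   # then Tcur is reset to 0 and ηt jumps back to 1
    t_i *= t_mult                  # the next run gets a doubled budget
print(restart_epochs)              # [100, 300, 700, 1500]
```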
# D ADDITIONAL RESULTS | 1711.05101#53 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 54 | # D ADDITIONAL RESULTS
We investigated whether the use of much longer runs (1800 epochs) of "standard Adam" (Adam with L2 regularization and a fixed learning rate) makes the use of cosine annealing unnecessary.
[Plot: learning rate multiplier ηt versus epochs; see the SuppFigure 1 caption below.]
SuppFigure 1: An example schedule of the learning rate multiplier as a function of epoch index. The first run is scheduled to converge at epoch Ti=0 = 100, then the budget for the next run is doubled as Ti=1 = Ti=0Tmult = 200, etc.
SuppFigure 2 shows the results of standard Adam for a 4 by 4 logarithmic grid of hyperparameter settings (the coarseness of the grid is due to the high computational expense of runs for 1800 epochs). Even after taking the low resolution of the grid into account, the results appear to be at best comparable to the ones obtained with AdamW with 18 times fewer epochs and a smaller network (see SuppFigure 3, top row, middle). These results are not very surprising given Figure 1 in the main paper (which demonstrates both the improvements possible by using some learning rate schedule, such as cosine annealing, and the effectiveness of decoupled weight decay). | 1711.05101#54 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 55 | Our experimental results with Adam and SGD suggest that the total runtime in terms of the number of epochs affects the basin of optimal hyperparameters (see SuppFigure 3). More specifically, the greater the total number of epochs, the smaller the values of the weight decay should be. SuppFigure 4 shows that our remedy for this problem, the normalized weight decay defined in Eq. (15), simplifies hyperparameter selection because the optimal values observed for short runs are similar to the ones for much longer runs. We used our initial experiments on CIFAR-10 to suggest the square root normalization we proposed in Eq. (15) and double-checked that this is not a coincidence on the ImageNet32x32 dataset (Chrabaszcz et al., 2017), a downsampled version of the original ImageNet dataset with 1.2 million 32×32 pixel images, where an epoch is 24 times longer than on CIFAR-10. This experiment also supported the square root scaling: the best values of the normalized weight decay observed on CIFAR-10 represented nearly optimal values for ImageNet32x32 (see SuppFigure 3). In contrast, had we used the same raw | 1711.05101#55 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
1711.05101 | 56 | de- cay observed on CIFAR-10 represented nearly optimal values for ImageNet32x32 (see SuppFigure 3). In contrast, had we used the same raw weight decay values λ for ImageNet32x32 as for CIFAR- 10 and for the same number of epochs, without the proposed normalization, λ would have been roughly 5 times too large for ImageNet32x32, leading to much worse performance. The optimal normalized weight decay values were also very similar (e.g., λnorm = 0.025 and λnorm = 0.05) across SGDW and AdamW. These results clearly show that normalizing weight decay can substan- tially improve performance; while square root scaling performed very well in our experiments we emphasize that these experiments were not very comprehensive and that even better scaling rules are likely to exist. | 1711.05101#56 | Decoupled Weight Decay Regularization | L$_2$ regularization and weight decay regularization are equivalent for
standard stochastic gradient descent (when rescaled by the learning rate), but
as we demonstrate this is \emph{not} the case for adaptive gradient algorithms,
such as Adam. While common implementations of these algorithms employ L$_2$
regularization (often calling it "weight decay" in what may be misleading due
to the inequivalence we expose), we propose a simple modification to recover
the original formulation of weight decay regularization by \emph{decoupling}
the weight decay from the optimization steps taken w.r.t. the loss function. We
provide empirical evidence that our proposed modification (i) decouples the
optimal choice of weight decay factor from the setting of the learning rate for
both standard SGD and Adam and (ii) substantially improves Adam's
generalization performance, allowing it to compete with SGD with momentum on
image classification datasets (on which it was previously typically
outperformed by the latter). Our proposed decoupled weight decay has already
been adopted by many researchers, and the community has implemented it in
TensorFlow and PyTorch; the complete source code for our experiments is
available at https://github.com/loshchil/AdamW-and-SGDW | http://arxiv.org/pdf/1711.05101 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | Published as a conference paper at ICLR 2019 | null | cs.LG | 20171114 | 20190104 | [
{
"id": "1810.12281"
},
{
"id": "1705.07485"
},
{
"id": "1507.02030"
},
{
"id": "1609.04836"
},
{
"id": "1707.08819"
},
{
"id": "1805.09501"
},
{
"id": "1704.00109"
},
{
"id": "1712.09913"
},
{
"id": "1705.08292"
},
{
"id": "1707.07012"
},
{
"id": "1703.04933"
},
{
"id": "1608.03983"
},
{
"id": "1804.06559"
},
{
"id": "1506.01186"
},
{
"id": "1805.01667"
},
{
"id": "1709.04546"
},
{
"id": "1511.06434"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.