# Enhanced LSTM for Natural Language Inference

Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen. ACL 2017. http://arxiv.org/pdf/1609.06038

Abstract: Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is very challenging. With the availability of large annotated data (Bowman et al., 2015), it has recently become feasible to train neural network based inference models, which have shown to be very effective. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.6% on the Stanford Natural Language Inference Dataset. Unlike the previous top models that use very complicated network architectures, we first demonstrate that carefully designing sequential inference models based on chain LSTMs can outperform all previous models. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. Particularly, incorporating syntactic parsing information contributes to our best result: it further improves the performance even when added to the already very strong model.
We ensemble our ESIM model with syntactic tree-LSTMs (Zhu et al., 2015) based on syntactic parse trees and achieve significant improvement over our best sequential encoding model ESIM, attaining an accuracy of 88.6%. This shows that syntactic tree-LSTMs complement ESIM well.
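The excerpt does not restate how the two models' outputs are combined; a common choice for this kind of ensemble, shown below as a hypothetical sketch, is simply to average the two systems' predicted class probabilities and take the argmax.

```python
import numpy as np

def ensemble_predict(prob_esim, prob_tree):
    """Hypothetical ensembling sketch: average the two models'
    (n_examples, n_classes) probability matrices and pick the best class.
    The exact combination rule used for HIM is not specified in this excerpt."""
    return np.argmax((prob_esim + prob_tree) / 2.0, axis=-1)
```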
| Model | Train acc. (%) | Test acc. (%) |
|---|---|---|
| (17) HIM (ESIM + syn.tree) | 93.5 | 88.6 |
| (18) ESIM + tree | 91.9 | 88.2 |
| (16) ESIM | 92.6 | 88.0 |
| (19) ESIM - ave./max | 92.9 | 87.1 |
| (20) ESIM - diff./prod. | 91.5 | 87.0 |
| (21) ESIM - inference BiLSTM | 91.3 | 87.3 |
| (22) ESIM - encoding BiLSTM | 88.7 | 86.3 |
| (23) ESIM - P-based attention | 91.6 | 87.2 |
| (24) ESIM - H-based attention | 91.4 | 86.5 |
| (25) syn.tree | 92.9 | 87.8 |
Table 2: Ablation performance of the models.
The table shows that our ESIM model achieves an accuracy of 88.0%, which has already outperformed all the previous models, including those using much more complicated network architectures (Munkhdalai and Yu, 2016b).
Ablation analysis We further analyze the major components that are of importance to help us achieve good performance. From the best model, we first replace the syntactic tree-LSTM with the full tree-LSTM without encoding syntactic parse information. More specifically, two adjacent words in a sentence are merged to form a parent node, and
[Figure 3(a): binarized constituency tree of the premise "A man wearing a white shirt and a blue jeans reading a newspaper while standing ...".]
[Figure 3(b): binarized constituency tree of the hypothesis "A man is sitting down reading a newspaper.".]
[Figure 3(c): normalized attention weights of the tree-LSTM.]
[Figure 3(d): input gate of the tree-LSTM in inference composition (l2-norm).]
[Figure 3(e): input gate of the BiLSTM in inference composition (l2-norm).]
[Figure 3(f): normalized attention weights of the BiLSTM.]
Figure 3: An example for analysis. Subfigures (a) and (b) are the constituency parse trees of the premise and hypothesis, respectively. "-" means a non-leaf or a null node. Subfigures (c) and (f) are attention visualization of the tree model and ESIM, respectively. The darker the color, the greater the value. The premise is on the x-axis and the hypothesis is on the y-axis. Subfigures (d) and (e) are the input gates' l2-norm of tree-LSTM and BiLSTM in inference composition, respectively.
this process continues and results in a full binary tree, where padding nodes are inserted when there are not enough leaves to form a full tree. Each tree node is implemented with a tree-LSTM block (Zhu et al., 2015), the same as in model (17). Table 2 shows that with this replacement, the performance drops to 88.2%.
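To make the construction concrete, the sketch below builds such an unlabeled full binary tree by repeatedly merging adjacent nodes bottom-up. The padding scheme is an assumption, since the excerpt does not spell out exactly where padding nodes are inserted, and each merge would in practice be computed by a tree-LSTM block rather than a Python tuple.

```python
def build_full_binary_tree(tokens, pad="<pad>"):
    """Merge adjacent nodes pairwise, level by level, padding odd-sized levels.

    This mirrors the ablated tree variant described above, but the exact padding
    placement is a guess; only the pairwise-merge idea comes from the text.
    """
    level = list(tokens)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(pad)  # pad so the level can be halved
        level = [(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Example: build_full_binary_tree(["a", "man", "is", "reading"])
# -> (("a", "man"), ("is", "reading"))
```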
Furthermore, we note the importance of the layer performing the enhancement for local inference information in Section 3.2 and the pooling layer in inference composition in Section 3.3. Table 2 suggests that the NLI task seems very sensitive to the
layers. If we remove the pooling layer in inference composition and replace it with summation as in Parikh et al. (2016), the accuracy drops to 87.1%. If we remove the difference and element-wise product from the local inference enhancement layer, the accuracy drops to 87.0%. To provide some detailed comparison with Parikh et al. (2016), replacing the bidirectional LSTMs in inference composition and in input encoding with feedforward neural networks reduces the accuracy to 87.3% and 86.3%, respectively.
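For reference, the two ablated components can be written in a few lines. The numpy sketch below is schematic (real implementations operate on batched, padded tensors), but the enhancement and pooling operations themselves follow the description above.

```python
import numpy as np

def enhance(a, a_aligned):
    """Local inference enhancement: concatenate the encoded vectors with their
    difference and element-wise product (the 'diff./prod.' features ablated above).
    a, a_aligned: (seq_len, d) arrays."""
    return np.concatenate([a, a_aligned, a - a_aligned, a * a_aligned], axis=-1)

def pool(v):
    """Average and max pooling over the sequence axis, concatenated into a
    fixed-length vector; the 'ave./max' ablation replaces this with summation."""
    return np.concatenate([v.mean(axis=0), v.max(axis=0)], axis=-1)
```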
The difference between ESIM and each of the other models listed in Table 2 is statistically significant under the one-tailed paired t-test at the 99% significance level. The difference between models (17) and (18) is also significant at the same level. Note that we cannot perform a significance test between our models and the other models listed in Table 1, since we do not have the output of the other models.
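As a hedged illustration of such a test (the paper's exact procedure is not detailed here), one can run a paired t-test over per-example 0/1 correctness indicators of the two systems and halve the two-sided p-value for a one-tailed decision. The arrays below are random placeholders, not the authors' outputs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder 0/1 correctness vectors over the test set; in practice these
# would come from the two models' saved predictions on the same examples.
esim_correct = rng.binomial(1, 0.880, size=9824)
ablated_correct = rng.binomial(1, 0.871, size=9824)

t_stat, p_two_sided = stats.ttest_rel(esim_correct, ablated_correct)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(f"t = {t_stat:.3f}, one-tailed p = {p_one_sided:.4f}")
```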
If we remove the premise-based attention from ESIM (model 23), the accuracy drops to 87.2% on the test set. Premise-based attention means that when the system reads a word in the premise, it uses soft attention to consider all relevant words in the hypothesis. Removing the hypothesis-based attention (model 24), i.e., the attention performed in the other direction over the sentence pair, decreases the accuracy to 86.5%. The results show that removing hypothesis-based attention affects the performance of our model more, but removing the attention in the other direction impairs the performance too.
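The two attention directions can be pictured with a small numpy sketch. Note that the scoring function below is a plain dot product between encoded vectors; this is a common choice for this kind of soft alignment, but the excerpt does not restate the exact parameterization used in the paper.

```python
import numpy as np

def soft_align(premise, hypothesis):
    """Bidirectional soft attention over a sentence pair.

    premise:    (len_p, d) encoded premise vectors
    hypothesis: (len_h, d) encoded hypothesis vectors
    Returns the hypothesis summary for each premise word (premise-based
    attention) and the premise summary for each hypothesis word.
    """
    scores = premise @ hypothesis.T                            # (len_p, len_h)
    p2h = np.exp(scores - scores.max(axis=1, keepdims=True))
    p2h /= p2h.sum(axis=1, keepdims=True)                      # softmax over hypothesis words
    h2p = np.exp(scores.T - scores.T.max(axis=1, keepdims=True))
    h2p /= h2p.sum(axis=1, keepdims=True)                      # softmax over premise words
    return p2h @ hypothesis, h2p @ premise
```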
The stand-alone syntactic tree-LSTM model achieves an accuracy of 87.8%, which is comparable to that of ESIM. We also computed the oracle score of merging the syntactic tree-LSTM and ESIM, which picks the right answer if either model is right. Such an oracle/upper-bound accuracy on the test set is 91.7%, which suggests how much the tree-LSTM and ESIM could ideally complement each other. As far as speed is concerned, training the tree-LSTM takes about 40 hours on an Nvidia Tesla K40M, while ESIM takes about 6 hours, so it is more easily extended to a larger scale of data.
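The oracle score mentioned here is straightforward to compute from per-example correctness flags; the short sketch below uses hypothetical toy vectors to show the rule.

```python
import numpy as np

# Hypothetical 0/1 correctness flags for the two models on the same test examples.
tree_correct = np.array([1, 0, 1, 1, 0, 1])
esim_correct = np.array([1, 1, 0, 1, 0, 1])

# An example counts as correct for the oracle if either model gets it right.
oracle_accuracy = np.mean(np.maximum(tree_correct, esim_correct))
print(oracle_accuracy)  # 0.833... for these toy flags
```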
Further analysis We showed that encoding syntactic parsing information helps recognize natural language inference: it additionally improves the strong system. Figure 3 shows an example where the tree-LSTM makes a different, and correct, decision. In subfigure (d), the larger values at the input gates on nodes 9 and 10 indicate that those nodes are important in making the final decision. We observe that in subfigure (c), nodes 9 and 10 are aligned to node 29 in the premise. Such information helps the system decide that this pair is a contradiction. Accordingly, in subfigure (e) of the sequential BiLSTM, the words sitting and down do not play an important role in making the final decision. Subfigure (f) shows that sitting is equally aligned with reading and standing, and the alignment for the word down is not that useful.
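Heatmaps like subfigures (c) and (f) are easy to reproduce once a normalized attention matrix is available; the matplotlib sketch below uses a random placeholder matrix and illustrative word lists rather than the paper's actual values.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
premise = ["a", "man", "wearing", "a", "white", "shirt", "reading", "a", "newspaper", "standing"]
hypothesis = ["a", "man", "is", "sitting", "down", "reading", "a", "newspaper"]

# Placeholder attention matrix: one row per hypothesis word, rows sum to 1.
attn = rng.dirichlet(np.ones(len(premise)), size=len(hypothesis))

plt.imshow(attn, cmap="Greys")  # darker cells = larger weights, as in Figure 3
plt.xticks(range(len(premise)), premise, rotation=45)
plt.yticks(range(len(hypothesis)), hypothesis)
plt.xlabel("premise")
plt.ylabel("hypothesis")
plt.colorbar()
plt.show()
```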
# 6 Conclusions and Future Work
We propose neural network models for natural language inference, which achieve the best results reported on the SNLI benchmark. The results are first achieved through our enhanced sequential inference model, which outperformed the previous models, including those employing more complicated network architectures, suggesting that the potential of sequential inference models has not been fully exploited yet. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. Particularly, incorporating syntactic parsing information contributes to our best result: it further improves the performance even when added to the already very strong model.
Future work of interest to us includes exploring the usefulness of external resources such as WordNet and contrasting-meaning embeddings (Chen et al., 2015) to help increase the coverage of word-level inference relations. Modeling negation more closely within neural network frameworks (Socher et al., 2013; Zhu et al., 2014) may also help contradiction detection.
# Acknowledgments
The first and the third author of this paper were supported in part by the Science and Technology Development of Anhui Province, China (Grant No. 2014z02006), the Fundamental Research Funds for the Central Universities (Grant No. WK2350000001), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB02070006).
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473.
Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 632–642. https://doi.org/10.18653/v1/D15-1075.
Samuel Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1466–1477. https://doi.org/10.18653/v1/P16-1139.
William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016. IEEE, pages 4960–4964. https://doi.org/10.1109/ICASSP.2016.7472621.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In Subbarao Kambhampati, editor, Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016. IJCAI/AAAI Press, pages 2754–2760. http://www.ijcai.org/Abstract/16/391.
Zhigang Chen, Wei Lin, Qian Chen, Xiaoping Chen, Si Wei, Hui Jiang, and Xiaodan Zhu. 2015. Revisiting word embedding for contrasting meaning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 106–115. https://doi.org/10.3115/v1/P15-1011.
Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 551–561. http://aclweb.org/anthology/D16-1053.
Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Dekai Wu, Marine Carpuat, Xavier Carreras, and Eva Maria Vecchi, editors, Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014. Association for Computational Linguistics, pages 103–111. http://aclweb.org/anthology/W/W14/W14-4012.pdf.
Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 577–585. http://papers.nips.cc/paper/5847-attention-based-models-for-speech-recognition.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers. pages 177–190.
Lorenzo Ferrone and Fabio Massimo Zanzotto. 2014. Towards syntax-aware compositional distributional semantic models. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, pages 721–730. http://aclweb.org/anthology/C14-1068.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735.
Adrian Iftene and Alexandra Balahur-Dobrescu. 2007. Hypothesis transformation and semantic variability rules used in recognizing textual entailment. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Association for Computational Linguistics, pages 125–130. http://aclweb.org/anthology/W07-1421.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980.
Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. http://aclweb.org/anthology/P03-1054.
Phong Le and Willem Zuidema. 2015. Compositional distributional semantics with long short term memory. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics, pages 10–19. https://doi.org/10.18653/v1/S15-1002.
Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR abs/1605.09090. http://arxiv.org/abs/1605.09090.
Bill MacCartney. 2009. Natural Language Inference. Ph.D. thesis, Stanford University.
Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, COLING '08, pages 521–528. http://dl.acm.org/citation.cfm?id=1599081.1599147.
Yashar Mehdad, Alessandro Moschitti, and Fabio Massimo Zanzotto. 2010. Syntactic/semantic structures for textual entailment recognition. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1020–1028. http://aclweb.org/anthology/N10-1146.
Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 130–136. https://doi.org/10.18653/v1/P16-2022.
Tsendsuren Munkhdalai and Hong Yu. 2016a. Neural semantic encoders. CoRR abs/1607.04315. http://arxiv.org/abs/1607.04315.
Tsendsuren Munkhdalai and Hong Yu. 2016b. Neural tree indexers for text understanding. CoRR abs/1607.04492. http://arxiv.org/abs/1607.04492.
Biswajit Paria, K. M. Annervaz, Ambedkar Dukkipati, Ankush Chatterjee, and Sanjay Podder. 2016. A neural architecture mimicking humans end-to-end for natural language inference. CoRR abs/1611.04741. http://arxiv.org/abs/1611.04741.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2249–2255. http://aclweb.org/anthology/D16-1244.
Barbara Partee. 1995. Lexical semantics and compositionality. Invitation to Cognitive Science 1:311–360.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1532–1543. https://doi.org/10.3115/v1/D14-1162.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. CoRR abs/1509.06664. http://arxiv.org/abs/1509.06664.
Alexander Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 379–389. https://doi.org/10.18653/v1/D15-1044.
Lei Sha, Baobao Chang, Zhifang Sui, and Sujian Li. 2016. Reading and thinking: Re-read LSTM unit for textual entailment recognition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 2870–2879. http://aclweb.org/anthology/C16-1270.
Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Lise Getoor and Tobias Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011. Omnipress, pages 129–136.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1631–1642. http://aclweb.org/anthology/D13-1170.
Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1556–1566. https://doi.org/10.3115/v1/P15-1150.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. CoRR abs/1511.06361. http://arxiv.org/abs/1511.06361.
Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with LSTM. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1442–1451. https://doi.org/10.18653/v1/N16-1170.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 2015. pages 2048–2057. http://jmlr.org/proceedings/papers/v37/xuc15.html.
Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, Si Wei, and Hui Jiang. 2017. Exploring question understanding and adaptation in neural-network-based question answering. CoRR abs/arXiv:1703.04617v2.
Xiaodan Zhu, Hongyu Guo, Saif Mohammad, and Svetlana Kiritchenko. 2014. An empirical study on the effect of negation words on sentiment. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 304–313. https://doi.org/10.3115/v1/P14-1029.
# An overview of gradient descent optimization algorithms*

Sebastian Ruder
Insight Centre for Data Analytics, NUI Galway
Aylien Ltd., Dublin
[email protected]
# Abstract
Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent.
# Introduction
Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne's [2], caffe's [3], and keras' [4] documentation). These algorithms, however, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by.
This article aims at providing the reader with intuitions with regard to the behaviour of different algorithms for optimizing gradient descent that will help her put them to use. In Section 2, we are first going to look at the different variants of gradient descent. We will then briefly summarize challenges during training in Section 3. Subsequently, in Section 4, we will introduce the most common optimization algorithms by showing their motivation to resolve these challenges and how this leads to the derivation of their update rules. Afterwards, in Section 5, we will take a short look at algorithms and architectures to optimize gradient descent in a parallel and distributed setting. Finally, we will consider additional strategies that are helpful for optimizing gradient descent in Section 6.

Gradient descent is a way to minimize an objective function J(θ) parameterized by a model's parameters θ ∈ R^d by updating the parameters in the opposite direction of the gradient of the objective function ∇_θ J(θ) with respect to the parameters. The learning rate η determines the size of the steps we take to reach a (local) minimum. In other words, we follow the direction of the slope of the surface created by the objective function downhill until we reach a valley [5].
1609.04747 | 2 | *This paper originally appeared as a blog post at http://sebastianruder.com/optimizing-gradient-descent/index.html on 19 January 2016.
2 http://lasagne.readthedocs.org/en/latest/modules/updates.html
3 http://caffe.berkeleyvision.org/tutorial/solver.html
4 http://keras.io/optimizers/
5 If you are unfamiliar with gradient descent, you can find a good introduction on optimizing neural networks at http://cs231n.github.io/optimization-1/.
# 2 Gradient descent variants
There are three variants of gradient descent, which differ in how much data we use to compute the gradient of the objective function. Depending on the amount of data, we make a trade-off between the accuracy of the parameter update and the time it takes to perform an update.
# 2.1 Batch gradient descent
Vanilla gradient descent, aka batch gradient descent, computes the gradient of the cost function w.r.t. the parameters θ for the entire training dataset:
θ = θ − η · ∇θJ(θ)   (1) | 1609.04747#2 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 2 | # 1 INTRODUCTION
Deep Learning has emerged as one of the cornerstones of large-scale machine learning. Deep Learning models are used for achieving state-of-the-art results on a wide variety of tasks including computer vision, natural language processing and reinforcement learning; see (Bengio et al., 2016) and the references therein. The problem of training these networks is one of non-convex optimization. Mathematically, this can be represented as:
min_x f(x) := (1/M) Σ_{i=1..M} fi(x)   (1)
where fi is a loss function for data point i ∈ {1, 2, · · · , M} which captures the deviation of the model prediction from the data, and x is the vector of weights being optimized. The process of optimizing this function is also called training of the network. Stochastic Gradient Descent (SGD) (Bottou, 1998; Sutskever et al., 2013) and its variants are often used for training deep networks.
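For concreteness, a minimal NumPy sketch of this objective (illustrative only; the per-example loss callables are hypothetical placeholders, not something defined in the paper):

import numpy as np

def empirical_risk(x, per_example_losses):
    # f(x) = (1/M) * sum_i f_i(x): the mean loss over all M training points
    return np.mean([f_i(x) for f_i in per_example_losses])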
âWork was performed when author was an intern at Intel Corporation
These methods minimize the objective function f by iteratively taking steps of the form:
xk+1 = xk − αk (1/|Bk|) Σ_{i∈Bk} ∇fi(xk)   (2) | 1609.04836#2 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 3 | θ = θ − η · ∇θJ(θ)   (1)
As we need to calculate the gradients for the whole dataset to perform just one update, batch gradient descent can be very slow and is intractable for datasets that do not fit in memory. Batch gradient descent also does not allow us to update our model online, i.e. with new examples on-the-fly.
In code, batch gradient descent looks something like this:
for i in range(nb_epochs):
    # Gradient of the loss over the full training set w.r.t. the current parameters
    params_grad = evaluate_gradient(loss_function, data, params)
    # Step in the negative gradient direction, scaled by the learning rate
    params = params - learning_rate * params_grad
For a pre-defined number of epochs, we first compute the gradient vector params_grad of the loss function for the whole dataset w.r.t. our parameter vector params. Note that state-of-the-art deep learning libraries provide automatic differentiation that efficiently computes the gradient w.r.t. some parameters. If you derive the gradients yourself, then gradient checking is a good idea.6
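As a quick illustration, a centered-difference gradient check might look like the following sketch; loss_function(data, params) is an assumed signature that mirrors the fragment above rather than any particular library:

import numpy as np

def gradient_check(loss_function, data, params, analytic_grad, eps=1e-5):
    # Compare the analytic gradient against a centered finite-difference estimate
    numeric_grad = np.zeros_like(params)
    for i in range(params.size):
        shift = np.zeros_like(params)
        shift.flat[i] = eps
        numeric_grad.flat[i] = (loss_function(data, params + shift)
                                - loss_function(data, params - shift)) / (2 * eps)
    denom = np.linalg.norm(numeric_grad) + np.linalg.norm(analytic_grad) + 1e-12
    return np.linalg.norm(numeric_grad - analytic_grad) / denom  # small (e.g. < 1e-6) if correct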
We then update our parameters in the direction of the gradients with the learning rate determining how big of an update we perform. Batch gradient descent is guaranteed to converge to the global minimum for convex error surfaces and to a local minimum for non-convex surfaces. | 1609.04747#3 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 3 | These methods minimize the objective function f by iteratively taking steps of the form:
xk+1 = xk − αk (1/|Bk|) Σ_{i∈Bk} ∇fi(xk)   (2)
where Bk ⊂ {1, 2, · · · , M} is the batch sampled from the data set and αk is the step size at iteration k. These methods can be interpreted as gradient descent using noisy gradients, which are often referred to as mini-batch gradients with batch size |Bk|. SGD and its variants are employed in a small-batch regime, where |Bk| ≪ M and typically |Bk| ∈ {32, 64, · · · , 512}. These configurations have been successfully used in practice for a large number of applications; see e.g. (Sutskever et al., 2013). Many theoretical properties of these methods are known. These include guarantees of: (a) convergence to minimizers of strongly-convex functions and to stationary points for non-convex functions (Bottou et al., 2016), (b) saddle-point avoidance (Ge et al., 2015; Lee et al., 2016), and (c) robustness to input data (Hardt et al., 2015). | 1609.04836#3 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 4 | # 2.2 Stochastic gradient descent
Stochastic gradient descent (SGD), in contrast, performs a parameter update for each training example x(i) and label y(i):
θ = θ − η · ∇θJ(θ; x(i); y(i))   (2)
Batch gradient descent performs redundant computations for large datasets, as it recomputes gradients for similar examples before each parameter update. SGD does away with this redundancy by performing one update at a time. It is therefore usually much faster and can also be used to learn online. SGD performs frequent updates with a high variance that causes the objective function to fluctuate heavily as in Figure 1. | 1609.04747#4 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 4 | Stochastic gradient methods have, however, a major drawback: owing to the sequential nature of the iteration and small batch sizes, there is limited avenue for parallelization. While some efforts have been made to parallelize SGD for Deep Learning (Dean et al., 2012; Das et al., 2016; Zhang et al., 2015), the speed-ups and scalability obtained are often limited by the small batch sizes. One natural avenue for improving parallelism is to increase the batch size |Bk|. This increases the amount of computation per iteration, which can be effectively distributed. However, practitioners have observed that this leads to a loss in generalization performance; see e.g. (LeCun et al., 2012). In other words, the performance of the model on testing data sets is often worse when trained with large-batch methods as compared to small-batch methods. In our experiments, we have found the drop in generalization (also called generalization gap) to be as high as 5% even for smaller networks. | 1609.04836#4 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 5 | While batch gradient descent converges to the minimum of the basin the parameters are placed in, SGD's fluctuation, on the one hand, enables it to jump to new and potentially better local minima. On the other hand, this ultimately complicates convergence to the exact minimum, as SGD will keep overshooting. However, it has been shown that when we slowly decrease the learning rate, SGD shows the same convergence behaviour as batch gradient descent, almost certainly converging to a local or the global minimum for non-convex and convex optimization respectively. Its code fragment simply adds a loop over the training examples and evaluates the gradient w.r.t. each example. Note that we shuffle the training data at every epoch as explained in Section 6.1.
for i in range(nb_epochs):
    # Shuffle so that examples are visited in a different order each epoch
    np.random.shuffle(data)
    for example in data:
        # Gradient of the loss for a single training example
        params_grad = evaluate_gradient(loss_function, example, params)
        params = params - learning_rate * params_grad
6 Refer to http://cs231n.github.io/neural-networks-3/ for some great tips on how to check gradients properly.
Figure 1: SGD fluctuation (Source: Wikipedia)
# 2.3 Mini-batch gradient descent | 1609.04747#5 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 5 | In this paper, we present numerical results that shed light on this drawback of large-batch methods. We observe that the generalization gap is correlated with a marked sharpness of the minimizers obtained by large-batch methods. This motivates efforts at remedying the generalization problem, as a training algorithm that employs large batches without sacrificing generalization performance would have the ability to scale to a much larger number of nodes than is possible today. This could potentially reduce the training time by orders of magnitude; we present an idealized performance model in Appendix C to support this claim.
The paper is organized as follows. In the remainder of this section, we define the notation used in this paper, and in Section 2 we present our main findings and their supporting numerical evidence. In Section 3 we explore the performance of small-batch methods, and in Section 4 we briefly discuss the relationship between our results and recent theoretical work. We conclude with open questions concerning the generalization gap, sharp minima, and possible modifications to make large-batch training viable. In Appendix E, we present some attempts to overcome the problems of large-batch training.
1.1 NOTATION | 1609.04836#5 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 6 | Figure 1: SGD fluctuation (Source: Wikipedia)
# 2.3 Mini-batch gradient descent
Mini-batch gradient descent finally takes the best of both worlds and performs an update for every mini-batch of n training examples:
θ = θ − η · ∇θJ(θ; x(i:i+n); y(i:i+n))   (3)
This way, it a) reduces the variance of the parameter updates, which can lead to more stable convergence; and b) can make use of highly optimized matrix optimizations common to state-of-the-art deep learning libraries that make computing the gradient w.r.t. a mini-batch very efficient. Common mini-batch sizes range between 50 and 256, but can vary for different applications. Mini-batch gradient descent is typically the algorithm of choice when training a neural network and the term SGD usually is employed also when mini-batches are used. Note: In modifications of SGD in the rest of this post, we leave out the parameters x(i:i+n); y(i:i+n) for simplicity.
In code, instead of iterating over examples, we now iterate over mini-batches of size 50: | 1609.04747#6 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 6 | 1.1 NOTATION
We use the notation fi to denote the composition of loss function and a prediction function corresponding to the ith data point. The vector of weights is denoted by x and is subscripted by k to denote an iteration. We use the term small-batch (SB) method to denote SGD, or one of its variants like ADAM (Kingma & Ba, 2015) and ADAGRAD (Duchi et al., 2011), with the proviso that the gradient approximation is based on a small mini-batch. In our setup, the batch Bk is randomly sampled and its size is kept fixed for every iteration. We use the term large-batch (LB) method to denote any training algorithm that uses a large mini-batch. In our experiments, ADAM is used to explore the behavior of both a small or a large batch method.
2 DRAWBACKS OF LARGE-BATCH METHODS
2.1 OUR MAIN OBSERVATION
As mentioned in Section 1, practitioners have observed a generalization gap when using large-batch methods for training deep learning models. Interestingly, this is despite the fact that large-batch methods usually yield a similar value of the training function as small-batch methods. One may put | 1609.04836#6 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 7 | In code, instead of iterating over examples, we now iterate over mini-batches of size 50:
for i in range(nb_epochs):
    np.random.shuffle(data)
    for batch in get_batches(data, batch_size=50):
        # Gradient of the loss for the current mini-batch
        params_grad = evaluate_gradient(loss_function, batch, params)
        params = params - learning_rate * params_grad
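The get_batches helper used above is not defined in the text; a minimal sketch, assuming data is an indexable sequence of training examples:

def get_batches(data, batch_size=50):
    # Yield successive mini-batches of batch_size examples; the last one may be smaller
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]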
# 3 Challenges
Vanilla mini-batch gradient descent, however, does not guarantee good convergence, but offers a few challenges that need to be addressed:
⢠Choosing a proper learning rate can be difï¬cult. A learning rate that is too small leads to painfully slow convergence, while a learning rate that is too large can hinder convergence and cause the loss function to ï¬uctuate around the minimum or even to diverge. | 1609.04747#7 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 7 | forth the following as possible causes for this phenomenon: (i) LB methods over-fit the model; (ii) LB methods are attracted to saddle points; (iii) LB methods lack the explorative properties of SB methods and tend to zoom-in on the minimizer closest to the initial point; (iv) SB and LB methods converge to qualitatively different minimizers with differing generalization properties. The data presented in this paper supports the last two conjectures.
The main observation of this paper is as follows:
The lack of generalization ability is due to the fact that large-batch methods tend to converge to sharp minimizers of the training function. These minimizers are characterized by a significant number of large positive eigenvalues in ∇²f(x), and tend to generalize less well. In contrast, small-batch methods converge to flat minimizers characterized by having numerous small eigenvalues of ∇²f(x). We have observed that the loss function landscape of deep neural networks is such that large-batch methods are attracted to regions with sharp minimizers and that, unlike small-batch methods, they are unable to escape basins of attraction of these minimizers. | 1609.04836#7 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 8 | Learning rate schedules [18] try to adjust the learning rate during training by e.g. annealing, i.e. reducing the learning rate according to a pre-defined schedule or when the change in objective between epochs falls below a threshold. These schedules and thresholds, however, have to be defined in advance and are thus unable to adapt to a dataset's characteristics [4].
• Additionally, the same learning rate applies to all parameter updates. If our data is sparse and our features have very different frequencies, we might not want to update all of them to the same extent, but perform a larger update for rarely occurring features.
⢠Another key challenge of minimizing highly non-convex error functions common for neural networks is avoiding getting trapped in their numerous suboptimal local minima. Dauphin et al. [5] argue that the difï¬culty arises in fact not from local minima but from saddle points, i.e. points where one dimension slopes up and another slopes down. These saddle points are usually surrounded by a plateau of the same error, which makes it notoriously hard for SGD to escape, as the gradient is close to zero in all dimensions.
# 4 Gradient descent optimization algorithms | 1609.04747#8 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 8 | The concept of sharp and flat minimizers has been discussed in the statistics and machine learning literature. (Hochreiter & Schmidhuber, 1997) (informally) define a flat minimizer x̄ as one for which the function varies slowly in a relatively large neighborhood of x̄. In contrast, a sharp minimizer x̂ is such that the function increases rapidly in a small neighborhood of x̂. A flat minimum can be described with low precision, whereas a sharp minimum requires high precision. The large sensitivity of the training function at a sharp minimizer negatively impacts the ability of the trained model to generalize on new data; see Figure 1 for a hypothetical illustration. This can be explained through the lens of the minimum description length (MDL) theory, which states that statistical models that require fewer bits to describe (i.e., are of low complexity) generalize better (Rissanen, 1983). Since flat minimizers can be specified with lower precision than sharp minimizers, they tend to have better generalization performance. Alternative explanations are proffered through the Bayesian view of learning (MacKay, 1992), and through the lens of free Gibbs energy; see e.g. Chaudhari et al. (2016).
[Figure 1 sketch labels: Training Function f(x), Flat Minimum, Sharp Minimum] | 1609.04836#8 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 9 | # 4 Gradient descent optimization algorithms
In the following, we will outline some algorithms that are widely used by the Deep Learning community to deal with the aforementioned challenges. We will not discuss algorithms that are infeasible to compute in practice for high-dimensional data sets, e.g. second-order methods such as Newton's method7.
# 4.1 Momentum
SGD has trouble navigating ravines, i.e. areas where the surface curves much more steeply in one dimension than in another [20], which are common around local optima. In these scenarios, SGD oscillates across the slopes of the ravine while only making hesitant progress along the bottom towards the local optimum as in Figure 2a. | 1609.04747#9 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 9 | [Figure 1 sketch labels: Training Function f(x), Testing Function, Flat Minimum, Sharp Minimum]
Figure 1: A Conceptual Sketch of Flat and Sharp Minima. The Y-axis indicates value of the loss function and the X-axis the variables (parameters)
2.2 NUMERICAL EXPERIMENTS
In this section, we present numerical results to support the observations made above. To this end, we make use of the visualization technique employed by (Goodfellow et al., 2014b) and a proposed heuristic metric of sharpness (Equation (4)). We consider 6 multi-class classification network configurations for our experiments; they are described in Table 1. The details about the data sets and network configurations are presented in Appendices A and B respectively. As is common for such problems, we use the mean cross entropy loss as the objective function f.
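As a rough sketch (not the authors' code), the mean cross-entropy objective over predicted class probabilities could be written as:

import numpy as np

def mean_cross_entropy(probs, labels):
    # probs: (N, K) predicted class probabilities; labels: (N,) integer class ids
    eps = 1e-12
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))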
The networks were chosen to exemplify popular configurations used in practice like AlexNet (Krizhevsky et al., 2012) and VGGNet (Simonyan & Zisserman, 2014). Results on other networks
Table 1: Network Configurations | 1609.04836#9 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 10 | (a) SGD without momentum (b) SGD with momentum
Figure 2: Source: Genevieve B. Orr
Momentum [17] is a method that helps accelerate SGD in the relevant direction and dampens oscillations as can be seen in Figure 2b. It does this by adding a fraction γ of the update vector of the past time step to the current update vector8
vt = γvt−1 + η∇θJ(θ),   θ = θ − vt   (4)
The momentum term γ is usually set to 0.9 or a similar value.
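A minimal sketch of this update in code, reusing the (assumed) evaluate_gradient, loss_function, data, params and nb_epochs from the earlier fragments, with numpy imported as np:

gamma, learning_rate = 0.9, 0.01
v = np.zeros_like(params)  # velocity, initialized to zero

for i in range(nb_epochs):
    params_grad = evaluate_gradient(loss_function, data, params)
    v = gamma * v + learning_rate * params_grad  # decaying accumulation of past gradients
    params = params - v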
Essentially, when using momentum, we push a ball down a hill. The ball accumulates momentum as it rolls downhill, becoming faster and faster on the way (until it reaches its terminal velocity, if there is air resistance, i.e. γ < 1). The same thing happens to our parameter updates: The momentum term increases for dimensions whose gradients point in the same directions and reduces updates for dimensions whose gradients change directions. As a result, we gain faster convergence and reduced oscillation.
# 4.2 Nesterov accelerated gradient
However, a ball that rolls down a hill, blindly following the slope, is highly unsatisfactory. We would like to have a smarter ball, a ball that has a notion of where it is going so that it knows to slow down before the hill slopes up again. | 1609.04747#10 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 10 | Table 1: Network Configurations
Name  Network Type             Architecture  Data set
F1    Fully Connected          Section B.1   MNIST (LeCun et al., 1998a)
F2    Fully Connected          Section B.2   TIMIT (Garofolo et al., 1993)
C1    (Shallow) Convolutional  Section B.3   CIFAR-10 (Krizhevsky & Hinton, 2009)
C2    (Deep) Convolutional     Section B.4   CIFAR-10
C3    (Shallow) Convolutional  Section B.3   CIFAR-100 (Krizhevsky & Hinton, 2009)
C4    (Deep) Convolutional     Section B.4   CIFAR-100
and using other initialization strategies, activation functions, and data sets showed similar behavior. Since the goal of our work is not to achieve state-of-the-art accuracy or time-to-solution on these tasks but rather to characterize the nature of the minima for LB and SB methods, we only describe the final testing accuracy in the main paper and ignore convergence trends. | 1609.04836#10 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 11 | Nesterov accelerated gradient (NAG) [14] is a way to give our momentum term this kind of prescience. We know that we will use our momentum term γvt−1 to move the parameters θ. Computing θ − γvt−1 thus gives us an approximation of the next position of the parameters (the gradient is missing for the full update), a rough idea where our parameters are going to be. We can now effectively look ahead by calculating the gradient not w.r.t. our current parameters θ but w.r.t. the approximate future position of our parameters:
vt = γvt−1 + η∇θJ(θ − γvt−1),   θ = θ − vt   (5)
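In code, the only change relative to the momentum sketch above is that the gradient is evaluated at the approximate future position (same assumed helpers and hyperparameters):

for i in range(nb_epochs):
    lookahead_params = params - gamma * v  # approximate future position of the parameters
    params_grad = evaluate_gradient(loss_function, data, lookahead_params)
    v = gamma * v + learning_rate * params_grad
    params = params - v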
7 https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization
8 Some implementations exchange the signs in the equations. | 1609.04747#11 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 11 | For all experiments, we used 10% of the training data as batch size for the large-batch experiments and 256 data points for small-batch experiments. We used the ADAM optimizer for both regimes. Experiments with other optimizers for the large-batch experiments, including ADAGRAD (Duchi et al., 2011), SGD (Sutskever et al., 2013) and adaQN (Keskar & Berahas, 2016), led to similar results. All experiments were conducted 5 times from different (uniformly distributed random) starting points and we report both mean and standard-deviation of measured quantities. The baseline performance for our setup is presented in Table 2. From this, we can observe that on all networks, both approaches led to high training accuracy but there is a significant difference in the generalization performance. The networks were trained, without any budget or limits, until the loss function ceased to improve.
Table 2: Performance of small-batch (SB) and large-batch (LB) variants of ADAM on the 6 networks listed in Table 1 | 1609.04836#11 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 12 | Figure 3: Nesterov update (Source: G. Hinton's lecture 6c)
Again, we set the momentum term γ to a value of around 0.9. While Momentum first computes the current gradient (small blue vector in Figure 3) and then takes a big jump in the direction of the updated accumulated gradient (big blue vector), NAG first makes a big jump in the direction of the previous accumulated gradient (brown vector), measures the gradient and then makes a correction (green vector). This anticipatory update prevents us from going too fast and results in increased responsiveness, which has significantly increased the performance of RNNs on a number of tasks [2].9
Now that we are able to adapt our updates to the slope of our error function and speed up SGD in turn, we would also like to adapt our updates to each individual parameter to perform larger or smaller updates depending on their importance.
# 4.3 Adagrad | 1609.04747#12 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 12 | Table 2: Performance of small-batch (SB) and large-batch (LB) variants of ADAM on the 6 networks listed in Table 1
Name  SB Training Accuracy  LB Training Accuracy  SB Testing Accuracy  LB Testing Accuracy
F1    99.66% ± 0.05%        99.92% ± 0.01%        98.03% ± 0.07%       97.81% ± 0.07%
F2    99.99% ± 0.03%        98.35% ± 2.08%        64.02% ± 0.2%        59.45% ± 1.05%
C1    99.89% ± 0.02%        99.66% ± 0.2%         80.04% ± 0.12%       77.26% ± 0.42%
C2    99.99% ± 0.04%        99.99% ± 0.01%        89.24% ± 0.12%       87.26% ± 0.07%
C3    99.56% ± 0.44%        99.88% ± 0.30%        49.58% ± 0.39%       46.45% ± 0.43%
C4    99.10% ± 1.23%        99.57% ± 1.84%        63.08% ± 0.5%        57.81% ± 0.17% | 1609.04836#12 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 13 | # 4.3 Adagrad
Adagrad [8] is an algorithm for gradient-based optimization that does just this: It adapts the learning rate to the parameters, performing larger updates for infrequent and smaller updates for frequent parameters. For this reason, it is well-suited for dealing with sparse data. Dean et al. [6] have found that Adagrad greatly improved the robustness of SGD and used it for training large-scale neural nets at Google, which, among other things, learned to recognize cats in Youtube videos10. Moreover, Pennington et al. [16] used Adagrad to train GloVe word embeddings, as infrequent words require much larger updates than frequent ones.
Previously, we performed an update for all parameters θ at once as every parameter θi used the same learning rate η. As Adagrad uses a different learning rate for every parameter θi at every time step t, we first show Adagrad's per-parameter update, which we then vectorize. For brevity, we set gt,i to be the gradient of the objective function w.r.t. the parameter θi at time step t:
gt,i = ∇θt J(θt,i)   (6)
The SGD update for every parameter θi at each time step t then becomes: | 1609.04747#13 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 13 | We emphasize that the generalization gap is not due to over-fitting or over-training as commonly observed in statistics. This phenomenon manifests itself in the form of a testing accuracy curve that peaks at a certain iterate and then decays due to the model learning idiosyncrasies of the training data. This is not what we observe in our experiments; see Figure 2 for the training-testing curve of the F2 and C1 networks, which are representative of the rest. As such, early-stopping heuristics aimed at preventing models from over-fitting would not help reduce the generalization gap. The difference between the training and testing accuracies for the networks is due to the specific choice of the network (e.g. AlexNet, VGGNet etc.) and is not the focus of this study. Rather, our goal is to study the source of the testing performance disparity of the two regimes, SB and LB, on a given network model.
# 2.2.1 PARAMETRIC PLOTS | 1609.04836#13 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 14 | gt,i = ∇θt J(θt,i)   (6)
The SGD update for every parameter θi at each time step t then becomes:
θt+1,i = θt,i − η · gt,i   (7)
In its update rule, Adagrad modifies the general learning rate η at each time step t for every parameter θi based on the past gradients that have been computed for θi:
θt+1,i = θt,i − η / √(Gt,ii + ε) · gt,i   (8)
Gt ∈ Rd×d here is a diagonal matrix where each diagonal element i, i is the sum of the squares of the gradients w.r.t. θi up to time step t, while ε is a smoothing term that avoids division by zero (usually on the order of 1e−8). Interestingly, without the square root operation, the algorithm performs much worse.
9Refer to http://cs231n.github.io/neural-networks-3/ for another explanation of the intuitions behind NAG, while Ilya Sutskever gives a more detailed overview in his PhD thesis [19]. | 1609.04747#14 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 14 | # 2.2.1 PARAMETRIC PLOTS
We first present parametric 1-D plots of the function as described in (Goodfellow et al., 2014b). Let x*_s and x*_l indicate the solutions obtained by running ADAM using small and large batch sizes respectively. We plot the loss function, on both training and testing data sets, along a line segment containing the two points. Specifically, for α ∈ [−1, 2], we plot the function f(αx*_l + (1 − α)x*_s) and also superimpose the classification accuracy at the intermediate points; see Figure 3. For this
1The code to reproduce the parametric plot on exemplary networks can be found in our GitHub repository: https://github.com/keskarnitish/large-batch-training.
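A minimal sketch of this 1-D interpolation, assuming loss_fn(weights) returns the mean loss on a data set and x_small, x_large are the flattened solution vectors (the names are illustrative, not from the paper or its code):

import numpy as np

def parametric_plot_values(loss_fn, x_small, x_large, num=25):
    # Evaluate the loss along the segment alpha * x_large + (1 - alpha) * x_small
    alphas = np.linspace(-1.0, 2.0, num)
    losses = [loss_fn(a * x_large + (1.0 - a) * x_small) for a in alphas]
    return alphas, np.array(losses)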
(a) Network F2 (b) Network C1
Figure 2: Training and testing accuracy for SB and LB methods as a function of epochs. | 1609.04836#14 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 15 | 10http://www.wired.com/2012/06/google-x-neural-network/ 11Duchi et al. [8] give this matrix as an alternative to the full matrix containing the outer products of all previous gradients, as the computation of the matrix square root is infeasible even for a moderate number of parameters d.
As G_t contains the sum of the squares of the past gradients w.r.t. all parameters θ along its diagonal, we can now vectorize our implementation by performing an element-wise matrix-vector multiplication ⊙ between G_t and g_t:
θ_{t+1} = θ_t − η / √(G_t + ε) ⊙ g_t   (9)
One of Adagrad's main benefits is that it eliminates the need to manually tune the learning rate. Most implementations use a default value of 0.01 and leave it at that.
Adagrad's main weakness is its accumulation of the squared gradients in the denominator: since every added term is positive, the accumulated sum keeps growing during training. This in turn causes the learning rate to shrink and eventually become infinitesimally small, at which point the algorithm is no longer able to acquire additional knowledge. The following algorithms aim to resolve this flaw.
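A hedged sketch of the vectorized update in Eq. (9), with `grad_fn` again a hypothetical gradient callable:

```python
import numpy as np

def adagrad(grad_fn, theta0, lr=0.01, eps=1e-8, steps=1000):
    """Vectorized Adagrad (Eq. 9): the element-wise product replaces the per-parameter loop."""
    theta = np.asarray(theta0, dtype=float).copy()
    G = np.zeros_like(theta)                 # running sum of squared gradients
    for _ in range(steps):
        g = grad_fn(theta)
        G += g * g
        theta -= lr / np.sqrt(G + eps) * g   # element-wise scaling; no manual learning-rate tuning
    return theta

theta_star = adagrad(lambda th: th, theta0=[1.0, -2.0, 3.0])   # minimizes 0.5 * ||theta||^2
```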
# 4.4 Adadelta | 1609.04747#15 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 15 | # Accuracy
(a) Network F2 (b) Network C1
Figure 2: Training and testing accuracy for SB and LB methods as a function of epochs.
experiment, we randomly chose a pair of SB and LB minimizers from the 5 trials used to generate the data in Table 2. The plots show that the LB minima are strikingly sharper than the SB minima in this one-dimensional manifold. The plots in Figure 3 only explore a linear slice of the function, but in Figure 7 in Appendix D we plot f(sin(απ/2) x*_l + cos(απ/2) x*_s) to monitor the function along a curved path between the two minimizers. There too, the relative sharpness of the minima is evident.
2.2.2 SHARPNESS OF MINIMA | 1609.04836#15 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 16 | # 4.4 Adadelta
Adadelta [22] is an extension of Adagrad that seeks to reduce its aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, Adadelta restricts the window of accumulated past gradients to some fixed size w.
Instead of inefficiently storing w previous squared gradients, the sum of gradients is recursively defined as a decaying average of all past squared gradients. The running average E[g²]_t at time step t then depends (as a fraction γ, similarly to the Momentum term) only on the previous average and the current gradient:
E[g²]_t = γ E[g²]_{t−1} + (1 − γ) g²_t   (10)
We set γ to a similar value as the momentum term, around 0.9. For clarity, we now rewrite our vanilla SGD update in terms of the parameter update vector Δθ_t:
Δθ_t = −η · g_{t,i};  θ_{t+1} = θ_t + Δθ_t   (11)
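For illustration, the decaying average in Eq. (10) is a one-line update (a sketch, not the authors' code):

```python
def running_avg_sq_grad(E_g2, g, gamma=0.9):
    """Eq. (10): E[g^2]_t = gamma * E[g^2]_{t-1} + (1 - gamma) * g_t**2."""
    return gamma * E_g2 + (1.0 - gamma) * g ** 2

E_g2 = 0.0
for g in [0.5, -0.3, 0.1]:           # toy gradient sequence
    E_g2 = running_avg_sq_grad(E_g2, g)
```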
The parameter update vector of Adagrad that we derived previously thus takes the form:
Δθ_t = −η / √(G_t + ε) ⊙ g_t   (12) | 1609.04747#16 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 16 | 2.2.2 SHARPNESS OF MINIMA
So far, we have used the term sharp minimizer loosely, but we noted that this concept has received attention in the literature (Hochreiter & Schmidhuber, 1997). Sharpness of a minimizer can be characterized by the magnitude of the eigenvalues of ∇²f(x), but given the prohibitive cost of this computation in deep learning applications, we employ a sensitivity measure that, although imperfect, is computationally feasible, even for large networks. It is based on exploring a small neighborhood of a solution and computing the largest value that the function f can attain in that neighborhood. We use that value to measure the sensitivity of the training function at the given local minimizer. Now, since the maximization process is not accurate, and to avoid being misled by the case when a large value of f is attained only in a tiny subspace of R^n, we perform the maximization both in the entire space R^n as well as in random manifolds. For that purpose, we introduce an n × p matrix A, whose columns are randomly generated. Here p determines the dimension of the manifold, which in our experiments is chosen as p = 100. | 1609.04836#16 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 17 | The parameter update vector of Adagrad that we derived previously thus takes the form:
Δθ_t = −η / √(G_t + ε) ⊙ g_t   (12)
We now simply replace the diagonal matrix G_t with the decaying average over past squared gradients E[g²]_t:
Δθ_t = −η / √(E[g²]_t + ε) · g_t   (13)
As the denominator is just the root mean squared (RMS) error criterion of the gradient, we can replace it with the criterion short-hand:
Δθ_t = −(η / RMS[g]_t) · g_t   (14)
The authors note that the units in this update (as well as in SGD, Momentum, or Adagrad) do not match, i.e. the update should have the same hypothetical units as the parameter. To realize this, they first define another exponentially decaying average, this time not of squared gradients but of squared parameter updates:
E[Δθ²]_t = γ E[Δθ²]_{t−1} + (1 − γ) Δθ²_t   (15)
The root mean squared error of parameter updates is thus:
RMS[Δθ]_t = √(E[Δθ²]_t + ε)
(16) | 1609.04747#17 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 17 | Specifically, let C_ε denote a box around the solution over which the maximization of f is performed, and let A ∈ R^{n×p} be the matrix defined above. In order to ensure invariance of sharpness to problem dimension and sparsity, we define the constraint set C_ε as:
C_ε = {z ∈ R^p : −ε(|(A⁺x)_i| + 1) ≤ z_i ≤ ε(|(A⁺x)_i| + 1)  ∀ i ∈ {1, 2, ..., p}},   (3) where A⁺ denotes the pseudo-inverse of A. Thus ε controls the size of the box. We can now define our measure of sharpness (or sensitivity). Metric 2.1. Given x ∈ R^n, ε > 0, and A ∈ R^{n×p}, we define the (C_ε, A)-sharpness of f at x as:
φ_{x,f}(ε, A) := (max_{y ∈ C_ε} f(x + Ay) − f(x)) / (1 + f(x)) × 100.   (4) | 1609.04836#17 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 18 | 6
The root mean squared error of parameter updates is thus:
RMS[Δθ]_t = √(E[Δθ²]_t + ε)
(16)
Since RMS[Δθ]_t is unknown, we approximate it with the RMS of parameter updates until the previous time step. Replacing the learning rate η in the previous update rule with RMS[Δθ]_{t−1} finally yields the Adadelta update rule:
Δθ_t = −(RMS[Δθ]_{t−1} / RMS[g]_t) · g_t;  θ_{t+1} = θ_t + Δθ_t   (17)
With Adadelta, we do not even need to set a default learning rate, as it has been eliminated from the update rule.
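Putting Eqs. (10) and (13)-(17) together, a minimal NumPy sketch of one Adadelta step (assuming a hypothetical `grad_fn`) looks as follows; note that no global learning rate appears:

```python
import numpy as np

def adadelta_step(theta, E_g2, E_dx2, grad_fn, gamma=0.9, eps=1e-8):
    """One Adadelta step (Eqs. 10, 15-17); no global learning rate is required."""
    g = grad_fn(theta)
    E_g2 = gamma * E_g2 + (1 - gamma) * g ** 2                  # running average of squared gradients, Eq. (10)
    delta = -np.sqrt(E_dx2 + eps) / np.sqrt(E_g2 + eps) * g     # RMS[dtheta]_{t-1} / RMS[g]_t * g_t, Eq. (17)
    E_dx2 = gamma * E_dx2 + (1 - gamma) * delta ** 2            # running average of squared updates, Eq. (15)
    return theta + delta, E_g2, E_dx2
```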
# 4.5 RMSprop
RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in Lecture 6e of his Coursera Class12.
RMSprop and Adadelta have both been developed independently around the same time stemming from the need to resolve Adagrad's radically diminishing learning rates. RMSprop in fact is identical to the first update vector of Adadelta that we derived above:
E[g²]_t = 0.9 E[g²]_{t−1} + 0.1 g²_t;  θ_{t+1} = θ_t − η / √(E[g²]_t + ε) · g_t   (18) | 1609.04747#18 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 18 | φ_{x,f}(ε, A) := (max_{y ∈ C_ε} f(x + Ay) − f(x)) / (1 + f(x)) × 100.   (4)
Unless specified otherwise, we use this metric for sharpness for the rest of the paper; if A is not specified, it is assumed to be the identity matrix, I_n. (We note in passing that, in the convex optimization literature, the term sharp minimum has a different definition (Ferris, 1988), but that concept is not useful for our purposes.)
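As a rough illustration of Metric 2.1 (Eqs. (3)-(4)), the following sketch estimates the sharpness by random sampling inside the box C_ε instead of the L-BFGS-B maximization used in the paper; `f` is a hypothetical callable returning the training loss for a flattened parameter vector `x`:

```python
import numpy as np

def sharpness(f, x, eps=1e-3, A=None, num_samples=1000, seed=0):
    """Random-search estimate of the (C_eps, A)-sharpness of f at x, Eqs. (3)-(4)."""
    rng = np.random.default_rng(seed)
    if A is None:                                   # full space, A = I_n
        bound = eps * (np.abs(x) + 1.0)
        apply_A = lambda y: y
    else:                                           # random manifold spanned by the columns of A
        bound = eps * (np.abs(np.linalg.pinv(A) @ x) + 1.0)
        apply_A = lambda y: A @ y
    fx = f(x)
    best = fx
    for _ in range(num_samples):
        y = rng.uniform(-bound, bound)              # sample a point inside the box C_eps
        best = max(best, f(x + apply_A(y)))
    return (best - fx) / (1.0 + fx) * 100.0
```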
In Tables 3 and 4 we present the values of the sharpness metric (4) for the minimizers of the various problems. Table 3 explores the full space (i.e., A = I) whereas Table 4 uses a randomly sampled n × 100 dimensional matrix A. We report results with two values of ε, (10⁻³, 5·10⁻⁴). In all experiments, we solve the maximization problem in Equation (7) inexactly by applying 10 iterations of L-BFGS-B. This limit on the number of iterations was necessitated by the
(a) F1 (b) F2 (c) C1 (d) C2 (e) C3 (f) C4 | 1609.04836#18 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 19 | Elg?|e = 0.9E |g? |e-1 + 0-197 0 (18) JE +e Orsi = 01 â
RMSprop as well divides the learning rate by an exponentially decaying average of squared gradients. Hinton suggests γ to be set to 0.9, while a good default value for the learning rate η is 0.001.
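A minimal sketch of one RMSprop step as in Eq. (18), with the suggested defaults γ = 0.9 and η = 0.001 (`grad_fn` is a hypothetical gradient callable):

```python
import numpy as np

def rmsprop_step(theta, E_g2, grad_fn, lr=0.001, gamma=0.9, eps=1e-8):
    """One RMSprop step, Eq. (18)."""
    g = grad_fn(theta)
    E_g2 = gamma * E_g2 + (1 - gamma) * g ** 2
    return theta - lr * g / np.sqrt(E_g2 + eps), E_g2
```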
# 4.6 Adam
Adaptive Moment Estimation (Adam) [10] is another method that computes adaptive learning rates for each parameter. In addition to storing an exponentially decaying average of past squared gradients vt like Adadelta and RMSprop, Adam also keeps an exponentially decaying average of past gradients mt, similar to momentum:
m_t = β_1 m_{t−1} + (1 − β_1) g_t;  v_t = β_2 v_{t−1} + (1 − β_2) g²_t   (19)
mt and vt are estimates of the ï¬rst moment (the mean) and the second moment (the uncentered variance) of the gradients respectively, hence the name of the method. As mt and vt are initialized as vectors of 0âs, the authors of Adam observe that they are biased towards zero, especially during the initial time steps, and especially when the decay rates are small (i.e. β1 and β2 are close to 1). | 1609.04747#19 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04747 | 20 | They counteract these biases by computing bias-corrected ï¬rst and second moment estimates:
m̂_t = m_t / (1 − β_1^t);  v̂_t = v_t / (1 − β_2^t)   (20)
# 12http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
They then use these to update the parameters just as we have seen in Adadelta and RMSprop, which yields the Adam update rule:
θ_{t+1} = θ_t − η / (√(v̂_t) + ε) · m̂_t   (21)
The authors propose default values of 0.9 for β_1, 0.999 for β_2, and 10⁻⁸ for ε. They show empirically that Adam works well in practice and compares favorably to other adaptive learning-method algorithms.
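Combining Eqs. (19)-(21), one Adam step can be sketched as follows (an illustration under the stated defaults, not the reference implementation; `grad_fn` is hypothetical and `t` starts at 1):

```python
import numpy as np

def adam_step(theta, m, v, t, grad_fn, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step, Eqs. (19)-(21); t is the 1-based step counter."""
    g = grad_fn(theta)
    m = beta1 * m + (1 - beta1) * g            # first moment estimate, Eq. (19)
    v = beta2 * v + (1 - beta2) * g ** 2       # second moment estimate, Eq. (19)
    m_hat = m / (1 - beta1 ** t)               # bias correction, Eq. (20)
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v   # Eq. (21)
```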
# 4.7 AdaMax
The v_t factor in the Adam update rule scales the gradient inversely proportionally to the ℓ_2 norm of the past gradients (via the v_{t−1} term) and current gradient |g_t|²:
v_t = β_2 v_{t−1} + (1 − β_2) |g_t|²   (22)
We can generalize this update to the ℓ_p norm. Note that Kingma and Ba also parameterize β_2 as β_2^p:
vt = βp 2 vtâ1 + (1 â βp 2 )|gt|p (23) | 1609.04747#20 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 20 | large cost of evaluating the true objective f. Both tables show a 1-2 order-of-magnitude difference between the values of our metric for the SB and LB regimes. These results reinforce the view that the solutions obtained by a large-batch method defines points of larger sensitivity of the training function. In Appedix [E] we describe approaches to attempt to remedy this generalization problem of LB methods. These approaches include data augmentation, conservative training and adversarial training. Our preliminary findings show that these approaches help reduce the generalization gap but still lead to relatively sharp minimizers and as such, do not completely remedy the problem. Note that Metric 2.1 is closely related to the spectrum of V? f(a). Assuming ⬠to be small enough, when A = [,,, the value (a) relates to the largest eigenvalue of V? f(a) and when A is randomly sampled it approximates the Ritz value of V? f(a) projected onto the column-space of A.
Table 3: Sharpness of Minima in Full Space; ⬠is defined in (3). | 1609.04836#20 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 21 | vt = βp 2 vtâ1 + (1 â βp 2 )|gt|p (23)
Norms for large p values generally become numerically unstable, which is why ℓ_1 and ℓ_2 norms are most common in practice. However, ℓ_∞ also generally exhibits stable behavior. For this reason, the authors propose AdaMax and show that v_t with ℓ_∞ converges to the following more stable value. To avoid confusion with Adam, we use u_t to denote the infinity norm-constrained v_t:
u_t = β_2^∞ v_{t−1} + (1 − β_2^∞) |g_t|^∞ = max(β_2 · v_{t−1}, |g_t|)   (24)
We can now plug this into the Adam update equation by replacing √(v̂_t) + ε with u_t to obtain the AdaMax update rule:
θ_{t+1} = θ_t − (η / u_t) · m̂_t   (25)
Note that as u_t relies on the max operation, it is not as susceptible to bias towards zero as m_t and v_t in Adam, which is why we do not need to compute a bias correction for u_t. Good default values are again η = 0.002, β_1 = 0.9, and β_2 = 0.999.
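A sketch of one AdaMax step following Eqs. (24)-(25); the small constant added to u_t is a practical safeguard that is not part of Eq. (25), and `grad_fn` is again a hypothetical gradient callable:

```python
import numpy as np

def adamax_step(theta, m, u, t, grad_fn, lr=0.002, beta1=0.9, beta2=0.999):
    """One AdaMax step, Eqs. (24)-(25); only m_t needs bias correction."""
    g = grad_fn(theta)
    m = beta1 * m + (1 - beta1) * g
    u = np.maximum(beta2 * u, np.abs(g))           # Eq. (24)
    m_hat = m / (1 - beta1 ** t)
    return theta - lr * m_hat / (u + 1e-8), m, u   # 1e-8 is a numerical safeguard, not in Eq. (25)
```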
# 4.8 Nadam | 1609.04747#21 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 21 | 6
Published as a conference paper at ICLR 2017
Table 3: Sharpness of Minima in Full Space; ε is defined in (3).
e=10°° e=5-10-4 | LB LB Fi 205.14 £ 69.52 0.27 | 42.90 £17.14 Fy 310.64 + 38.46 0.05 | 93.15 +6.81 Cy 707.23 + 43.04 0.88 | 227.31 + 23.23 Co 925.32 + 38.29 0.86 | 175.31 + 18.28 C3 258.75 + 8.96 0.99 | 105.11 + 13.22 C4 421.84 + 36.97 + 0.87 | 109.35 + 16.57
Table 4: Sharpness of Minima in Random Subspaces of Dimension 100 | 1609.04836#21 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 22 | # 4.8 Nadam
As we have seen before, Adam can be viewed as a combination of RMSprop and momentum: RMSprop contributes the exponentially decaying average of past squared gradients v_t, while momentum accounts for the exponentially decaying average of past gradients m_t. We have also seen that Nesterov accelerated gradient (NAG) is superior to vanilla momentum.
Nadam (Nesterov-accelerated Adaptive Moment Estimation) [7] thus combines Adam and NAG. In order to incorporate NAG into Adam, we need to modify its momentum term mt.
First, let us recall the momentum update rule using our current notation :
g_t = ∇_{θ_t} J(θ_t);  m_t = γ m_{t−1} + η g_t;  θ_{t+1} = θ_t − m_t   (26)
where J is our objective function, γ is the momentum decay term, and η is our step size. Expanding the third equation above yields:
θ_{t+1} = θ_t − (γ m_{t−1} + η g_t)   (27)
This demonstrates again that momentum involves taking a step in the direction of the previous momentum vector and a step in the direction of the current gradient. | 1609.04747#22 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 22 | Table 4: Sharpness of Minima in Random Subspaces of Dimension 100
«= 107% e=5- LB SB Fy r 0.00 9.22 + 0.56 0.05 + 0.00 £0.14 Fy 0.02 23.63 0.05 + 0.00 0.19 Cy 0.23 | 137.25 0.71 £0.15 7.48 C2 £0.34 25.09 0.31 + 0.08 0.52 C3 2.20 | 236.03 4.03 + 1.45 27.39 Cy | 6.05 £1.13 72.99 + 10.96 | 1.89+0.33 | 19.85 + 4.12
We conclude this section by noting that the sharp minimizers identified in our experiments do not resemble a cone, i.e., the function does not increase rapidly along all (or even most) directions. By sampling the loss function in a neighborhood of LB solutions, we observe that it rises steeply only along a small dimensional subspace (e.g. 5% of the whole space); on most other directions, the function is relatively flat.
# 3 SUCCESS OF SMALL-BATCH METHODS | 1609.04836#22 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 23 | This demonstrates again that momentum involves taking a step in the direction of the previous momentum vector and a step in the direction of the current gradient.
NAG then allows us to perform a more accurate step in the gradient direction by updating the parameters with the momentum step before computing the gradient. We thus only need to modify the gradient gt to arrive at NAG:
g_t = ∇_{θ_t} J(θ_t − γ m_{t−1});  m_t = γ m_{t−1} + η g_t;  θ_{t+1} = θ_t − m_t   (28)
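A minimal sketch of one NAG step as in Eq. (28), with the gradient evaluated at the look-ahead point (`grad_fn` hypothetical):

```python
def nag_step(theta, m, grad_fn, lr=0.01, gamma=0.9):
    """One Nesterov accelerated gradient step, Eq. (28)."""
    g = grad_fn(theta - gamma * m)     # gradient at the look-ahead point theta - gamma * m_{t-1}
    m = gamma * m + lr * g
    return theta - m, m
```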
Dozat proposes to modify NAG the following way: rather than applying the momentum step twice (one time for updating the gradient g_t and a second time for updating the parameters θ_{t+1}), we now apply the look-ahead momentum vector directly to update the current parameters:
gt = âθtJ(θt) mt = γmtâ1 + ηgt θt+1 = θt â (γmt + ηgt) (29) | 1609.04747#23 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 23 | # 3 SUCCESS OF SMALL-BATCH METHODS
It is often reported that when increasing the batch size for a problem, there exists a threshold after which there is a deterioration in the quality of the model. This behavior can be observed for the F2 and C1 networks in Figure 4. In both of these experiments, there is a batch size (≈ 15000 for F2 and ≈ 500 for C1) after which there is a large drop in testing accuracy. Notice also that the upward drift in value of the sharpness is considerably reduced around this threshold. Similar thresholds exist for the other networks in Table 1.
Let us now consider the behavior of SB methods, which use noisy gradients in the step computation. From the results reported in the previous section, it appears that noise in the gradient pushes the iterates out of the basin of attraction of sharp minimizers and encourages movement towards a ï¬atter minimizer where noise will not cause exit from that basin. When the batch size is greater than the threshold mentioned above, the noise in the stochastic gradient is not sufï¬cient to cause ejection from the initial basin leading to convergence to sharper a minimizer. | 1609.04836#23 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 24 | Notice that rather than utilizing the previous momentum vector mtâ1 as in Equation 27, we now use the current momentum vector mt to look ahead. In order to add Nesterov momentum to Adam, we can thus similarly replace the previous momentum vector with the current momentum vector. First, recall that the Adam update rule is the following (note that we do not need to modify Ëvt):
m_t = β_1 m_{t−1} + (1 − β_1) g_t;  m̂_t = m_t / (1 − β_1^t);  θ_{t+1} = θ_t − η / (√(v̂_t) + ε) · m̂_t   (30)
Expanding the second equation with the definitions of m̂_t and m_t in turn gives us:
θ_{t+1} = θ_t − η / (√(v̂_t) + ε) · (β_1 m_{t−1} / (1 − β_1^t) + (1 − β_1) g_t / (1 − β_1^t))   (31)
Note that β_1 m_{t−1} / (1 − β_1^t) is just the bias-corrected estimate of the momentum vector of the previous time step. We can thus replace it with m̂_{t−1}:
n ~ , = fi)ge _ Tee Fe t 1-5 Our =O ) (32) | 1609.04747#24 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 24 | To explore that in more detail, consider the following experiment. We train the network for 100 epochs using ADAM with a batch size of 256, and retain the iterate after each epoch in memory. Using these 100 iterates as starting points we train the network using a LB method for 100 epochs and receive a 100 piggybacked (or warm-started) large-batch solutions. We plot in Figure 5 the testing accuracy and sharpness of these large-batch solutions, along with the testing accuracy of the small-batch iterates. Note that when warm-started with only a few initial epochs, the LB method does not yield a generalization improvement. The concomitant sharpness of the iterates also stays high. On the other hand, after certain number of epochs of warm-starting, the accuracy improves and sharpness of the large-batch iterates drop. This happens, apparently, when the SB method has ended its exploration phase and discovered a ï¬at minimizer; the LB method is then able to converge towards it, leading to good testing accuracy.
It has been speculated that LB methods tend to be attracted to minimizers close to the starting point x0, whereas SB methods move away and locate minimizers that are farther away. Our numerical
(a) F2 (b) C1 | 1609.04836#24 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 25 | n ~ , = fi)ge _ Tee Fe t 1-5 Our =O ) (32)
This equation looks very similar to our expanded momentum term in Equation 27. We can now add Nesterov momentum just as we did in Equation 29 by simply replacing this bias-corrected estimate of the momentum vector of the previous time step m̂_{t−1} with the bias-corrected estimate of the current momentum vector m̂_t, which gives us the Nadam update rule:
θ_{t+1} = θ_t − η / (√(v̂_t) + ε) · (β_1 m̂_t + (1 − β_1) g_t / (1 − β_1^t))   (33)
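A sketch of one Nadam step implementing Eq. (33) directly (`grad_fn` hypothetical; `t` starts at 1):

```python
import numpy as np

def nadam_step(theta, m, v, t, grad_fn, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Nadam step, Eq. (33): Adam with the current, look-ahead momentum estimate."""
    g = grad_fn(theta)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    step = beta1 * m_hat + (1 - beta1) * g / (1 - beta1 ** t)     # the bracketed term of Eq. (33)
    return theta - lr * step / (np.sqrt(v_hat) + eps), m, v
```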
# 4.9 Visualization of algorithms
The following two figures provide some intuitions towards the optimization behaviour of the presented optimization algorithms.13
In Figure 4a, we see the path they took on the contours of a loss surface (the Beale function). All started at the same point and took different paths to reach the minimum. Note that Adagrad, Adadelta, and RMSprop headed off immediately in the right direction and converged similarly fast, while Momentum and NAG were led off-track, evoking the image of a ball rolling down the hill. NAG, however, was able to correct its course sooner due to its increased responsiveness by looking ahead and headed to the minimum. | 1609.04747#25 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 25 | 7
(a) F2 (b) C1
Figure 4: Testing Accuracy and Sharpness v/s Batch Size. The X-axis corresponds to the batch size used for training the network for 100 epochs, left Y-axis corresponds to the testing accuracy at the final iterate and right Y-axis corresponds to the sharpness of that iterate. We report sharpness for two values of ε: 10⁻³ and 5·10⁻⁴.
(a) F2 (b) C1
Figure 5: Warm-starting experiments. The upper figures report the testing accuracy of the SB method (blue line) and the testing accuracy of the warm started (piggybacked) LB method (red line), as a function of the number of epochs of the SB method. The lower figures plot the sharpness measure (4) for the solutions obtained by the piggybacked LB method v/s the number of warm-starting epochs of the SB method.
(a) F2 (b) C1
Figure 6: Sharpness v/s Cross Entropy Loss for SB and LB methods. | 1609.04836#25 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 26 | Figure 4b shows the behaviour of the algorithms at a saddle point, i.e. a point where one dimension has a positive slope, while the other dimension has a negative slope, which poses a difficulty for SGD as we mentioned before. Notice here that SGD, Momentum, and NAG find it difficult to break symmetry, although the latter two eventually manage to escape the saddle point, while Adagrad, RMSprop, and Adadelta quickly head down the negative slope, with Adadelta leading the charge.
(a) SGD optimization on loss surface contours; (b) SGD optimization on saddle point. Legend: SGD, Momentum, NAG, Adagrad, Adadelta.
Figure 4: Source and full animations: Alec Radford
As we can see, the adaptive learning-rate methods, i.e. Adagrad, Adadelta, RMSprop, and Adam are most suitable and provide the best convergence for these scenarios.
# 4.10 Which optimizer to use? | 1609.04747#26 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 26 | (a) F2 (b) C1
# 8 av? é
Figure 6: Sharpness v/s Cross Entropy Loss for SB and LB methods.
experiments support this view: we observed that the ratio of ‖x*_l − x_0‖_2 and ‖x*_s − x_0‖_2 was in the range of 3-10.
In order to further illustrate the qualitative difference between the solutions obtained by SB and LB methods, we plot in Figure 6 our sharpness measure (4) against the loss function (cross entropy) for one random trial of the F2 and C1 networks. For larger values of the loss function, i.e., near the initial point, SB and LB method yield similar values of sharpness. As the loss function reduces, the sharpness of the iterates corresponding to the LB method rapidly increases, whereas for the SB method the sharpness stays relatively constant initially and then reduces, suggesting an exploration phase followed by convergence to a ï¬at minimizer.
# 4 DISCUSSION AND CONCLUSION | 1609.04836#26 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 27 | # 4.10 Which optimizer to use?
So, which optimizer should you use? If your input data is sparse, then you likely achieve the best results using one of the adaptive learning-rate methods. An additional benefit is that you will not need to tune the learning rate but will likely achieve the best results with the default value.
In summary, RMSprop is an extension of Adagrad that deals with its radically diminishing learning rates. It is identical to Adadelta, except that Adadelta uses the RMS of parameter updates in the numerator update rule. Adam, ï¬nally, adds bias-correction and momentum to RMSprop. Insofar, RMSprop, Adadelta, and Adam are very similar algorithms that do well in similar circumstances. Kingma et al. [10] show that its bias-correction helps Adam slightly outperform RMSprop towards the end of optimization as gradients become sparser. Insofar, Adam might be the best overall choice. | 1609.04747#27 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 27 | # 4 DISCUSSION AND CONCLUSION
In this paper, we present numerical experiments that support the view that convergence to sharp minimizers gives rise to the poor generalization of large-batch methods for deep learning. To this end, we provide one-dimensional parametric plots and perturbation (sharpness) measures for a vari- ety of deep learning architectures. In Appendix E, we describe our attempts to remedy the problem, including data augmentation, conservative training and robust optimization. Our preliminary inves- tigation suggests that these strategies do not correct the problem; they improve the generalization of large-batch methods but still lead to relatively sharp minima. Another prospective remedy includes the use of dynamic sampling where the batch size is increased gradually as the iteration progresses (Byrd et al., 2012; Friedlander & Schmidt, 2012). The potential viability of this approach is sug- gested by our warm-starting experiments (see Figure 5) wherein high testing accuracy is achieved using a large-batch method that is warm-start with a small-batch method. | 1609.04836#27 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 28 | Interestingly, many recent papers use vanilla SGD without momentum and a simple learning rate annealing schedule. As has been shown, SGD usually manages to find a minimum, but it might take significantly longer than with some of the optimizers, is much more reliant on a robust initialization and annealing schedule, and may get stuck in saddle points rather than local minima. Consequently, if you care about fast convergence and train a deep or complex neural network, you should choose one of the adaptive learning rate methods.
13Also have a look at http://cs231n.github.io/neural-networks-3/ for a description of the same images by Karpathy and another concise overview of the algorithms discussed.
10
# 5 Parallelizing and distributing SGD | 1609.04747#28 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
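The recommendation in the chunk above (prefer an adaptive method when fast convergence on a deep or complex network matters) amounts to a one-line change in most frameworks. A minimal PyTorch sketch, with an arbitrary toy model and illustrative hyperparameters:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# Vanilla SGD plus a simple step-decay annealing schedule.
sgd = torch.optim.SGD(model.parameters(), lr=0.1)
annealer = torch.optim.lr_scheduler.StepLR(sgd, step_size=10, gamma=0.5)

# Adaptive alternative: Adam maintains per-parameter learning rates itself.
adam = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
adam.step()          # or: sgd.step(), with annealer.step() once per epoch
```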
1609.04836 | 28 | Recently, a number of researchers have described interesting theoretical properties of the loss surface of deep neural networks; see e.g. (Choromanska et al., 2015; Soudry & Carmon, 2016; Lee et al., 2016). Their work shows that, under certain regularity assumptions, the loss function of deep learning models is fraught with many local minimizers and that many of these minimizers correspond to a similar loss function value. Our results are in alignment with these observations since, in our experiments, both sharp and flat minimizers have very similar loss function values. We do not know, however, if the theoretical models mentioned above provide information about the existence and density of sharp minimizers of the loss surface.
Our results suggest some questions: (a) can one prove that large-batch (LB) methods typically converge to sharp minimizers of deep learning training functions? (In this paper, we only provided some numerical evidence.); (b) what is the relative density of the two kinds of minima?; (c) can one design neural network architectures for various tasks that are suitable to the properties of LB methods?; (d) can the networks be initialized in a way that enables LB methods to succeed?; (e) is it possible, through algorithmic or regulatory means, to steer LB methods away from sharp minimizers?
| 1609.04836#28 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 29 | # 5 Parallelizing and distributing SGD
Given the ubiquity of large-scale data solutions and the availability of low-commodity clusters, distributing SGD to speed it up further is an obvious choice. SGD by itself is inherently sequential: Step-by-step, we progress further towards the minimum. Running it provides good convergence but can be slow particularly on large datasets. In contrast, running SGD asynchronously is faster, but suboptimal communication between workers can lead to poor convergence. Additionally, we can also parallelize SGD on one machine without the need for a large computing cluster. The following are algorithms and architectures that have been proposed to optimize parallelized and distributed SGD.
# 5.1 Hogwild!
Niu et al. [15] introduce an update scheme called Hogwild! that allows performing SGD updates in parallel on CPUs. Processors are allowed to access shared memory without locking the parameters. This only works if the input data is sparse, as each update will only modify a fraction of all parameters. They show that in this case, the update scheme achieves almost an optimal rate of convergence, as it is unlikely that processors will overwrite useful information.
# 5.2 Downpour SGD | 1609.04747#29 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
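A toy sketch of the lock-free Hogwild! idea described in the chunk above: several workers apply sparse SGD updates to one shared parameter vector without any locking. The Python threads and the synthetic sparse regression task are illustrative only; the original scheme targets multi-core CPUs with truly concurrent writes.

```python
import numpy as np
from threading import Thread

# Shared parameter vector, updated without any locking (the essence of Hogwild!).
w = np.zeros(1000)

def sgd_worker(data, labels, lr=0.01, epochs=5):
    # Each worker reads and writes the shared w directly; because each sparse
    # example touches only a few coordinates, collisions are rare.
    for _ in range(epochs):
        for idx, y in zip(data, labels):      # idx: active feature indices
            pred = w[idx].sum()               # sparse dot product with 1-valued features
            grad = 2.0 * (pred - y)           # squared-error gradient
            w[idx] -= lr * grad               # lock-free update of a few coordinates

rng = np.random.default_rng(0)
shards = []
for _ in range(4):
    data = [rng.choice(1000, size=5, replace=False) for _ in range(200)]
    labels = [float(len(idx)) for idx in data]   # toy regression target
    shards.append((data, labels))

threads = [Thread(target=sgd_worker, args=shard) for shard in shards]
for t in threads: t.start()
for t in threads: t.join()
print("trained; ||w|| =", np.linalg.norm(w))
```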
1609.04836 | 29 | # REFERENCES
Yoshua Bengio, Ian Goodfellow, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016. URL http://www.deeplearningbook.org.
Dimitris Bertsimas, Omid Nohadani, and Kwong Meng Teo. Robust optimization for unconstrained simulation-based problems. Operations Research, 58(1):161–178, 2010.
Léon Bottou. Online learning and stochastic approximations. On-line learning in neural networks, 17(9):142, 1998.
Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208, 1995.
Richard H Byrd, Gillian M Chin, Jorge Nocedal, and Yuchen Wu. Sample size selection in optimization methods for machine learning. Mathematical programming, 134(1):127–155, 2012. | 1609.04836#29 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 30 | # 5.2 Downpour SGD
Downpour SGD is an asynchronous variant of SGD that was used by Dean et al. [6] in their DistBelief framework (the predecessor to TensorFlow) at Google. It runs multiple replicas of a model in parallel on subsets of the training data. These models send their updates to a parameter server, which is split across many machines. Each machine is responsible for storing and updating a fraction of the model's parameters. However, as replicas don't communicate with each other e.g. by sharing weights or updates, their parameters are continuously at risk of diverging, hindering convergence.
# 5.3 Delay-tolerant Algorithms for SGD
McMahan and Streeter [12] extend AdaGrad to the parallel setting by developing delay-tolerant algorithms that not only adapt to past gradients, but also to the update delays. This has been shown to work well in practice.
# 5.4 TensorFlow | 1609.04747#30 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
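To make the Downpour SGD description above concrete, here is a single-process toy sketch of the pattern: replicas pull parameters from a sharded server, compute a gradient on their own data, and push possibly stale updates back without coordinating with each other. Class and function names are made up; a real deployment distributes the shards and replicas over many machines.

```python
import numpy as np

class ParameterServer:
    """Toy in-process stand-in for a sharded parameter server (assumption)."""
    def __init__(self, dim, n_shards=4, lr=0.05):
        self.shards = np.array_split(np.zeros(dim), n_shards)
        self.lr = lr

    def pull(self):
        return np.concatenate(self.shards)

    def push(self, grad):
        # Apply the (possibly stale) gradient shard by shard.
        offset = 0
        for i, shard in enumerate(self.shards):
            g = grad[offset:offset + shard.size]
            self.shards[i] = shard - self.lr * g
            offset += shard.size

def replica_step(server, X, y):
    w = server.pull()                  # fetch current parameters
    grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient on this data subset
    server.push(grad)                  # send update back; no replica-to-replica sync

rng = np.random.default_rng(0)
w_true = rng.normal(size=20)
server = ParameterServer(dim=20)
for step in range(200):
    X = rng.normal(size=(32, 20))      # each "replica" sees its own mini-batch
    replica_step(server, X, X @ w_true)
print("error:", np.linalg.norm(server.pull() - w_true))
```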
1609.04836 | 30 | Pratik Chaudhari, Anna Choromanska, Stefano Soatto, and Yann LeCun. Entropy-sgd: Biasing gradient descent into wide valleys. arXiv preprint arXiv:1611.01838, 2016.
Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015.
Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidynathan, Srinivas Sridharan, Dhiraj Kalamkar, Bharat Kaul, and Pradeep Dubey. Distributed deep learning using synchronous stochastic gradient descent. arXiv preprint arXiv:1602.06709, 2016.
Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in neural information processing systems, pp. 1223–1231, 2012. | 1609.04836#30 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 31 | # 5.4 TensorFlow
TensorFlow14 [1] is Google's recently open-sourced framework for the implementation and deployment of large-scale machine learning models. It is based on their experience with DistBelief and is already used internally to perform computations on a large range of mobile devices as well as on large-scale distributed systems. The distributed version, which was released in April 2016,15 relies on a computation graph that is split into a subgraph for every device, while communication takes place using Send/Receive node pairs.
# 5.5 Elastic Averaging SGD
Zhang et al. [23] propose Elastic Averaging SGD (EASGD), which links the parameters of the workers of asynchronous SGD with an elastic force, i.e. a center variable stored by the parameter server. This allows the local variables to fluctuate further from the center variable, which in theory allows for more exploration of the parameter space. They show empirically that this increased capacity for exploration leads to improved performance by finding new local optima.
# 6 Additional strategies for optimizing SGD
Finally, we introduce additional strategies that can be used alongside any of the previously mentioned algorithms to further improve the performance of SGD. For a great overview of some other common tricks, refer to [11]. | 1609.04747#31 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
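A small numerical sketch of the elastic-averaging idea described above: each worker follows its local gradient plus an elastic pull toward a center variable, and the center drifts toward the workers. The update rules are a simplified paraphrase of EASGD [23] applied to a toy least-squares problem; the step size and elastic strength are arbitrary.

```python
import numpy as np

def grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
dim, n_workers = 10, 4
w_true = rng.normal(size=dim)

center = np.zeros(dim)                  # center variable kept by the parameter server
workers = [np.zeros(dim) for _ in range(n_workers)]
eta, rho = 0.05, 0.1                    # step size and elastic-force strength

for step in range(500):
    for i in range(n_workers):
        X = rng.normal(size=(16, dim))  # each worker's local mini-batch
        g = grad(workers[i], X, X @ w_true)
        # Worker update: local gradient plus an elastic pull toward the center.
        workers[i] -= eta * (g + rho * (workers[i] - center))
    # Center update: moves toward the average of the workers.
    center += eta * rho * sum(w - center for w in workers)

print("center error:", np.linalg.norm(center - w_true))
```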
1609.04836 | 31 | J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
Michael Charles Ferris. Weak sharp minima and penalty functions in mathematical programming. PhD thesis, University of Cambridge, 1988.
Michael P Friedlander and Mark Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM Journal on Scientific Computing, 34(3):A1380–A1405, 2012.
John S Garofolo, Lori F Lamel, William M Fisher, Jonathan G Fiscus, David S Pallett, Nancy L Dahlgren, and Victor Zue. Timit acoustic-phonetic continuous speech corpus. Linguistic data consortium, Philadelphia, 33, 1993.
Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points - online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pp. 797–842, 2015.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014a. | 1609.04836#31 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 32 | 14 https://www.tensorflow.org/ 15 http://googleresearch.blogspot.ie/2016/04/announcing-tensorflow-08-now-with.html
# 6.1 Shuffling and Curriculum Learning
Generally, we want to avoid providing the training examples in a meaningful order to our model as this may bias the optimization algorithm. Consequently, it is often a good idea to shuffle the training data after every epoch.
On the other hand, for some cases where we aim to solve progressively harder problems, supplying the training examples in a meaningful order may actually lead to improved performance and better convergence. The method for establishing this meaningful order is called Curriculum Learning [3].
Zaremba and Sutskever [21] were only able to train LSTMs to evaluate simple programs using Curriculum Learning and show that a combined or mixed strategy is better than the naive one, which sorts examples by increasing difficulty.
# 6.2 Batch normalization
To facilitate learning, we typically normalize the initial values of our parameters by initializing them with zero mean and unit variance. As training progresses and we update parameters to different extents, we lose this normalization, which slows down training and amplifies changes as the network becomes deeper. | 1609.04747#32 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
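A sketch of the data-ordering strategies from Section 6.1 above: plain per-epoch shuffling versus a naive easiest-first curriculum and a mixed strategy. The difficulty score and the fraction of random swaps are invented for illustration; [3] and [21] define their own curricula.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)
difficulty = np.abs(X).sum(axis=1)       # hypothetical per-example difficulty score

def epoch_order(mode):
    if mode == "shuffle":
        return rng.permutation(len(X))    # fresh random order every epoch
    order = np.argsort(difficulty)        # naive curriculum: easiest to hardest
    if mode == "combined":
        # Mixed strategy: mostly sorted, with a fraction of random swaps.
        n = len(order) // 10
        a, b = rng.integers(0, len(order), n), rng.integers(0, len(order), n)
        order[a], order[b] = order[b].copy(), order[a].copy()
    return order

for epoch in range(3):
    for i in epoch_order("combined"):
        x_i, y_i = X[i], y[i]             # examples would be fed to the model here
```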
1609.04836 | 32 | Ian J Goodfellow, Oriol Vinyals, and Andrew M Saxe. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544, 2014b.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pp. 6645–6649. IEEE, 2013.
M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.
Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Nitish Shirish Keskar and Albert S. Berahas. adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs, pp. 1–16. Springer International Publishing, Cham, 2016. | 1609.04836#32 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 33 | Batch normalization [9] reestablishes these normalizations for every mini-batch and changes are backpropagated through the operation as well. By making normalization part of the model architecture, we are able to use higher learning rates and pay less attention to the initialization parameters. Batch normalization additionally acts as a regularizer, reducing (and sometimes even eliminating) the need for Dropout.
# 6.3 Early stopping
According to Geoff Hinton: "Early stopping (is) beautiful free lunch"16. You should thus always monitor error on a validation set during training and stop (with some patience) if your validation error does not improve enough.
# 6.4 Gradient noise
Neelakantan et al. [13] add noise that follows a Gaussian distribution $N(0, \sigma^2_t)$ to each gradient update:
$g_{t,i} = g_{t,i} + N(0, \sigma^2_t)$ (34)
They anneal the variance according to the following schedule:
$\sigma^2_t = \frac{\eta}{(1 + t)^\gamma}$ (35)
They show that adding this noise makes networks more robust to poor initialization and helps training particularly deep and complex networks. They suspect that the added noise gives the model more chances to escape and find new local minima, which are more frequent for deeper models.
# 7 Conclusion | 1609.04747#33 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
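The gradient-noise scheme in Eqs. (34)-(35) above is only a few lines of code: draw zero-mean Gaussian noise with a variance that decays with the iteration counter and add it to the gradient before each update. A minimal sketch; the eta and gamma defaults are illustrative, not prescriptions.

```python
import numpy as np

def noisy_gradient(grad, t, eta=0.3, gamma=0.55):
    # Eqs. (34)-(35): sigma_t^2 = eta / (1 + t)^gamma, noise ~ N(0, sigma_t^2).
    sigma2 = eta / (1.0 + t) ** gamma
    return grad + np.random.normal(0.0, np.sqrt(sigma2), size=grad.shape)

# Usage inside a plain SGD loop on a toy quadratic objective f(w) = ||w||^2.
w = 5.0 * np.ones(5)
lr = 0.1
for t in range(200):
    w -= lr * noisy_gradient(2.0 * w, t)   # 2w is the exact gradient of ||w||^2
print(np.round(w, 3))
```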
1609.04836 | 33 | D. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR 2015), 2015.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998a.
Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits, 1998b.
Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural networks: Tricks of the trade, pp. 9–48. Springer, 2012.
Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent converges to minimizers. University of California, Berkeley, 1050:16, 2016. | 1609.04836#33 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 34 | # 7 Conclusion
In this article, we have initially looked at the three variants of gradient descent, among which mini-batch gradient descent is the most popular. We have then investigated algorithms that are most commonly used for optimizing SGD: Momentum, Nesterov accelerated gradient, Adagrad, Adadelta, RMSprop, Adam, AdaMax, Nadam, as well as different algorithms to optimize asynchronous SGD. Finally, we've considered other strategies to improve SGD such as shuffling and curriculum learning, batch normalization, and early stopping.
16 NIPS 2015 Tutorial slides, slide 63, http://www.iro.umontreal.ca/~bengioy/talks/DL-Tutorial-NIPS2015.pdf
# References | 1609.04747#34 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 34 | Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent converges to minimizers. University of California, Berkeley, 1050:16, 2016.
Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J Smola. Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 661–670. ACM, 2014.
David JC MacKay. A practical bayesian framework for backpropagation networks. Neural computation, 4(3):448–472, 1992.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
Hossein Mobahi. Training recurrent neural networks by diffusion. arXiv preprint arXiv:1601.04114, 2016. | 1609.04836#34 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 35 | # References
[1] Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Jon Shlens, Benoit Steiner, Ilya Sutskever, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Oriol Vinyals, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. 2015.
[2] Yoshua Bengio, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. Advances in Optimizing Recurrent Networks. 2012.
[3] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. Proceedings of the 26th annual international conference on machine learning, pages 41–48, 2009. | 1609.04747#35 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 35 | Hossein Mobahi. Training recurrent neural networks by diffusion. arXiv preprint arXiv:1601.04114, 2016.
Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. The kaldi speech recognition toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding, number EPFL-CONF-192584. IEEE Signal Processing Society, 2011.
Jorma Rissanen. A universal prior for integers and estimation by minimum description length. The Annals of statistics, pp. 416–431, 1983.
Uri Shaham, Yutaro Yamada, and Sahand Negahban. Understanding adversarial training: Increasing local stability of neural nets through robust optimization. arXiv preprint arXiv:1511.05432, 2015.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016. | 1609.04836#35 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 36 | [4] C. Darken, J. Chang, and J. Moody. Learning rate schedules for faster stochastic gradient search. Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop, (September):1–11, 1992.
[5] Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. arXiv, pages 1–14, 2014.
[6] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large Scale Distributed Deep Networks. NIPS 2012: Neural Information Processing Systems, pages 1–11, 2012.
[7] Timothy Dozat. Incorporating Nesterov Momentum into Adam. ICLR Workshop, (1):2013–2016, 2016.
[8] John Duchi, Elad Hazan, and Yoram Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121–2159, 2011. | 1609.04747#36 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 36 | Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), pp. 1139–1147, 2013.
Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging sgd. In Advances in Neural Information Processing Systems, pp. 685–693, 2015.
Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the robustness of deep neural networks via stability training. arXiv preprint arXiv:1604.04326, 2016.
# A DETAILS ABOUT DATA SETS | 1609.04836#36 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04747 | 37 | [9] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv preprint arXiv:1502.03167v3, 2015.
[10] Diederik P. Kingma and Jimmy Lei Ba. Adam: a Method for Stochastic Optimization. International Conference on Learning Representations, pages 1–13, 2015.
[11] Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient BackProp. Neural Networks: Tricks of the Trade, 1524:9–50, 1998.
[12] H. Brendan McMahan and Matthew Streeter. Delay-Tolerant Algorithms for Asynchronous Distributed Online Learning. Advances in Neural Information Processing Systems (Proceedings of NIPS), pages 1–9, 2014.
[13] Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding Gradient Noise Improves Learning for Very Deep Networks. pages 1–11, 2015. | 1609.04747#37 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 37 | # A DETAILS ABOUT DATA SETS
We summarize the data sets used in our experiments in Table 5. TIMIT is a speech recognition data set which is pre-processed using Kaldi (Povey et al., 2011) and trained using a fully-connected network. The rest of the data sets are used without any pre-processing.
Table 5: Data Sets
MNIST: 60000 train / 10000 test, 28 × 28 features, 10 classes (LeCun et al., 1998a;b)
TIMIT: 721329 train / 310621 test, 360 features, 1973 classes (Garofolo et al., 1993)
CIFAR-10: 50000 train / 10000 test, 32 × 32 features, 10 classes (Krizhevsky & Hinton, 2009)
CIFAR-100: 50000 train / 10000 test, 32 × 32 features, 100 classes (Krizhevsky & Hinton, 2009)
B ARCHITECTURE OF NETWORKS
B.1 NETWORK F1
For this network, we use a 784-dimensional input layer followed by 5 batch-normalized (Ioffe & Szegedy, 2015) layers of 512 neurons each with ReLU activations. The output layer consists of 10 neurons with the softmax activation.
B.2 NETWORK F2 | 1609.04836#37 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
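A sketch of the fully connected F1 configuration described in Appendix B.1 above (784-dimensional input, five batch-normalized layers of 512 ReLU units, 10-way softmax output), written in PyTorch. The Linear -> BatchNorm -> ReLU ordering and the use of a cross-entropy loss in place of an explicit softmax layer are assumptions; the appendix does not pin these details down.

```python
import torch
from torch import nn

def make_f1():
    layers, in_dim = [], 784
    for _ in range(5):
        layers += [nn.Linear(in_dim, 512), nn.BatchNorm1d(512), nn.ReLU()]
        in_dim = 512
    layers += [nn.Linear(in_dim, 10)]   # softmax is folded into the loss below
    return nn.Sequential(*layers)

model = make_f1()
criterion = nn.CrossEntropyLoss()        # applies log-softmax internally
x = torch.randn(256, 784)                # a mini-batch of flattened MNIST-sized inputs
loss = criterion(model(x), torch.randint(0, 10, (256,)))
loss.backward()
```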
1609.04747 | 38 | [14] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence o(1/k2). Doklady ANSSSR (translated as Soviet.Math.Docl.), 269:543–547.
[15] Feng Niu, Benjamin Recht, Christopher Ré, and Stephen J Wright. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. pages 1–22, 2011.
[16] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, 2014.
[17] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural networks : the official journal of the International Neural Network Society, 12(1):145–151, 1999.
[18] Herbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
[19] Ilya Sutskever. Training Recurrent neural Networks. PhD thesis, page 101, 2013.
[20] Richard S. Sutton. Two problems with backpropagation and other steepest-descent learning procedures for networks, 1986. | 1609.04747#38 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |
1609.04836 | 38 | B.2 NETWORK F2
The network architecture for F2 is similar to F1. We use a 360-dimensional input layer followed by 7 batch-normalized layers of 512 neurons with ReLU activation. The output layer consists of 1973 neurons with the softmax activation.
B.3 NETWORKS C1 AND C3
The C1 network is a modified version of the popular AlexNet configuration (Krizhevsky et al., 2012). For simplicity, denote a stack of n convolution layers of a filters and a kernel size of b × c with stride length of d as n×[a, b, c, d]. The C1 configuration uses 2 sets of [64, 5, 5, 2]-MaxPool(3) followed by 2 dense layers of sizes (384, 192) and finally, an output layer of size 10. We use batch-normalization for all layers and ReLU activations. We also use Dropout (Srivastava et al., 2014) of 0.5 retention probability for the two dense layers. The configuration C3 is identical to C1 except it uses 100 softmax outputs instead of 10.
B.4 NETWORKS C2 AND C4 | 1609.04836#38 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
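A PyTorch sketch of the C1 configuration described above: two [64, 5, 5, 2]-MaxPool(3) blocks, dense layers of 384 and 192 units with dropout 0.5, and a 10-way output (100 for the C3 variant). The padding values and the pooling stride are assumptions made so that a 32 × 32 CIFAR image survives the downsampling; nn.LazyLinear avoids hard-coding the flattened size.

```python
import torch
from torch import nn

def conv_block(in_ch):
    # One [64, 5, 5, 2] stack followed by MaxPool(3); padding/stride of the pool are assumed.
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, kernel_size=5, stride=2, padding=2),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
    )

model = nn.Sequential(
    conv_block(3),
    conv_block(64),
    nn.Flatten(),
    nn.LazyLinear(384), nn.BatchNorm1d(384), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(384, 192), nn.BatchNorm1d(192), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(192, 10),                  # use 100 outputs for the C3 variant
)

logits = model(torch.randn(8, 3, 32, 32))   # CIFAR-10-shaped toy batch
print(logits.shape)                          # torch.Size([8, 10])
```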
1609.04747 | 39 | [20] Richard S. Sutton. Two problems with backpropagation and other steepest-descent learning procedures for networks, 1986.
[21] Wojciech Zaremba and Ilya Sutskever. Learning to Execute. pages 1–25, 2014.
[22] Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. arXiv preprint arXiv:1212.5701, 2012.
[23] Sixin Zhang, Anna Choromanska, and Yann LeCun. Deep learning with Elastic Averaging SGD. Neural Information Processing Systems Conference (NIPS 2015), pages 1–24, 2015.
| 1609.04747#39 | An overview of gradient descent optimization algorithms | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent. | http://arxiv.org/pdf/1609.04747 | Sebastian Ruder | cs.LG | Added derivations of AdaMax and Nadam | null | cs.LG | 20160915 | 20170615 | [
{
"id": "1502.03167"
}
] |