doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable ⌀) | journal_ref (string, 8–194 chars, nullable ⌀) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1706.03762 | 37 | [36] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
[37] Vinyals & Kaiser, Koo, Petrov, Sutskever, and Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems, 2015.
[38] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[39] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016.
[40] Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. Fast and accurate shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the ACL (Volume 1: Long Papers), pages 434–443. ACL, August 2013.
12 | 1706.03762#37 | Attention Is All You Need | The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks in an encoder-decoder configuration. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer, based
solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to be
superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014
English-to-German translation task, improving over the existing best results,
including ensembles by over 2 BLEU. On the WMT 2014 English-to-French
translation task, our model establishes a new single-model state-of-the-art
BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction
of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to
English constituency parsing both with large and limited training data. | http://arxiv.org/pdf/1706.03762 | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin | cs.CL, cs.LG | 15 pages, 5 figures | null | cs.CL | 20170612 | 20230802 | [
{
"id": "1601.06733"
},
{
"id": "1508.07909"
},
{
"id": "1602.02410"
},
{
"id": "1703.03130"
},
{
"id": "1511.06114"
},
{
"id": "1610.10099"
},
{
"id": "1508.04025"
},
{
"id": "1705.04304"
},
{
"id": "1608.05859"
},
{
"id": "1701.06538"
},
{
"id": "1609.08144"
},
{
"id": "1607.06450"
},
{
"id": "1705.03122"
},
{
"id": "1610.02357"
},
{
"id": "1703.10722"
}
] |
1706.03872 | 37 | [Figure residue: BLEU vs. beam size curves for Russian–English and English–Russian, unnormalized vs. normalized scores; numeric tick values omitted.] | 1706.03872#37 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03762 | 38 | # Attention Visualizations
[Figure residue: encoder self-attention alignment; garbled token columns omitted.] | 1706.03762#38 | Attention Is All You Need | The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks in an encoder-decoder configuration. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer, based
solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to be
superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014
English-to-German translation task, improving over the existing best results,
including ensembles by over 2 BLEU. On the WMT 2014 English-to-French
translation task, our model establishes a new single-model state-of-the-art
BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction
of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to
English constituency parsing both with large and limited training data. | http://arxiv.org/pdf/1706.03762 | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin | cs.CL, cs.LG | 15 pages, 5 figures | null | cs.CL | 20170612 | 20230802 | [
{
"id": "1601.06733"
},
{
"id": "1508.07909"
},
{
"id": "1602.02410"
},
{
"id": "1703.03130"
},
{
"id": "1511.06114"
},
{
"id": "1610.10099"
},
{
"id": "1508.04025"
},
{
"id": "1705.04304"
},
{
"id": "1608.05859"
},
{
"id": "1701.06538"
},
{
"id": "1609.08144"
},
{
"id": "1607.06450"
},
{
"id": "1705.03122"
},
{
"id": "1610.02357"
},
{
"id": "1703.10722"
}
] |
1706.03872 | 38 | [Figure residue: BLEU vs. beam size curves for English–Russian, unnormalized vs. normalized scores; numeric tick values omitted.]
Figure 10: Translation quality with varying beam sizes. For large beams, quality decreases, especially when not normalizing scores by sentence length.
However, as Figure 10 illustrates, increasing the beam size does not consistently improve translation quality. In fact, in almost all cases, worse translations are found beyond an optimal beam size setting (we are using again Edinburgh's WMT 2016 systems). The optimal beam size varies from 4 (e.g., Czech–English) to around 30 (English–Romanian).
Normalizing sentence level model scores by length of the output alleviates the problem somewhat and also leads to better optimal quality in most cases (5 of the 8 language pairs investigated). Optimal beam sizes are in the range of 30–50 in almost all cases, but quality still drops with larger beams. The main cause of deteriorating quality is shorter translations under wider beams. | 1706.03872#38 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
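The length-normalization fix discussed in the chunk above can be sketched in a few lines. This is a toy illustration, not code from the paper; the `hyps` dictionary and its log-probabilities are invented values chosen so the effect is visible.

```python
# Sketch of length-normalized beam-search rescoring: dividing a hypothesis's
# summed log-probability by its token count removes the bias toward short
# outputs that plain sum-of-log-probs scoring has.
def rank_hypotheses(hyps, normalize=True):
    """Return (tokens, logprob) pairs, best hypothesis first."""
    def score(item):
        tokens, logprob = item
        return logprob / len(tokens) if normalize else logprob
    return sorted(hyps.items(), key=score, reverse=True)

hyps = {
    ("the", "cat", "sat", "on", "the", "mat"): -6.0,  # avg logprob -1.0
    ("the", "cat"): -2.4,                             # avg logprob -1.2
}
# Unnormalized scoring prefers the short hypothesis; normalization flips it.
best_norm, _ = rank_hypotheses(hyps, normalize=True)[0]
best_raw, _ = rank_hypotheses(hyps, normalize=False)[0]
```

With normalization the six-token hypothesis wins (average log-probability -1.0 vs. -1.2); without it, the two-token one does, mirroring the short-translation bias the paper describes.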
1706.03762 | 39 | Figure 3: An example of the attention mechanism following long-distance dependencies in the encoder self-attention in layer 5 of 6. Many of the attention heads attend to a distant dependency of the verb "making", completing the phrase "making...more difficult". Attentions here shown only for the word "making". Different colors represent different heads. Best viewed in color.
[Figure residue: decoder attention alignment; garbled token columns omitted.] | 1706.03762#39 | Attention Is All You Need | The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks in an encoder-decoder configuration. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer, based
solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to be
superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014
English-to-German translation task, improving over the existing best results,
including ensembles by over 2 BLEU. On the WMT 2014 English-to-French
translation task, our model establishes a new single-model state-of-the-art
BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction
of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to
English constituency parsing both with large and limited training data. | http://arxiv.org/pdf/1706.03762 | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin | cs.CL, cs.LG | 15 pages, 5 figures | null | cs.CL | 20170612 | 20230802 | [
{
"id": "1601.06733"
},
{
"id": "1508.07909"
},
{
"id": "1602.02410"
},
{
"id": "1703.03130"
},
{
"id": "1511.06114"
},
{
"id": "1610.10099"
},
{
"id": "1508.04025"
},
{
"id": "1705.04304"
},
{
"id": "1608.05859"
},
{
"id": "1701.06538"
},
{
"id": "1609.08144"
},
{
"id": "1607.06450"
},
{
"id": "1705.03122"
},
{
"id": "1610.02357"
},
{
"id": "1703.10722"
}
] |
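The visualization chunks above plot per-head attention weights. A minimal NumPy sketch of scaled dot-product attention, the quantity being depicted (softmax(QK^T/√d_k)V, as defined in the paper); the shapes and random inputs here are illustrative assumptions, not data from the paper:

```python
# Minimal sketch of scaled dot-product attention; the matrix `w` is the
# per-head weight pattern that attention visualizations display.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (seq_q, seq_k) logits
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((5, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
# Each row of `w` is one query position's distribution over key positions.
```

Each row of `w` sums to 1, so plotting it as a heatmap (one panel per head) yields figures like the ones these chunks describe.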
1706.03872 | 39 | # 4 Conclusions
We showed that, despite its recent successes, neural machine translation still has to overcome various challenges, most notably performance out-of-domain and under low resource conditions. We hope that this paper motivates research to address these challenges.
Linguistics, Austin, Texas, pages 1557–1567. https://aclweb.org/anthology/D16-1162.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. http://arxiv.org/pdf/1409.0473v6.pdf.
Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus phrase-based machine translation quality: a case study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 257–267. https://aclweb.org/anthology/D16-1025. | 1706.03872#39 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03762 | 40 | Figure 4: Two attention heads, also in layer 5 of 6, apparently involved in anaphora resolution. Top: Full attentions for head 5. Bottom: Isolated attentions from just the word "its" for attention heads 5 and 6. Note that the attentions are very sharp for this word.
[Figure residue: attention-head visualizations for the word "its"; garbled token columns omitted.] | 1706.03762#40 | Attention Is All You Need | The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks in an encoder-decoder configuration. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer, based
solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to be
superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014
English-to-German translation task, improving over the existing best results,
including ensembles by over 2 BLEU. On the WMT 2014 English-to-French
translation task, our model establishes a new single-model state-of-the-art
BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction
of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to
English constituency parsing both with large and limited training data. | http://arxiv.org/pdf/1706.03762 | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin | cs.CL, cs.LG | 15 pages, 5 figures | null | cs.CL | 20170612 | 20230802 | [
{
"id": "1601.06733"
},
{
"id": "1508.07909"
},
{
"id": "1602.02410"
},
{
"id": "1703.03130"
},
{
"id": "1511.06114"
},
{
"id": "1610.10099"
},
{
"id": "1508.04025"
},
{
"id": "1705.04304"
},
{
"id": "1608.05859"
},
{
"id": "1701.06538"
},
{
"id": "1609.08144"
},
{
"id": "1607.06450"
},
{
"id": "1705.03122"
},
{
"id": "1610.02357"
},
{
"id": "1703.10722"
}
] |
1706.03872 | 40 | Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation. Association for Computational Linguistics, Berlin, Germany, pages 131–198. http://www.aclweb.org/anthology/W/W16/W16-2301.
Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. CoRR abs/1607.01628. | 1706.03872#40 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 41 | Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. CoRR abs/1607.01628. http://arxiv.org/abs/1607.01628.
What a lot of the problems have in common is that the neural translation models do not show robust behavior when confronted with conditions that differ significantly from training conditions, whether due to limited exposure to training data, unusual input in the case of out-of-domain test sentences, or unlikely initial word choices in beam search. The solution to these problems may hence lie in a more general approach to training that steps outside optimizing single word predictions given perfectly matching prior sequences.
David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics 33(2). http://www.aclweb.org/anthology-new/J/J07/J07-2003.pdf.
Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. | 1706.03872#41 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 42 | Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Association for Computational Linguistics, Doha, Qatar, pages 103–111. http://www.aclweb.org/anthology/W14-4012.
# Acknowledgment
This work was partially supported by an Amazon Research Award (to the first author) and a National Science Foundation Graduate Research Fellowship under Grant No. DGE-1232825 (to the second author). | 1706.03872#42 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 43 | Josep Maria Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Jean Lorieux, Byeongil Ko, Catherine Kobus, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, and Peter Zoldan. 2016. Systran's pure neural machine translation systems. CoRR http://arxiv.org/abs/1610.05540.
# References | 1706.03872#43 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 44 | # References
Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Shuoyang Ding, Kevin Duh, Huda Khayrallah, Philipp Koehn, and Matt Post. 2016a. The JHU machine translation systems for WMT 2016. In Proceedings of the First Conference on Machine Translation. Association for Computational Linguistics, Berlin, Germany, pages 272–280. http://www.aclweb.org/anthology/W/W16/W16-2310.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 1700–1709. http://www.aclweb.org/anthology/D13-1176.
Philipp Koehn and Barry Haddow. 2012. Interpolated backoff for factored translation models. In Proceed- ings of the Tenth Conference of the Association for Machine Translation in the Americas (AMTA).
Shuoyang Ding, Kevin Duh, Huda Khayrallah, Philipp Koehn, and Matt Post. 2016b. The JHU machine translation systems for WMT 2016. In Proceedings of the First Conference on Machine Translation (WMT). | 1706.03872#44 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 45 | Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Christopher J. Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions. Association for Computational Linguistics, Prague, Czech Republic, pages 177–180. http://www.aclweb.org/anthology/P/P07/P07-2045.
Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Atlanta, Georgia, pages 644–648. http://www.aclweb.org/anthology/N13-1073. | 1706.03872#45 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 46 | Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 3093–3102. http://aclweb.org/anthology/C16-1291.
Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. arXiv preprint arXiv:1612.06897 .
Michel Galley, Jonathan Graehl, Kevin Knight, and Wei Wang. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Sydney, Australia, pages 961–968. http://www.aclweb.org/anthology/P/P06/P06-1121.
Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domains. In Proceedings of the International Workshop on Spoken Language Translation. | 1706.03872#46 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 47 | Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 11–19. http://www.aclweb.org/anthology/P15-1002.
Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL). http://www.aclweb.org/anthology/N04-1035.pdf.
Ann Irvine and Chris Callison-Burch. 2013.
Jean Pouget-Abadie, Dzmitry Bahdanau, and KyungHyun Cho. 2014. | 1706.03872#47 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 48 | Ann Irvine and Chris Callison-Burch. 2013. Combining bilingual and comparable corpora for low resource machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation. Association for Computational Linguistics, Sofia, Bulgaria, pages 262–270. http://www.aclweb.org/anthology/W13-2233.
Jean Pouget-Abadie, Dzmitry Bahdanau, Bart van Merrienboer, KyungHyun Cho, and Yoshua Bengio. 2014. Overcoming the curse of sentence length for neural machine translation using automatic segmentation. CoRR abs/1409.1257. http://arxiv.org/abs/1409.1257.
Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? A case study on 30 translation directions. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT). http://workshop2016.iwslt.org/downloads/IWSLT 2016 paper 4.pdf.
Rico Sennrich, Orhan Firat, Kyunghyun Cho, Julian Samuel Barone, and Maria Nadejde. 2017. | 1706.03872#48 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 51 | Linguistics, Valencia, Spain, pages 65–68. http://aclweb.org/anthology/E17-3017.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation (WMT). Association for Computational Linguistics, Berlin, Germany, pages 371–376. http://www.aclweb.org/anthology/W/W16/W16-2323.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1715–1725. http://www.aclweb.org/anthology/P16-1162.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces | 1706.03872#51 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 52 | in OPUS. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Ugur Dogan, Bente Maegaard, Joseph Mariani, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12). European Language Resources Association (ELRA), Istanbul, Turkey.
Antonio Toral and Víctor M. Sánchez-Cartagena. 2017. A multifaceted evaluation of neural versus phrase-based machine translation for 9 language directions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 1063–1073. http://www.aclweb.org/anthology/E17-1100.
Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 76–85. | 1706.03872#52 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.03872 | 54 | Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR http://arxiv.org/abs/1609.08144.pdf.
Learning performance of a machine translation system: a statistical and computational analysis. In Proceedings of the Third Workshop on Statistical Machine Translation. Association for Computational Linguistics, Columbus, Ohio, pages 35–43. http://www.aclweb.org/anthology/W/W08/W08-0305.
Nadejde, Haddow, Edinburgh's statistical machine translation systems for WMT16. In Proceedings of the First Conference on Machine Translation. Association for Computational Linguistics, Berlin, Germany, pages 399–410. http://www.aclweb.org/anthology/W/W16/W16-2327. | 1706.03872#54 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 | [
{
"id": "1706.03872"
},
{
"id": "1612.06897"
}
] |
1706.02633 | 0 | arXiv:1706.02633v2 [stat.ML] 4 Dec 2017
# REAL-VALUED (MEDICAL) TIME SERIES GENERATION WITH RECURRENT CONDITIONAL GANS
Stephanie L. Hyland* ETH Zurich, Switzerland Tri-Institutional Training Program in Computational Biology and Medicine, Weill Cornell Medical [email protected]

Cristóbal Esteban* ETH Zurich, Switzerland [email protected] Gunnar Rätsch ETH Zurich, Switzerland [email protected]
# ABSTRACT | 1706.02633#0 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02515 | 1 | Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore, cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation functions of SNNs are "scaled exponential linear units" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows us to (1) train deep networks with many layers, (2) employ strong regularization schemes, and (3) make learning highly robust. Furthermore, for activations not close to unit | 1706.02515#1 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation functions of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows us to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
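The "scaled exponential linear unit" named in this row has a simple closed form, selu(x) = λx for x > 0 and λα(exp(x) − 1) for x ≤ 0. A minimal NumPy sketch of the self-normalizing effect the abstract describes — the constants are the ones the authors report, but the depth-20 toy network below is our own illustration, not the paper's code:

```python
import numpy as np

LAMBDA = 1.0507009873554805   # lambda reported in the SELU paper
ALPHA = 1.6732632423543772    # alpha reported in the SELU paper

def selu(x):
    # scaled exponential linear unit: lambda*x for x > 0, lambda*alpha*(e^x - 1) otherwise
    return LAMBDA * np.where(x > 0, x, ALPHA * np.expm1(x))

rng = np.random.default_rng(0)
n = 256
x = rng.normal(0.0, 1.0, size=(2000, n))   # inputs with zero mean, unit variance
for _ in range(20):                         # propagate through 20 dense layers
    w = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, n))  # E[w] = 0, Var[w] = 1/n
    x = selu(x @ w)

# activations stay near the (mean 0, variance 1) fixed point
print(float(x.mean()), float(x.var()))
```

The key design point mirrored here is the weight initialisation with variance 1/n, under which the paper's fixed-point argument applies; with other initialisations the statistics drift.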
1706.02633 | 1 | Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from 'serialised' MNIST and by training an early warning | 1706.02633#1 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
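The row above describes the RCGAN only at a high level: an RNN generator that, at each timestep, consumes a noise vector concatenated with conditioning information and emits one real-valued sample of the sequence. The sketch below is our own minimal illustration of that idea — a single-layer Elman-style RNN in NumPy with made-up dimensions and random weights standing in for trained ones; the paper itself uses LSTM units:

```python
import numpy as np

rng = np.random.default_rng(1)

def rcgan_generator(z, c, hidden_dim=32, out_dim=2):
    """Map a noise sequence z (T, z_dim) plus condition c (c_dim,) to a signal (T, out_dim)."""
    T, z_dim = z.shape
    in_dim = z_dim + c.shape[0]
    # randomly initialised parameters stand in for trained weights
    W_in = rng.normal(0, 0.1, (in_dim, hidden_dim))
    W_h = rng.normal(0, 0.1, (hidden_dim, hidden_dim))
    W_out = rng.normal(0, 0.1, (hidden_dim, out_dim))
    h = np.zeros(hidden_dim)
    outputs = []
    for t in range(T):
        x_t = np.concatenate([z[t], c])     # condition enters at every timestep
        h = np.tanh(x_t @ W_in + h @ W_h)   # recurrent state update
        outputs.append(np.tanh(h @ W_out))  # one real-valued sample per step
    return np.stack(outputs)

z = rng.normal(size=(16, 8))     # 16 timesteps of 8-dimensional noise
c = np.array([1.0, 0.0, 0.0])    # one-hot condition, e.g. a patient "state"
signal = rcgan_generator(z, c)
print(signal.shape)
```

Feeding the condition at every step, rather than only at t = 0, is what lets the generated sequence track the auxiliary information throughout its length.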
1706.02677 | 1 | Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a hyper-parameter-free linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using | 1706.02677#1 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
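Dividing an SGD minibatch over workers, as the abstract above describes, rests on a sum decomposition: the gradient of a loss averaged over kn examples equals the average of k per-worker gradients, each over n examples. A small NumPy check of that identity on a least-squares loss (our own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))   # one large minibatch of kn = 256 examples
y = rng.normal(size=256)
w = rng.normal(size=4)

def grad(Xb, yb, w):
    # gradient of the mean squared error 0.5 * mean((Xb @ w - yb)^2)
    return Xb.T @ (Xb @ w - yb) / len(yb)

g_large = grad(X, y, w)                                   # single-worker large-batch gradient
workers = [grad(X[i::8], y[i::8], w) for i in range(8)]   # k = 8 workers, n = 32 examples each
g_distributed = np.mean(workers, axis=0)                  # allreduce-style average

print(np.allclose(g_large, g_distributed))  # True: averaging per-worker gradients recovers the large-batch gradient
```

The equality holds exactly (up to floating point) only when workers hold equal-sized shards and average rather than sum — which is why per-worker loss normalisation matters in practice.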
1706.02515 | 2 | train deep networks with many layers, (2) employ strong regularization schemes, and (3) make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance; thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs, and other machine learning methods such as random forests and support vector machines. For FNNs we considered (i) ReLU networks without normalization, (ii) batch normalization, (iii) layer normalization, (iv) weight normalization, (v) highway networks, and (vi) residual networks. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs. | 1706.02515#2 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation functions of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows us to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 2 | minor degradation in performance on real test data. This is demonstrated on digit classification from 'serialised' MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN. | 1706.02633#2 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
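The summary above mentions evaluating generated sequences with maximum mean discrepancy (MMD). For reference, a minimal (biased, quadratic-time) estimator of squared MMD with an RBF kernel can be written as follows — our own sketch of the standard estimator with an arbitrary bandwidth, not the paper's code:

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimator of squared MMD between samples X (n, d) and Y (m, d)."""
    def k(A, B):
        # RBF kernel matrix exp(-||a - b||^2 / (2 sigma^2))
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(size=(200, 3)), rng.normal(size=(200, 3)))
shifted = mmd2_rbf(rng.normal(size=(200, 3)), rng.normal(3.0, 1.0, size=(200, 3)))
print(same, shifted)  # matching distributions give a much smaller value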
1706.02515 | 3 | Accepted for publication at NIPS 2017; please cite as: Klambauer, G., Unterthiner, T., Mayr, A., & Hochreiter, S. (2017). Self-Normalizing Neural Networks. Processing Systems (NIPS).
# Introduction
Deep Learning has set new records at different benchmarks and led to various commercial applications [25, 33]. Recurrent neural networks (RNNs) [18] achieved new levels at speech and natural language
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. | 1706.02515#3 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 3 | # INTRODUCTION
Access to data is one of the bottlenecks in the development of machine learning solutions to domain- speciï¬c problems. The availability of standard datasets (with associated tasks) has helped to advance the capabilities of learning systems in multiple tasks. However, progress appears to lag in other ï¬elds, such as medicine. It is tempting to suggest that tasks in medicine are simply harder - the data more complex, more noisy, the prediction problems less clearly deï¬ned. Regardless of this, the dearth of data accessible to researchers hinders model comparisons, reproducibility and ultimately scientiï¬c progress. However, due to the highly sensitive nature of medical data, its access is typically highly controlled, or require involved and likely imperfect de-identiï¬cation. The motivation for this work is therefore to exploit and develop the framework of generative adversarial networks (GANs) to generate realistic synthetic medical data. This data could be shared and published without privacy concerns, or even used to augment or enrich similar datasets collected in different or smaller cohorts of patients. Moreover, building a system capable of synthesizing realistic medical data implies modelling the processes that generates such information, and therefore it can represent the ï¬rst step towards developing a new approach for creating predictive systems in medical environments. | 1706.02633#3 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 3 | iS 3S oo a o i=) 25 ImageNet top-1 validation error 20 L L L L 64 128 256 512 1k 2k 4k 8k mini-batch size 16k 32k 64k
Figure 1. ImageNet top-1 validation error vs. minibatch size. Error range of plus/minus two standard deviations is shown. We present a simple and general technique for scaling distributed syn- chronous SGD to minibatches of up to 8k images while maintain- ing the top-1 error of small minibatch training. For all minibatch sizes we set the learning rate as a linear function of the minibatch size and apply a simple warmup phase for the ï¬rst few epochs of training. All other hyper-parameters are kept ï¬xed. Using this simple approach, accuracy of our models is invariant to minibatch size (up to an 8k minibatch size). Our techniques enable a lin- ear reduction in training time with â¼90% efï¬ciency as we scale to large minibatch sizes, allowing us to train an accurate 8k mini- batch ResNet-50 model in 1 hour on 256 GPUs.
# 1. Introduction | 1706.02677#3 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 4 | 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
processing, for example at the TIMIT benchmark [12] or at language translation [36], and are already employed in mobile devices [31]. RNNs have won handwriting recognition challenges (Chinese and Arabic handwriting) [33, 13, 6] and Kaggle challenges, such as the âGrasp-and Lift EEGâ competition. Their counterparts, convolutional neural networks (CNNs) [24] excel at vision and video tasks. CNNs are on par with human dermatologists at the visual detection of skin cancer [9]. The visual processing for self-driving cars is based on CNNs [19], as is the visual input to AlphaGo which has beaten one of the best human GO players [34]. At vision challenges, CNNs are constantly winning, for example at the large ImageNet competition [23, 16], but also almost all Kaggle vision challenges, such as the âDiabetic Retinopathyâ and the âRight Whaleâ challenges [8, 14]. | 1706.02515#4 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 4 | Beyond the utility to the machine learning research community, such a tool stands to beneï¬t the medical community for use in training simulators. In this work, we focus on synthesising real-valued
âAuthors contributed equally.
time-series data as from an Intensive Care Unit (ICU). In ICUs, doctors have to make snap decisions under time pressure, where they cannot afford to hesitate. It is already standard in medical training to use simulations to train doctors, but these simulations often rely on hand-engineered rules and physical props. Thus, a model capable of generating diverse and realistic ICU situations could have an immediate application, especially when given the ability to condition on underlying âstatesâ of the patient.
The success of GANs in generating realistic-looking images (Radford et al., 2015; Ledig et al., 2016; Gauthier, 2014; Reed et al., 2016) suggests their applicability for this task, however limited work has exploited them for generating time-series data. In addition, evaluation of GANs remains a largely-unsolved problem, with researchers often relying on visual evaluation of generated examples, an approach which is both impractical and inappropriate for multi-dimensional medical time series.
The primary contributions of this work are:
1. Demonstration of a method to generate real-valued sequences using adversarial training.
2. Showing novel approaches for evaluating GANs. | 1706.02633#4 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 4 | # 1. Introduction
Scale matters. We are in an unprecedented era in AI research history in which the increasing data and model scale is rapidly improving accuracy in computer vision [22, 41, 34, 35, 36, 16], speech [17, 40], and natural language processing [7, 38]. Take the profound impact in computer vision as an example: visual representations learned by deep convolutional neural networks [23, 22] show excellent performance on previously challenging tasks like ImageNet classification [33] and can be transferred to difficult perception problems such as object detection and segmentation [8, 10, 28]. Moreover, this pattern generalizes: larger datasets and neural network architectures consistently yield improved accuracy across all tasks that benefit from pre-training [22, 41, 34, 35, 36, 16]. But as model and data scale grow, so does training time; discovering the potential and limits of large-scale deep learning requires developing novel techniques to keep training time manageable. | 1706.02677#4 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
However, looking at Kaggle challenges that are not related to vision or sequential tasks, gradient boosting, random forests, or support vector machines (SVMs) are winning most of the competitions. Deep Learning is notably absent, and for the few cases where FNNs won, they are shallow. For example, the HIGGS challenge, the Merck Molecular Activity challenge, and the Tox21 Data challenge were all won by FNNs with at most four hidden layers. Surprisingly, it is hard to find success stories with FNNs that have many hidden layers, though they would allow for different levels of abstract representations of the input [3]. | 1706.02515#5 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 5 | The primary contributions of this work are:
1. Demonstration of a method to generate real-valued sequences using adversarial training.
2. Showing novel approaches for evaluating GANs.
3. Generating synthetic medical time series data.
4. Empirical privacy analysis of both GANs and differentially private GANs.
# 2 RELATED WORK
Since its inception in 2014 (Goodfellow et al., 2014), the GAN framework has attracted significant attention from the research community, and much of this work has focused on image generation (Radford et al., 2015; Ledig et al., 2016; Gauthier, 2014; Reed et al., 2016). Notably, (Choi et al., 2017) designed a GAN to generate synthetic electronic health record (EHR) datasets. These EHRs contain binary and count variables, such as ICD-9 billing codes, medication, and procedure codes. Their focus on discrete-valued data and generating snapshots of a patient is complementary to our real-valued, time series focus. Future work could combine these approaches to generate multi-modal synthetic medical time-series data. | 1706.02633#5 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
The goal of this report is to demonstrate the feasibility of, and to communicate a practical guide to, large-scale training with distributed synchronous stochastic gradient descent (SGD). As an example, we scale ResNet-50 [16] training, originally performed with a minibatch size of 256 images (using 8 Tesla P100 GPUs, training time is 29 hours), to larger minibatches (see Figure 1). In particular, we show that with a large minibatch size of 8192, we can train ResNet-50 in 1 hour using 256 GPUs while maintaining
the same level of accuracy as the 256 minibatch baseline. While distributed synchronous SGD is now commonplace, no existing results show that generalization accuracy can be maintained with minibatches as large as 8192 or that such high-accuracy models can be trained in such short time. | 1706.02677#5 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
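The figures reported above (29 hours on 8 GPUs at minibatch 256 versus 1 hour on 256 GPUs at minibatch 8192) imply the ~90% scaling efficiency quoted in the abstract. A quick arithmetic check of that claim (variable names here are illustrative):

```python
# Scaling-efficiency arithmetic implied by the reported ResNet-50 numbers.
baseline_hours = 29.0   # minibatch 256 on 8 GPUs (reported)
large_hours = 1.0       # minibatch 8192 on 256 GPUs (reported)
gpu_ratio = 256 / 8     # 32x more workers

speedup = baseline_hours / large_hours   # wall-clock speedup
efficiency = speedup / gpu_ratio         # fraction of ideal linear scaling

print(f"speedup = {speedup:.0f}x, efficiency = {efficiency:.1%}")
```

With these numbers the speedup is 29x against an ideal 32x, i.e. about 90.6% efficiency, consistent with the abstract.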
1706.02515 | 6 | To robustly train very deep CNNs, batch normalization evolved into a standard to normalize neuron activations to zero mean and unit variance [20]. Layer normalization [2] also ensures zero mean and unit variance, while weight normalization [32] ensures zero mean and unit variance if in the previous layer the activations have zero mean and unit variance. However, training with normalization techniques is perturbed by stochastic gradient descent (SGD), stochastic regularization (like dropout), and the estimation of the normalization parameters. Both RNNs and CNNs can stabilize learning via weight sharing, therefore they are less prone to these perturbations. In contrast, FNNs trained with normalization techniques suffer from these perturbations and have high variance in the training error (see Figure 1). This high variance hinders learning and slows it down. Furthermore, strong regularization, such as dropout, is not possible as it would further increase the variance which in turn would lead to divergence of the learning process. We believe that this sensitivity to perturbations is the reason that FNNs are less successful than RNNs and CNNs. | 1706.02515#6 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
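The chunk above describes batch normalization as pushing neuron activations to zero mean and unit variance. A minimal sketch of that normalization step for one feature over a minibatch, ignoring the learned scale/shift parameters and running statistics that a full batch-norm layer also carries:

```python
import math

def batch_normalize(activations, eps=1e-5):
    """Normalize a minibatch of scalar activations to zero mean, unit variance."""
    n = len(activations)
    mean = sum(activations) / n
    var = sum((a - mean) ** 2 for a in activations) / n
    # eps guards against division by zero for constant minibatches
    return [(a - mean) / math.sqrt(var + eps) for a in activations]

batch = [0.5, 2.0, -1.0, 3.5, 1.0]
normed = batch_normalize(batch)
```

Note that because `mean` and `var` are estimated from the minibatch itself, they are noisy under SGD; this estimation noise is exactly the perturbation the chunk argues FNNs are sensitive to.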
1706.02633 | 6 | The majority of sequential data generation with GANs has focused on discrete tokens useful for natural language processing (Yu et al., 2016), where an alternative approach based on Reinforcement Learning (RL) is used to train the GAN. We are aware of only one preliminary work using GANs to generate continuous-valued sequences, which aims to produce polyphonic music using a GAN with LSTM generator and discriminator (Mogren, 2016). The primary differences are architectural: we do not use a bidirectional discriminator, and outputs of the generator are not fed back as inputs at the next time step. Moreover, we also introduce a conditional version of this Recurrent GAN.
Conditional GANs (Mirza & Osindero, 2014; Gauthier, 2014) condition the model on additional information and therefore allow us to direct the data generation process. This approach has mainly been used for image generation tasks (Radford et al., 2015; Mirza & Osindero, 2014; Antipov et al., 2017). Recently, Conditional GAN architectures have also been used in natural language processing, including translation (Yang et al., 2017) and dialogue generation (Li et al., 2017); none of these uses an RNN as the discriminator and, as previously mentioned, an RL approach is used to train the models due to the discrete nature of the data. | 1706.02633#6 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 6 | To tackle this unusually large minibatch size, we employ a simple and hyper-parameter-free linear scaling rule to adjust the learning rate. While this guideline is found in earlier work [21, 4], its empirical limits are not well understood, and informally we have found that it is not widely known to the research community. To successfully apply this rule, we present a new warmup strategy, i.e., a strategy of using lower learning rates at the start of training [16], to overcome early optimization difficulties. Importantly, not only does our approach match the baseline validation error, but also yields training error curves that closely match the small minibatch baseline. Details are presented in §2. | 1706.02677#6 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
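The chunk above combines two ingredients: the linear scaling rule (multiply the reference learning rate by the minibatch-size ratio) and a gradual warmup from the small-minibatch rate. A minimal sketch, assuming the paper's ResNet-50 reference values (base learning rate 0.1 for 256 images, 5-epoch warmup, 1/10 step decay at epochs 30/60/80); the helper name and signature are illustrative:

```python
def learning_rate(epoch, minibatch_size, base_lr=0.1, base_batch=256,
                  warmup_epochs=5):
    """Linear scaling rule with gradual warmup (illustrative helper)."""
    # Linear scaling rule: lr grows linearly with the minibatch size.
    target_lr = base_lr * minibatch_size / base_batch
    if epoch < warmup_epochs:
        # Gradual warmup: ramp from the small-minibatch lr up to the scaled lr.
        return base_lr + (target_lr - base_lr) * epoch / warmup_epochs
    # Afterwards, follow the usual step-decay schedule at the scaled rate.
    decay = sum(1 for boundary in (30, 60, 80) if epoch >= boundary)
    return target_lr * (0.1 ** decay)

# minibatch 8192 -> target lr = 0.1 * 8192 / 256 = 3.2
```

Starting at the unscaled rate and ramping up is what lets training survive the early epochs, when a large scaled rate would otherwise cause divergence.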
1706.02515 | 7 | Self-normalizing neural networks (SNNs) are robust to perturbations and do not have high variance in their training errors (see Figure 1). SNNs push neuron activations to zero mean and unit variance, thereby producing the same effect as batch normalization, which enables robust learning of many layers. SNNs are based on scaled exponential linear units ("SELUs"), which induce self-normalizing properties such as variance stabilization, which in turn avoids exploding and vanishing gradients.
# Self-normalizing Neural Networks (SNNs) | 1706.02515#7 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
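The SELU activation referred to above has a simple closed form; a minimal sketch, assuming the fixed-point constants λ ≈ 1.0507 and α ≈ 1.6733 used by the paper's published implementation:

```python
import math

# Fixed-point constants for zero mean / unit variance propagation.
SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def selu(x):
    """Scaled exponential linear unit:
    lambda * x for x > 0, lambda * alpha * (exp(x) - 1) otherwise."""
    if x > 0:
        return SELU_LAMBDA * x
    return SELU_LAMBDA * SELU_ALPHA * (math.exp(x) - 1.0)
```

For large negative inputs the activation saturates at -λα ≈ -1.7581, which is what bounds activations (and hence the variance) from below.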
1706.02633 | 7 | In this work, we also introduce some novel approaches to evaluate GANs, using the capability of the generated synthetic data to train supervised models. In a related fashion, a GAN-based semi-supervised learning approach was introduced in (Salimans et al., 2016). However, our goal is to generate data that can be used to train models for tasks that are unknown at the moment the GAN is trained.
We briefly explore the use of differentially private stochastic gradient descent (Abadi et al., 2016) to produce an RGAN with stronger privacy guarantees, which is especially relevant for sensitive medical data. An alternate method would be to use the PATE approach (Papernot et al., 2016) to train the discriminator. In this case, rather than introducing noise into gradients (as in (Abadi et al., 2016)), a student classifier is trained to predict the noisy votes of an ensemble of teachers, each trained on disjoint sets of the data.
# 3 MODELS: RECURRENT GAN AND RECURRENT CONDITIONAL GAN | 1706.02633#7 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
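The abstract above names maximum mean discrepancy (MMD) as one of the quantitative measures used to compare generated and real time series. A minimal sketch of the standard unbiased MMD² estimate with an RBF kernel (the bandwidth `sigma` here is arbitrary; the paper's bandwidth selection may differ):

```python
import math
import random

def rbf(x, y, sigma=1.0):
    """RBF kernel between two equal-length vectors."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def mmd2_unbiased(xs, ys, sigma=1.0):
    """Unbiased estimate of squared MMD between two samples of vectors."""
    m, n = len(xs), len(ys)
    kxx = sum(rbf(xs[i], xs[j], sigma) for i in range(m) for j in range(m) if i != j)
    kyy = sum(rbf(ys[i], ys[j], sigma) for i in range(n) for j in range(n) if i != j)
    kxy = sum(rbf(x, y, sigma) for x in xs for y in ys)
    return kxx / (m * (m - 1)) + kyy / (n * (n - 1)) - 2 * kxy / (m * n)

random.seed(0)
real = [[random.gauss(0, 1)] for _ in range(100)]
fake_good = [[random.gauss(0, 1)] for _ in range(100)]  # same distribution
fake_bad = [[random.gauss(3, 1)] for _ in range(100)]   # shifted distribution
```

Samples from the same distribution give an MMD² estimate near zero, while a distribution mismatch drives it up, which is what makes it usable as a generator-quality score.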
1706.02677 | 7 | Our comprehensive experiments in §5 show that optimization difficulty is the main issue with large minibatches, rather than poor generalization (at least on ImageNet), in contrast to some recent studies [20]. Additionally, we show that the linear scaling rule and warmup generalize to more complex tasks including object detection and instance segmentation [9, 31, 14, 28], which we demonstrate via the recently developed Mask R-CNN [14]. We note that a robust and successful guideline for addressing a wide range of minibatch sizes has not been presented in previous work. While the strategy we deliver is simple, its successful application requires correct implementation with respect to seemingly minor and often not well understood implementation details within deep learning libraries. Subtleties in the implementation of SGD can lead to incorrect solutions that are difficult to discover. To provide more helpful guidance we describe common pitfalls and the relevant implementation details that can trigger these traps in §3. Our strategy applies regardless of | 1706.02677#7 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 8 | # Self-normalizing Neural Networks (SNNs)
Normalization and SNNs. For a neural network with activation function f, we consider two consecutive layers that are connected by a weight matrix W. Since the input to a neural network is a random variable, the activations a in the lower layer, the network inputs z = Wa, and the activations y = f(z) in the higher layer are random variables as well. We assume that all activations x; of the lower layer have mean ys := E(x;) and variance v := Var(x;). An activation y in the higher layer has mean ji := E(y) and variance 7 := Var(y). Here E(.) denotes the expectation and Var(.) the variance of a random variable. A single activation y = f(z) has net input z = w? a. For n units with activation x;,1 < i < nin the lower layer, we define n times the mean of the weight vector w ⬠Râ asw := S77, w; and n times the second moment as 7 := D7, w? ro We consider the mapping g that maps mean and variance of the activations from one layer to mean and variance of the activations in the next layer
g : (µ, ν) ↦ (µ̃, ν̃) = ( µ̃(µ, ω, ν, τ), ν̃(µ, ω, ν, τ) ) . (1) | 1706.02515#8 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 8 | # 3 MODELS: RECURRENT GAN AND RECURRENT CONDITIONAL GAN
The model presented in this work follows the architecture of a regular GAN, where both the generator and the discriminator have been substituted by recurrent neural networks. Therefore, we present a Recurrent GAN (RGAN), which can generate sequences of real-valued data, and a Recurrent Conditional GAN (RCGAN), which can generate sequences of real-valued data subject to some conditional inputs. As depicted in Figure 1a, the generator RNN takes a different random seed at each time step, plus an additional input if we want to condition the generated sequence with additional data. In Figure 1b, we show how the discriminator RNN takes the generated sequence, together with an additional input if it is a RCGAN, and produces a classification as synthetic or real for each time step of the input sequence. | 1706.02633#8 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
framework, but achieving efficient linear scaling requires nontrivial communication algorithms. We use the open-source Caffe2 deep learning framework and Big Basin GPU servers [24], which operate efficiently using standard Ethernet networking (as opposed to specialized network interfaces). We describe the systems algorithms that enable our approach to operate near its full potential in §4.
The practical advances described in this report are helpful across a range of domains. In an industrial domain, our system unleashes the potential of training visual models from internet-scale data, enabling training with billions of images per day. Of equal importance, in a research domain, we have found it to simplify migrating algorithms from a single-GPU to a multi-GPU implementation without requiring hyper-parameter search, e.g. in our experience migrating Faster R-CNN [31] and ResNets [16] from 1 to 8 GPUs.
1http://www.caffe2.ai
# 2. Large Minibatch SGD
We start by reviewing the formulation of Stochastic Gradient Descent (SGD), which will be the foundation of our discussions in the following sections. We consider supervised learning by minimizing a loss L(w) of the form:
L(w) = (1/|X|) Σ_{x∈X} l(x, w). (1) | 1706.02677#8 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 9 | g : (µ, ν) ↦ (µ̃, ν̃) = ( µ̃(µ, ω, ν, τ), ν̃(µ, ω, ν, τ) ) (1)
Normalization techniques like batch, layer, or weight normalization ensure a mapping g that keeps (µ, ν) and (µ̃, ν̃) close to predefined values, typically (0, 1). Definition 1 (Self-normalizing neural net). A neural network is self-normalizing if it possesses a mapping g : Ω ↦ Ω for each activation y that maps mean and variance from one layer to the next
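As an illustration of Definition 1, the mapping g can be estimated by Monte Carlo for one layer with the SELU activation mentioned in the abstract. This is a sketch with illustrative sizes and our own helper names (`g`, `selu`); the weights are normalized so that ω = 0 and τ = 1, and iterating g then pulls (mean, variance) toward the fixed point (0, 1):

```python
import numpy as np

# Rounded SELU constants from the paper.
lambda_, alpha_ = 1.0507, 1.6733

def selu(z):
    return lambda_ * np.where(z > 0, z, alpha_ * (np.exp(z) - 1.0))

def g(mu, nu, n=256, samples=100_000, seed=0):
    # One Monte Carlo evaluation of the mapping g: sample lower-layer
    # activations with moments (mu, nu), push them through a SELU unit.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=n)
    w -= w.mean()                      # enforce omega = sum_i w_i = 0
    w /= np.sqrt((w ** 2).sum())       # enforce tau = sum_i w_i^2 = 1
    x = rng.normal(mu, np.sqrt(nu), size=(samples, n))
    y = selu(x @ w)
    return y.mean(), y.var()

mu, nu = 0.6, 1.8                      # start away from the fixed point
for _ in range(15):
    mu, nu = g(mu, nu)
print(round(mu, 2), round(nu, 2))      # close to (0.0, 1.0)
```

With ω = 0 and τ = 1 the iterated moments settle near (0, 1), which is the fixed point the paper's Banach argument establishes.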
[Figure 1 residue: legend entries for BatchNorm and SNN at depths 8, 16, and 32; axes show training loss over update steps.] | 1706.02515#9 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
Specifically, the discriminator is trained to minimise the average negative cross-entropy between its predictions per time-step and the labels of the sequence. If we denote by RNN(X) the vector or matrix comprising the T outputs from an RNN receiving a sequence of T vectors {x_t}_{t=1}^T (x_t ∈ R^d), and by CE(a, b) the average cross-entropy between sequences a and b, then the discriminator loss for a pair {X_n, y_n} (with X_n ∈ R^{T×d} and y_n ∈ {1, 0}^T) is:
Dloss(Xn, yn) = −CE(RNN_D(Xn), yn). For real sequences, yn is a vector of 1s, or 0s for synthetic sequences. In each training minibatch, the discriminator sees both real and synthetic sequences.
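A minimal sketch of this per-time-step discriminator loss, assuming the discriminator RNN outputs a sigmoid probability at each step (sign conventions for CE vary; here the loss is written as the standard average binary cross-entropy, and `d_loss`/`probs` are our own illustrative names):

```python
import numpy as np

def d_loss(probs, labels, eps=1e-12):
    # Average binary cross-entropy over the T time steps of one sequence.
    ce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    return ce.mean()

T = 5
real_labels = np.ones(T)        # y_n = 1 at every step for real sequences
synth_labels = np.zeros(T)      # y_n = 0 at every step for synthetic ones
probs = np.full(T, 0.9)         # discriminator is fairly sure "real"
print(d_loss(probs, real_labels))   # small loss if the sequence is real
print(d_loss(probs, synth_labels))  # large loss if it is synthetic
```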
The objective for the generator is then to "trick" the discriminator into classifying its outputs as true, that is, it wishes to minimise the (average) negative cross-entropy between the discriminator's predictions on generated sequences and the "true" label, the vector of 1s (we write as 1): | 1706.02633#9 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 9 | L(w) = (1/|X|) Σ_{x∈X} l(x, w) (1)
Here w are the weights of a network, X is a labeled training set, and l(x, w) is the loss computed from samples x ∈ X and their labels y. Typically l is the sum of a classification loss (e.g., cross-entropy) and a regularization loss on w.
Minibatch Stochastic Gradient Descent [32], usually referred to simply as SGD in recent literature even though it operates on minibatches, performs the following update:
w_{t+1} = w_t − η (1/n) Σ_{x∈B} ∇l(x, w_t). (2)
Here B is a minibatch sampled from X and n = |B| is the minibatch size, η is the learning rate, and t is the iteration index. Note that in practice we use momentum SGD; we return to a discussion of momentum in §3.
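A minimal sketch of this minibatch update on a toy least-squares loss l(x, w) = 0.5 (wᵀx − y)², with illustrative names and sizes (plain SGD here; the momentum variant the report actually uses is discussed in its §3):

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])   # ground-truth weights generating the labels
w = np.zeros(2)                  # w_t, initialised at zero
eta, n = 0.1, 32                 # learning rate and minibatch size

for t in range(500):
    X = rng.normal(size=(n, 2))          # minibatch B sampled from the data
    y = X @ w_true                       # labels
    grad = X.T @ (X @ w - y) / n         # (1/n) * sum over x in B of grad l(x, w_t)
    w = w - eta * grad                   # w_{t+1} = w_t - eta * average gradient
print(np.round(w, 2))                    # approaches w_true
```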
# 2.1. Learning Rates for Large Minibatches
Our goal is to use large minibatches in place of small minibatches while maintaining training and generalization accuracy. This is of particular interest in distributed learning, because it can allow us to scale to multiple workers² using simple data parallelism without reducing the per-worker workload and without sacrificing model accuracy.
As we will show in comprehensive experiments, we found that the following learning rate scaling rule is surprisingly effective for a broad range of minibatch sizes: | 1706.02677#9 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
Figure 1: The left panel and the right panel show the training error (y-axis) for feed-forward neural networks (FNNs) with batch normalization (BatchNorm) and self-normalizing networks (SNN) across update steps (x-axis) on the MNIST dataset and the CIFAR10 dataset, respectively. We tested networks with 8, 16, and 32 layers and learning rate 1e-5. FNNs with batch normalization exhibit high variance due to perturbations. In contrast, SNNs do not suffer from high variance as they are more robust to perturbations and learn faster.
and has a stable and attracting fixed point depending on (ω, τ) in Ω. Furthermore, the mean and the variance remain in the domain Ω, that is g(Ω) ⊆ Ω, where Ω = {(µ, ν) | µ ∈ [µmin, µmax], ν ∈ [νmin, νmax]}. When iteratively applying the mapping g, each point within Ω converges to this fixed point. | 1706.02515#10 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 10 | Gloss(Zn) = Dloss(RNN_G(Zn), 1) = −CE(RNN_D(RNN_G(Zn)), 1)
Here Zn is a sequence of T points {z_t}_{t=1}^T sampled independently from the latent/noise space Z, thus Zn ∈ R^{T×m} since Z = R^m. Initial experimentation with non-independent sampling did not indicate any obvious benefit, but would be a topic for further investigation.
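The independent per-time-step sampling of Zn can be sketched in a couple of lines (sizes here are illustrative, not the paper's settings):

```python
import numpy as np

T, m = 10, 5                      # sequence length and latent dimension
rng = np.random.default_rng(0)
Z_n = rng.normal(size=(T, m))     # T independent draws from Z = R^m,
                                  # one latent point per time step
print(Z_n.shape)                  # (10, 5)
```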
In this work, the architecture selected for both discriminator and generator RNNs is the LSTM (Hochreiter & Schmidhuber, 1997).
In the conditional case (RCGAN), the inputs to each RNN are augmented with some conditional information cn (for sample n, say) by concatenation at each time-step: x_nt ↦ [x_nt; c_n]
z_nt ↦ [z_nt; c_n]
In this way the RNN cannot discount the conditional information through forgetting. | 1706.02633#10 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 10 | As we will show in comprehensive experiments, we found that the following learning rate scaling rule is sur- prisingly effective for a broad range of minibatch sizes:
Linear Scaling Rule: When the minibatch size is multiplied by k, multiply the learning rate by k.
All other hyper-parameters (weight decay, etc.) are kept unchanged. As we will show in §5, the linear scaling rule can help us to not only match the accuracy between using small and large minibatches, but equally importantly, to largely match their training curves, which enables rapid debugging and comparison of experiments prior to convergence.
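The rule above reduces to a one-line helper. The reference values below assume the report's ResNet-50 baseline setting of learning rate 0.1 at minibatch size 256; `scaled_lr` is our own illustrative name:

```python
def scaled_lr(base_lr, base_batch, batch):
    # Linear scaling rule: when the minibatch size is multiplied by
    # k = batch / base_batch, multiply the learning rate by k.
    k = batch / base_batch
    return base_lr * k

print(scaled_lr(0.1, 256, 8192))  # 3.2
```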
Interpretation. We present an informal discussion of the linear scaling rule and why it may be effective. Consider a network at iteration t with weights wt, and a sequence of k minibatches Bj for 0 ≤ j < k each of size n. We compare the effect of executing k SGD iterations with small minibatches Bj and learning rate η versus a single iteration with a large minibatch ∪_j B_j of size kn and learning rate η̂.
²We use the terms "worker" and "GPU" interchangeably in this work, although other implementations of a "worker" are possible. "Server" denotes a set of 8 GPUs that does not require communication over a network. | 1706.02677#10 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
Therefore, we consider activations of a neural network to be normalized, if both their mean and their variance across samples are within predefined intervals. If mean and variance of x are already within these intervals, then also mean and variance of y remain in these intervals, i.e., the normalization is transitive across layers. Within these intervals, the mean and variance both converge to a fixed point if the mapping g is applied iteratively. | 1706.02515#11 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 11 | z_nt ↦ [z_nt; c_n]
In this way the RNN cannot discount the conditional information through forgetting.
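The conditioning step above amounts to tiling the per-sequence condition c_n along the time axis and concatenating it to every input vector; a sketch with illustrative shapes:

```python
import numpy as np

T, d, c_dim = 10, 4, 3
x_n = np.random.randn(T, d)       # one sequence: T steps, d features each
c_n = np.random.randn(c_dim)      # a single condition vector for sample n

# Concatenate c_n to the input at every time step: x_nt -> [x_nt; c_n],
# so the RNN re-sees the condition at each step and cannot forget it.
x_cond = np.concatenate([x_n, np.tile(c_n, (T, 1))], axis=1)
print(x_cond.shape)               # (10, 7)
```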
Promising research into alternative GAN objectives, such as the Wasserstein GAN (Arjovsky et al., 2017; Gulrajani et al., 2017), unfortunately does not find easy application to RGANs in our experiments. Enforcing the Lipschitz constraint on an RNN is a topic for further research, but may be aided by use of unitary RNNs (Arjovsky et al., 2016; Hyland & Rätsch, 2017).
All models and experiments were implemented in python with scikit-learn (Pedregosa et al., 2011) and Tensorflow (Abadi et al., 2015), and the code is available in a public git repository: ANON.
3.1 EVALUATION | 1706.02633#11 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 11 | According to (2), after k iterations of SGD with learning rate η and a minibatch size of n we have:
w_{t+k} = w_t − η (1/n) Σ_{j<k} Σ_{x∈B_j} ∇l(x, w_{t+j}). (3)
On the other hand, taking a single step with the large minibatch ∪_j B_j of size kn and learning rate η̂ yields:
ŵ_{t+1} = w_t − η̂ (1/(kn)) Σ_{j<k} Σ_{x∈B_j} ∇l(x, w_t). (4)
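As a concrete check: when the gradient is constant across iterations (the extreme case of the assumption discussed next) and η̂ = kη, the k-step small-minibatch update and the single large-minibatch update coincide exactly. A toy sketch with illustrative numbers:

```python
import numpy as np

g = np.array([0.3, -0.7])         # a gradient that is the same at every step
w0 = np.array([1.0, 1.0])
eta, k = 0.01, 8

w_small = w0.copy()
for _ in range(k):                # k small-minibatch steps with rate eta
    w_small = w_small - eta * g
w_large = w0 - (k * eta) * g      # one large-minibatch step with rate k*eta

print(np.allclose(w_small, w_large))  # True
```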
As expected, the updates differ, and it is unlikely that ŵ_{t+1} = w_{t+k}. However, if we could assume ∇l(x, w_t) ≈ ∇l(x, w_{t+j}) for j < k, then setting η̂ = kη would yield ŵ_{t+1} ≈ w_{t+k}, and the updates from small and large minibatch SGD would be similar. Although this is a strong assumption, we emphasize that if it were true the two updates are similar only if we set η̂ = kη. | 1706.02677#11 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 12 | Therefore, SNNs keep normalization of activations when propagating them through layers of the network. The normalization effect is observed across layers of a network: in each layer the activations get closer to the fixed point. The normalization effect can also be observed for two fixed layers across learning steps: perturbations of lower layer activations or weights are damped in the higher layer by drawing the activations towards the fixed point. If for all y in the higher layer, ω and τ of the corresponding weight vector are the same, then the fixed points are also the same. In this case we have a unique fixed point for all activations y. Otherwise, in the more general case, ω and τ differ for different y, but the mean activations are drawn into [µ_min, µ_max] and the variances are drawn into [ν_min, ν_max].
Constructing Self-Normalizing Neural Networks. We aim at constructing self-normalizing neural networks by adjusting the properties of the function g. Only two design choices are available for the function g: (1) the activation function and (2) the initialization of the weights. | 1706.02515#12 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 12 | 3.1 EVALUATION
Evaluating the performance of a GAN is challenging. As illustrated in (Theis et al., 2015) and (Wu et al., 2016), evaluating likelihoods, with Parzen window estimates (Wu et al., 2016) or otherwise, can be deceptive, and the generator and discriminator losses do not readily correspond to "visual quality". This nebulous notion of quality is best assessed by a human judge, but it is impractical and costly to do so. In the imaging domain, scores such as the Inception score (Salimans et al., 2016) have been developed to aid in evaluation, and Mechanical Turk exploited to distribute the human labour. However, in the case of real-valued sequential data, it is not always easy or even possible to visually evaluate the generated data. For example, the ICU signals with which we work in this paper could look completely random to a non-medical expert.
Therefore, in this work, we start by demonstrating our model with a number of toy datasets that can be visually evaluated. Next, we use a set of quantifiable methods (described below) that can be used as an indicator of the data quality.
[Figure 1: generator RNN (latent z and conditional inputs → generated sample); discriminator RNN (real or generated sample and conditional inputs → real/fake per time step)] | 1706.02633#12 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 12 | The above interpretation gives intuition for one case where we may hope the linear scaling rule to apply. In our experiments with η̂ = kη (and warmup), small and large minibatch SGD not only result in models with the same final accuracy, but also, the training curves match closely. Our empirical results suggest that the above approximation might be valid in large-scale, real-world data.
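As a toy numerical illustration (our sketch, not the paper's code) of how close the two updates in Eqs. (3) and (4) can be when gradients change slowly, consider a quadratic loss l(x, w) = 0.5·||w − x||², whose gradient w − x drifts only mildly over k small steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss l(x, w) = 0.5 * ||w - x||^2, so grad_w l(x, w) = w - x.
def grad(x, w):
    return w - x

d, n, k, eta = 5, 8, 4, 0.1
w0 = rng.normal(size=d)
batches = [rng.normal(size=(n, d)) for _ in range(k)]

# k small-minibatch steps with learning rate eta (Eq. 3).
w_small = w0.copy()
for B in batches:
    w_small = w_small - eta * np.mean([grad(x, w_small) for x in B], axis=0)

# One step over the union of the k minibatches with eta_hat = k * eta (Eq. 4).
union = np.concatenate(batches)
w_large = w0 - (k * eta) * np.mean([grad(x, w0) for x in union], axis=0)

# The two results are close but not identical, because grad(x, w) drifts
# across the k small steps.
print(np.max(np.abs(w_small - w_large)))
```

With a smaller eta (slower drift) the gap shrinks further, matching the intuition that the rule works best when ∇l(x, w_t) ≈ ∇l(x, w_{t+j}).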
However, there are at least two cases when the condition ∇l(x, w_t) ≈ ∇l(x, w_{t+j}) will clearly not hold. First, in initial training when the network is changing rapidly, it does not hold. We address this by using a warmup phase, discussed in §2.2. Second, minibatch size cannot be scaled indefinitely: while results are stable for a large range of sizes, beyond a certain point accuracy degrades rapidly. Interestingly, this point is as large as ∼8k in ImageNet experiments. | 1706.02677#12 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 13 | For the activation function, we propose "scaled exponential linear units" (SELUs) to render a FNN as self-normalizing. The SELU activation function is given by
selu(x) = λ { x, if x > 0;  α e^x − α, if x ≤ 0 }    (1)
SELUs allow us to construct a mapping g with properties that lead to SNNs. SNNs cannot be derived with (scaled) rectified linear units (ReLUs), sigmoid units, tanh units, or leaky ReLUs. The activation function is required to have (1) negative and positive values for controlling the mean, (2) saturation regions (derivatives approaching zero) to dampen the variance if it is too large in the lower layer, (3) a slope larger than one to increase the variance if it is too small in the lower layer, and (4) a continuous curve. The latter ensures a fixed point, where variance damping is equalized by variance increasing. We met these properties of the activation function by multiplying the exponential linear unit (ELU) [7] with λ > 1 to ensure a slope larger than one for positive net inputs.
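A minimal sketch (ours, not the authors' implementation) of the SELU activation with the standard constants λ ≈ 1.0507 and α ≈ 1.6733, plus a quick empirical check that activations stay near zero mean and unit variance when weights are drawn with per-weight variance 1/n (so that τ = Σ_i w_i² ≈ 1 and ω = Σ_i w_i ≈ 0):

```python
import numpy as np

# SELU constants (fixed by requiring that mean 0 / variance 1 is a fixed
# point of the mean-variance map g).
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """selu(x) = lambda*x for x > 0, lambda*(alpha*exp(x) - alpha) otherwise."""
    x = np.asarray(x, dtype=float)
    return LAMBDA * np.where(x > 0, x, ALPHA * np.expm1(x))

# Empirical check: propagate N(0, 1) activations through several SELU layers
# whose weights have variance 1/n per entry, i.e. tau = sum(w^2) close to 1.
rng = np.random.default_rng(0)
n = 256
h = rng.normal(size=(20000, n))
for _ in range(8):
    W = rng.normal(scale=np.sqrt(1.0 / n), size=(n, n))
    h = selu(h @ W)
print(h.mean(), h.var())  # both remain close to (0, 1)
```

The same experiment with ReLU or tanh instead of SELU lets the variance drift away from 1 across layers, which is the failure mode the SELU design rules out.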
| 1706.02515#13 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 13 | [Figure 1: RGAN/RCGAN generator and discriminator architecture diagram]
(a) The generator RNN takes a different random seed at each temporal input, and produces a synthetic signal. In the case of the RCGAN, it also takes an additional input on each time step that conditions the output.
(b) The discriminator RNN takes real/synthetic sequences and produces a classification into real/synthetic for each time step. In the case of the RCGAN, it also takes an additional input on each time step that conditions the output.
Figure 1: Architecture of Recurrent GAN and Conditional Recurrent GAN models.
3.1.1 MAXIMUM MEAN DISCREPANCY | 1706.02633#13 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 13 | Discussion. The above linear scaling rule was adopted by Krizhevsky [21], if not earlier. However, Krizhevsky re- ported a 1% increase of error when increasing the minibatch size from 128 to 1024, whereas we show how to maintain accuracy across a much broader regime of minibatch sizes. Chen et al. [5] presented a comparison of numerous dis- tributed SGD variants, and although their work also em- ployed the linear scaling rule, it did not establish a small minibatch baseline. Li [25] (§4.6) showed distributed Ima- geNet training with minibatches up to 5120 without a loss in accuracy after convergence. However, their work did not demonstrate a hyper-parameter search-free rule for adjust- ing the learning rate as a function of minibatch size, which is a central contribution of our work.
In recent work, Bottou et al. [4] (§4.2) review theoretical tradeoffs of minibatching and show that with the linear scal- ing rule, solvers follow the same training curve as a function of number of examples seen, and suggest the learning rate should not exceed a maximum rate independent of mini- batch size (which justiï¬es warmup). Our work empirically tests these theories with unprecedented minibatch sizes.
# 2.2. Warmup | 1706.02677#13 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 14 | For the weight initialization, we propose ω = 0 and τ = 1 for all units in the higher layer. The next paragraphs will show the advantages of this initialization. Of course, during learning these assumptions on the weight vector will be violated. However, we can prove the self-normalizing property even for weight vectors that are not normalized; therefore, the self-normalizing property can be kept during learning and weight changes.
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 14 | Figure 1: Architecture of Recurrent GAN and Conditional Recurrent GAN models.
3.1.1 MAXIMUM MEAN DISCREPANCY
We consider a GAN successful if it implicitly learns the distribution of the true data. We assess this by studying the samples it generates. This is the ideal setting for maximum mean discrepancy (MMD) (Gretton et al., 2007), and has been used as a training objective for generative moment matching networks (Li et al., 2015). MMD asks if two sets of samples - one from the GAN, and one from the true data distribution, for example - were generated by the same distribution. It does this by comparing statistics of the samples. In practice, we consider the squared difference of the statistics between the two sets of samples (the MMD²), and replace inner products between (functions of) the two samples by a kernel. Given a kernel K : X × X → R, and samples {x_i}_{i=1}^n, {y_j}_{j=1}^m, an unbiased estimate of MMD² is:
MMD²_u = 1/(n(n−1)) Σ_{i=1}^n Σ_{j≠i} K(x_i, x_j) − 2/(nm) Σ_{i=1}^n Σ_{j=1}^m K(x_i, y_j) + 1/(m(m−1)) Σ_{i=1}^m Σ_{j≠i} K(y_i, y_j) | 1706.02633#14 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
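The unbiased MMD² estimator quoted in the chunk above can be sketched with an RBF kernel as follows; this is an illustrative implementation, not the authors' code, and the bandwidth σ is fixed by hand here rather than selected by the t-statistic criterion the paper uses:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Kernel matrix K[i, j] = exp(-||A_i - B_j||^2 / (2 * sigma^2))."""
    sq = (A * A).sum(1)[:, None] + (B * B).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

def mmd2_unbiased(X, Y, sigma):
    """Unbiased estimate of MMD^2 between samples X (n x d) and Y (m x d)."""
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf_kernel(X, X, sigma), rbf_kernel(Y, Y, sigma), rbf_kernel(X, Y, sigma)
    # Subtracting the trace implements the j != i restriction in the estimator.
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            - 2.0 * Kxy.mean()
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1)))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
same = mmd2_unbiased(X, rng.normal(size=(500, 10)), sigma=np.sqrt(10))
diff = mmd2_unbiased(X, rng.normal(loc=1.0, size=(500, 10)), sigma=np.sqrt(10))
print(same, diff)  # near zero for matched distributions, clearly positive for shifted ones
```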
1706.02677 | 14 | # 2.2. Warmup
As we discussed, for large minibatches (e.g., 8k) the linear scaling rule breaks down when the network is changing rapidly, which commonly occurs in early stages of training. We find that this issue can be alleviated by a properly designed warmup [16], namely, a strategy of using less aggressive learning rates at the start of training.
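Such a warmup can be sketched as a simple schedule; the constants below (base rate, scaling factor k, warmup length) are illustrative placeholders, not the paper's settings:

```python
def warmup_lr(it, base_lr=0.1, k=32, warmup_iters=500, gradual=True):
    """Learning rate at iteration `it`, warming up toward k * base_lr.

    base_lr is the small-minibatch reference rate eta; after warmup the
    linear scaling rule sets the rate to k * base_lr.
    """
    target = k * base_lr
    if it >= warmup_iters:
        return target
    if not gradual:
        return base_lr                    # constant warmup: hold a low rate
    frac = it / warmup_iters              # gradual warmup: linear ramp
    return base_lr + frac * (target - base_lr)
```

With these defaults the rate ramps from 0.1 to 3.2 over 500 iterations and then stays at the scaled target.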
Constant warmup. The warmup strategy presented in [16] uses a low constant learning rate for the first few epochs of training. As we will show in §5, we have found constant warmup particularly helpful for prototyping object detection and segmentation methods [9, 31, 26, 14] that fine-tune pre-trained layers together with newly initialized layers. | 1706.02677#14 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 15 | Deriving the Mean and Variance Mapping Function g. We assume that the x_i are independent from each other but share the same mean µ and variance ν. Of course, the independence assumption is not fulfilled in general. We will elaborate on the independence assumption below. The network input z in the higher layer is z = w^T x, for which we can infer the following moments E(z) = Σ_{i=1}^n w_i E(x_i) = µω and Var(z) = Var(Σ_{i=1}^n w_i x_i) = ντ, where we used the independence of the x_i. The net input z is a weighted sum of independent, but not necessarily identically distributed, variables x_i, for which the central limit theorem (CLT) states that z approaches a normal distribution: z ∼ N(µω, √(ντ)) with density p_N(z; µω, √(ντ)). According to the CLT, the larger n, the closer z is to a normal distribution. For Deep Learning, broad layers with hundreds of neurons x_i are common, therefore the assumption that z is normally distributed is met well for most currently used neural networks. The function g maps the mean and variance of activations in the lower layer to the mean µ̃ = E(y) and variance ν̃ = Var(y) of the activations y in the next layer: | 1706.02515#15 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
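The moment derivation in the chunk above, E(z) = µω and Var(z) = ντ for the net input z = w^T x, can be checked with a quick Monte Carlo sketch (an illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
w = rng.normal(size=n)
omega, tau = w.sum(), (w * w).sum()   # omega = sum_i w_i, tau = sum_i w_i^2

mu, nu = 0.3, 1.5                     # shared mean and variance of the x_i
X = rng.normal(mu, np.sqrt(nu), size=(20000, n))
z = X @ w                             # net input z = w^T x for each sample

print(z.mean(), mu * omega)           # E(z)   approximates mu * omega
print(z.var(), nu * tau)              # Var(z) approximates nu * tau
```

By the CLT argument in the text, a histogram of z is also close to the normal density p_N(z; µω, √(ντ)).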
1706.02633 | 15 | Defining appropriate kernels between time series is an area of active research. However, much of the challenge arises from the need to align time series. In our case, the generated and real samples are already aligned by our fixing of the 'time' axis. We opt then to treat our time series as vectors (or matrices, in the multidimensional case) for comparisons, and use the radial basis function (RBF) kernel with the squared ℓ2-norm or Frobenius norm between vectors/matrices: K(x, y) = exp(−‖x − y‖² / (2σ²)). To select an appropriate kernel bandwidth σ we maximise the estimator of the t-statistic of the power of the MMD test between two distributions (Sutherland et al.):
# ae MMD =e
2
{= MMD | where V is the asymptotic variance of the estimator of MMD?. We do this using =e | 1706.02633#15 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 15 | In our ImageNet experiments with a large minibatch of size kn, we have tried to train with the low learning rate of η for the first 5 epochs and then return to the target learning rate of η̂ = kη. However, given a large k, we find that this constant warmup is not sufficient to solve the optimization problem, and a transition out of the low learning rate warmup phase can cause the training error to spike. This leads us to propose the following gradual warmup.
Gradual warmup. We present an alternative warmup that gradually ramps up the learning rate from a small to a large value. This ramp avoids a sudden increase of the learning rate, allowing healthy convergence at the start of training. In practice, with a large minibatch of size kn, we start from a learning rate of η and increment it by a constant amount at each iteration such that it reaches η̂ = kη after 5 epochs (results are robust to the exact duration of warmup). After the warmup, we go back to the original learning rate schedule.
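The constant-increment ramp described above can be written as a small schedule function. This is a hedged sketch, not the paper's code: `warmup_lr`, its arguments, and the post-warmup behaviour are illustrative assumptions.

```python
def warmup_lr(iteration, base_lr, k, iters_per_epoch, warmup_epochs=5):
    """Gradual warmup: ramp linearly from eta (base_lr) to eta-hat = k * eta
    over the first warmup_epochs, then defer to the regular schedule."""
    warmup_iters = warmup_epochs * iters_per_epoch
    if iteration < warmup_iters:
        alpha = iteration / warmup_iters          # fraction of warmup completed
        return base_lr * (1 - alpha) + (k * base_lr) * alpha
    return k * base_lr  # stand-in for the original (e.g. stepwise) schedule
```

With k = 8 workers the rate climbs from η at iteration 0 to 8η by the end of epoch 5, matching the constant per-iteration increment described above.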
# 2.3. Batch Normalization with Large Minibatches | 1706.02677#15 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 16 | g : (μ, ν) ↦ (μ̃, ν̃): μ̃(μ, ω, ν, τ) = ∫_{−∞}^{∞} selu(z) pN(z; μω, √(ντ)) dz, (3) ν̃(μ, ω, ν, τ) = ∫_{−∞}^{∞} selu(z)² pN(z; μω, √(ντ)) dz − (μ̃)².
These integrals can be analytically computed and lead to the following mappings of the moments:
μ̃ = (λ/2) ( μω erf(μω / (√2 √(ντ))) + α e^{μω + ντ/2} erfc((μω + ντ) / (√2 √(ντ))) − α erfc(μω / (√2 √(ντ))) + √(2/π) √(ντ) e^{−(μω)² / (2ντ)} + μω ) (4) ν̃ = (λ²/2) ( ((μω)² + ντ)(2 − erfc(μω / (√2 √(ντ)))) + α² (−2 e^{μω + ντ/2} erfc((μω + ντ) / (√2 √(ντ))) + e^{2(μω + ντ)} erfc((μω + 2ντ) / (√2 √(ντ))) + erfc(μω / (√2 √(ντ)))) + √(2/π) μω √(ντ) e^{−(μω)² / (2ντ)} ) − (μ̃)² (5) | 1706.02515#16 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
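The SELU moment mapping g of Eqs. (3)-(5) in the 1706.02515 chunk above has (0, 1) as a fixed point for α01 ≈ 1.6733 and λ01 ≈ 1.0507. A minimal numeric check of that claim, using Python's `math.erf`/`math.erfc` and the constants reported by the paper (a sketch, not the authors' code):

```python
import math

ALPHA_01 = 1.6732632423543772   # alpha_01 from the paper
LAMBDA_01 = 1.0507009873554805  # lambda_01 from the paper

def selu_moment_map(mu, omega, nu, tau, lam=LAMBDA_01, alpha=ALPHA_01):
    """Closed-form mapping (mu, nu) -> (mu_t, nu_t) of Eqs. (4)-(5)."""
    m = mu * omega                      # mean of the network input
    s2 = nu * tau                       # variance of the network input
    s = math.sqrt(s2)
    r2 = math.sqrt(2.0)
    erf0 = math.erf(m / (r2 * s))
    erfc0 = math.erfc(m / (r2 * s))
    erfc1 = math.erfc((m + s2) / (r2 * s))
    erfc2 = math.erfc((m + 2 * s2) / (r2 * s))
    gauss = math.sqrt(2 / math.pi) * s * math.exp(-m * m / (2 * s2))
    mu_t = 0.5 * lam * (m * erf0 + alpha * math.exp(m + s2 / 2) * erfc1
                        - alpha * erfc0 + gauss + m)
    nu_t = 0.5 * lam ** 2 * ((m * m + s2) * (2 - erfc0)
                             + alpha ** 2 * (-2 * math.exp(m + s2 / 2) * erfc1
                                             + math.exp(2 * (m + s2)) * erfc2
                                             + erfc0)
                             + math.sqrt(2 / math.pi) * m * s
                             * math.exp(-m * m / (2 * s2))) - mu_t ** 2
    return mu_t, nu_t

# at the normalized setting omega = 0, tau = 1 and the point (mu, nu) = (0, 1)
mu_t, nu_t = selu_moment_map(0.0, 0.0, 1.0, 1.0)
```

The mapped moments come back as (≈0, ≈1), confirming that (0, 1) is a fixed point of g for these parameters.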
1706.02633 | 16 | t̂ = MMD² / √V̂, where V̂ is the asymptotic variance of the estimator of MMD². We do this using
a split of the validation set during training - the rest of the set is used to calculate the MMD² using the optimised bandwidth. Following (Sutherland et al., 2016), we define a mixed kernel as a sum of RBF kernels with two different σ values, which we optimise simultaneously. We find the MMD² to be more informative than either generator or discriminator loss, and that it correlates well with quality as assessed by visualising.
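The quantities above can be made concrete with the standard unbiased MMD² estimator under the RBF kernel K(x, y) = exp(−‖x − y‖²/(2σ²)). A plain-Python sketch (not the paper's implementation; the bandwidth optimisation is omitted):

```python
import math

def rbf(x, y, sigma):
    """RBF kernel between two flattened sequences."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimator of MMD^2 between sample sets X and Y."""
    m, n = len(X), len(Y)
    xx = sum(rbf(X[i], X[j], sigma) for i in range(m) for j in range(m) if i != j)
    yy = sum(rbf(Y[i], Y[j], sigma) for i in range(n) for j in range(n) if i != j)
    xy = sum(rbf(x, y, sigma) for x in X for y in Y)
    return xx / (m * (m - 1)) + yy / (n * (n - 1)) - 2 * xy / (m * n)
```

Samples drawn from the same distribution give an MMD² near zero, while mismatched samples give a clearly positive value, which is what makes the statistic usable for bandwidth selection and for tracking sample quality during training.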
3.1.2 TRAIN ON SYNTHETIC, TEST ON REAL (TSTR) | 1706.02633#16 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 16 | # 2.3. Batch Normalization with Large Minibatches
Batch Normalization (BN) [19] computes statistics along the minibatch dimension: this breaks the independence of each sample's loss, and changes in minibatch size change the underlying definition of the loss function being optimized. In the following we will show that a commonly used "shortcut", which may appear to be a practical consideration to avoid communication overhead, is actually necessary for preserving the loss function when changing minibatch size. We note that (1) and (2) assume the per-sample loss ℓ(x, w) is independent of all other samples. This is not the case when BN is performed and activations are computed across samples. We write ℓB(x, w) to denote that the loss of a single sample x depends on the statistics of all samples in its minibatch B. We denote the loss over a single minibatch B of size n as L(B, w) = (1/n) Σ_{x∈B} ℓB(x, w). With BN, the training set can be thought of as containing all distinct subsets of size n drawn from the original training set X, which we denote as X^n. The training loss L(w) then becomes:
L(w) = (1/|X^n|) Σ_{B∈X^n} L(B, w). (5) | 1706.02677#16 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 17 | Stable and Attracting Fixed Point (0, 1) for Normalized Weights. We assume a normalized weight vector w with ω = 0 and τ = 1. Given a fixed point (μ, ν), we can solve equations Eq. (4) and Eq. (5) for α and λ. We chose the fixed point (μ, ν) = (0, 1), which is typical for activation normalization. We obtain the fixed point equations μ = μ̃ = 0 and ν = ν̃ = 1 that we solve for α and λ and obtain the solutions α01 ≈ 1.6733 and λ01 ≈ 1.0507, where the subscript 01 indicates that these are the parameters for fixed point (0, 1). The analytical expressions for α01 and λ01 are given in Eq. (14). We are interested whether the fixed point (μ, ν) = (0, 1) is stable and attracting. If the Jacobian of g has a norm smaller than 1 at the fixed point, then g is a contraction mapping and the fixed point is stable. The (2x2)-Jacobian J(μ, ν) of g : (μ, ν) ↦ (μ̃, ν̃) evaluated at the fixed point (0, 1) with α01 and λ01 is | 1706.02515#17 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 17 | 3.1.2 TRAIN ON SYNTHETIC, TEST ON REAL (TSTR)
We propose a novel method for evaluating the output of a GAN when a supervised task can be defined on the domain of the training data. We call it "Train on Synthetic, Test on Real" (TSTR). Simply put, we use a dataset generated by the GAN to train a model, which is then tested on a held-out set of true examples. This requires the generated data to have labels - we can either provide these to a conditional GAN, or use a standard GAN to generate them in addition to the data features. In this work we opted for the former, as we describe below. For using GANs to share synthetic "de-identified"
data, this evaluation metric is ideal, because it demonstrates the ability of the synthetic data to be used for real applications. We present the pseudocode for this GAN evaluation strategy in Algorithm 1.
# Algorithm 1 (TSTR) Train on Synthetic, Test on Real
1: train, test = split(data) 2: discriminator, generator = train_GAN(train) 3: with labels from train: 4: synthetic = generator.generate_synthetic(labels) 5: classifier = train_classifier(synthetic, labels) 6: if validation set available, optionally optimise GAN over classifier performance 7: with labels and features from test: 8: predictions = classifier.predict(features) 9: TSTR_score = score(predictions, labels) | 1706.02633#17 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 17 | L(w) = (1/|X^n|) Σ_{B∈X^n} L(B, w). (5)
If we view B as a "single sample" in X^n, then the loss of each single sample B is computed independently.

Note that the minibatch size n over which the BN statistics are computed is a key component of the loss: if the per-worker minibatch sample size n is changed, it changes the underlying loss function L that is optimized. More specifically, the mean/variance statistics computed by BN with different n exhibit different levels of random variation.

In the case of distributed (and multi-GPU) training, if the per-worker sample size n is kept fixed and the total minibatch size is kn, it can be viewed as a minibatch of k samples with each sample Bj independently selected from X^n, so the underlying loss function is unchanged and is still defined in X^n. Under this point of view, in the BN setting after seeing k minibatches Bj, (3) and (4) become:
w_{t+k} = w_t − η Σ_{j<k} ∇L(B_j, w_{t+j}), (6)

ŵ_{t+1} = w_t − η̂ (1/k) Σ_{j<k} ∇L(B_j, w_t). (7) | 1706.02677#17 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 18 | J(μ, ν) = ( ∂μ̃/∂μ ∂μ̃/∂ν ; ∂ν̃/∂μ ∂ν̃/∂ν ), J(0, 1) = ( 0.0 0.088834 ; 0.0 0.782648 ) (6)
The spectral norm of J (0, 1) (its largest singular value) is 0.7877 < 1. That means g is a contraction mapping around the fixed point (0, 1) (the mapping is depicted in Figure 2). Therefore, (0, 1) is a stable fixed point of the mapping g.
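The quoted spectral norm 0.7877 can be reproduced from the Jacobian entries. A quick sketch (the off-diagonal entry 0.088834 of J(0, 1) is taken from the paper; the closed-form 2x2 singular value via the eigenvalues of JᵀJ is standard linear algebra):

```python
import math

def spectral_norm_2x2(J):
    """Largest singular value of a 2x2 matrix, via the eigenvalues of J^T J."""
    (a, b), (c, d) = J
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d   # entries of J^T J
    lam_max = 0.5 * (p + r + math.sqrt((p - r) ** 2 + 4 * q * q))
    return math.sqrt(lam_max)

# J(0, 1) with the entries reported for the SELU moment mapping
J01 = [[0.0, 0.088834], [0.0, 0.782648]]
```

`spectral_norm_2x2(J01)` evaluates to about 0.7877, below 1, which is exactly the contraction condition the text invokes.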
[Figure 2: vector-field depiction of the mapping g of mean μ and variance ν around the fixed point (0, 1); figure residue omitted.] | 1706.02515#18 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 18 | synthetic = generator.generate_synthetic(labels) classifier = train_classifier(synthetic, labels) If validation set available, optionally optimise GAN over classifier performance.
predictions = classifier.predict(features) TSTR_score = score(predictions, labels)
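The TSTR flow of Algorithm 1 can be sketched end-to-end with stand-ins. Everything here is a hypothetical toy: 1-D Gaussian "synthetic" and "real" data, and a nearest-centroid rule in place of a trained GAN and classifier; the point is only the structure of training on synthetic data and evaluating on held-out real data.

```python
import random

def tstr_score(train_synthetic, train_labels, test_real, test_labels):
    """TSTR: fit a model on synthetic data, score it on held-out real data.
    A nearest-centroid rule stands in for the classifier."""
    groups = {}
    for x, y in zip(train_synthetic, train_labels):
        groups.setdefault(y, []).append(x)
    centroids = {y: sum(v) / len(v) for y, v in groups.items()}
    correct = sum(
        min(centroids, key=lambda c: abs(x - centroids[c])) == y
        for x, y in zip(test_real, test_labels)
    )
    return correct / len(test_real)

# toy 1-D stand-ins: "synthetic" training data and "real" test data
random.seed(0)
synth_labels = list((0, 1) * 50)
synth = [random.gauss(4.0 * y, 0.5) for y in synth_labels]
real_labels = list((0, 1) * 20)
real = [random.gauss(4.0 * y, 0.5) for y in real_labels]
```

When the synthetic data captures the real label structure, as in this toy, the TSTR score stays high; mode collapse or unrealistic features would drag it down, which is what makes the metric informative.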
Train on Real, Test on Synthetic (TRTS): Similar to the TSTR method proposed above, we can consider the reverse case, called "Train on Real, Test on Synthetic" (TRTS). In this approach, we use real data to train a supervised model on a set of tasks. Then, we use the RCGAN to generate a synthetic test set for evaluation. In the case (as for MNIST) where the true classifier achieves high accuracy, this serves as an evaluation of the RCGAN's ability to generate convincing examples of the labels, and of the realism of the features it generates. Unlike the TSTR setting however, if the GAN suffers mode collapse, TRTS performance will not degrade accordingly, so we consider TSTR the more interesting evaluation.
# 4 LEARNING TO GENERATE REALISTIC SEQUENCES | 1706.02633#18 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 18 | ŵ_{t+1} = w_t − η̂ (1/k) Σ_{j<k} ∇L(B_j, w_t). (7)
Following similar logic as in §2.1, we set η̂ = kη and we keep the per-worker sample size n constant when we change the number of workers k.
In this work, we use n = 32 which has performed well for a wide range of datasets and networks [19, 16]. If n is adjusted, it should be viewed as a hyper-parameter of BN, not of distributed training. We also note that the BN statistics should not be computed across all workers, not only for the sake of reducing communication, but also for maintaining the same underlying loss function being optimized.
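The claim that BN statistics computed with different n exhibit different levels of random variation is easy to illustrate: the spread of per-minibatch means shrinks roughly like 1/sqrt(n). A toy sketch with Gaussian stand-in activations (not the paper's code):

```python
import random
import statistics

random.seed(0)
# stand-in "activations" of one channel across a training set
activations = [random.gauss(0.0, 1.0) for _ in range(8192)]

def batch_mean_std(data, n):
    """Spread (std) of the per-minibatch means BN would compute with size n."""
    means = [statistics.fmean(data[i:i + n]) for i in range(0, len(data), n)]
    return statistics.pstdev(means)

small_n = batch_mean_std(activations, 8)     # noisy statistics
large_n = batch_mean_std(activations, 256)   # much more stable statistics
```

Since the noise level of the statistics enters the loss itself, changing the per-worker n silently changes the objective, which is why n is best treated as a BN hyper-parameter.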
# 3. Subtleties and Pitfalls of Distributed SGD
In practice a distributed implementation has many subtleties. Many common implementation errors change the definitions of hyper-parameters, leading to models that train but whose error may be higher than expected, and such issues can be difficult to discover. While the remarks below are straightforward, they are important to consider explicitly to faithfully implement the underlying solver. | 1706.02677#18 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02633 | 19 | # 4 LEARNING TO GENERATE REALISTIC SEQUENCES
To demonstrate the model's ability to generate "realistic-looking" sequences in controlled environments, we consider several experiments on synthetic data. In the experiments that follow, unless otherwise specified, the synthetic data consists of sequences of length 30. We focus on the non-conditional model RGAN in this section.
4.1 SINE WAVES
The quality of generated sine waves is easily confirmed by visual inspection, but by varying the amplitudes and frequencies of the real data, we can create a dataset with nonlinear variations. We generate waves with frequencies in [1.0, 5.0], amplitudes in [0.1, 0.9], and random phases between [−π, π]. The left of Figure 2a shows examples of these signals, both real and generated (although they are hard to distinguish).
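The sine-wave dataset just described is easy to reproduce. A minimal sampler under the stated ranges (sequence length 30, frequency in [1.0, 5.0], amplitude in [0.1, 0.9], phase in [−π, π]); the time parameterisation t/seq_len is an assumption of this sketch:

```python
import math
import random

def sample_sine_wave(seq_len=30, rng=random):
    """One sequence: A * sin(2*pi*f*t/seq_len + phi) for t = 0..seq_len-1."""
    f = rng.uniform(1.0, 5.0)             # frequency in [1.0, 5.0]
    A = rng.uniform(0.1, 0.9)             # amplitude in [0.1, 0.9]
    phi = rng.uniform(-math.pi, math.pi)  # random phase in [-pi, pi]
    return [A * math.sin(2 * math.pi * f * t / seq_len + phi)
            for t in range(seq_len)]

random.seed(0)
dataset = [sample_sine_wave() for _ in range(100)]
```

Because frequency, amplitude, and phase are sampled independently, the dataset mixes these factors nonlinearly, which is what makes it a useful first test of the generator.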
We found that, despite the absence of constraints to enforce semantics in the latent space (as in (Chen et al., 2016)), we could alter the frequency and phase of generated samples by varying the latent dimensions, although the representation was not "disentangled", and one dimension of the latent space influenced multiple aspects of the signal.
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
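The sine-wave toy dataset described in the chunk above (frequencies in [1.0, 5.0], amplitudes in [0.1, 0.9], phases in [−π, π]) can be sampled with a few lines of numpy; the function name and the time grid are our choices, not the paper's code:

```python
import numpy as np

def sample_sine_waves(n_samples, seq_len=30, seed=0):
    """Toy data of section 4.1: frequency in [1.0, 5.0], amplitude in
    [0.1, 0.9], phase in [-pi, pi], each drawn uniformly per sequence."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, seq_len)                 # time grid (our choice)
    freq = rng.uniform(1.0, 5.0, size=(n_samples, 1))
    amp = rng.uniform(0.1, 0.9, size=(n_samples, 1))
    phase = rng.uniform(-np.pi, np.pi, size=(n_samples, 1))
    return amp * np.sin(2.0 * np.pi * freq * t + phase)

waves = sample_sine_waves(4)
print(waves.shape)  # (4, 30)
```

Each row is one sequence of length 30, matching the sequence length used throughout the synthetic experiments.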
1706.02677 | 19 | Weight decay. Weight decay is actually the outcome of the gradient of an L2-regularization term in the loss function. More formally, the per-sample loss in (1) can be written as ℓ(x, w) = (λ/2)‖w‖² + ε(x, w). Here (λ/2)‖w‖² is the sample-independent L2 regularization on the weights and ε(x, w) is a sample-dependent term such as the cross-entropy loss. The SGD update in (2) can be written as:
w_{t+1} = w_t − ηλw_t − η (1/n) Σ_{x∈B} ∇ε(x, w_t). (8)
In practice, usually only the sample-dependent term (1/n) Σ_{x∈B} ∇ε(x, w_t) is computed by backprop; the term λw_t is computed separately and added to the aggregated gradients
contributed by ε(x, wt). If there is no weight decay term, there are many equivalent ways of scaling the learning rate, including scaling the term ε(x, wt). However, as can be seen from (8), in general this is not the case. We summarize these observations in the following remark: | 1706.02677#19 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
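The weight-decay discussion in the chunk above (eq. 8 and Remark 1) can be checked numerically. A minimal numpy sketch, with stand-in gradients and values of our choosing: the decay term λw is added outside backprop, and scaling the sample-dependent loss by k is not equivalent to scaling the learning rate by k precisely because the decay term is not scaled:

```python
import numpy as np

def sgd_step(w, per_sample_grads, lr, weight_decay):
    """One SGD step in the form of eq. (8): the decay term lambda*w is added
    to the averaged backprop gradient rather than computed by backprop."""
    g = per_sample_grads.mean(axis=0) + weight_decay * w
    return w - lr * g

w = np.array([1.0, -2.0])
grads = np.array([[0.2, 0.1], [0.4, -0.1]])   # stand-in per-sample grads of eps

# Remark 1: scaling the sample-dependent loss eps by k scales only its
# gradient, so it is NOT equivalent to scaling the learning rate by k.
k = 2.0
w_scaled_loss = sgd_step(w, k * grads, lr=0.1, weight_decay=0.01)
w_scaled_lr = sgd_step(w, grads, lr=k * 0.1, weight_decay=0.01)
print(np.allclose(w_scaled_loss, w_scaled_lr))  # False: decay terms differ
```

With `weight_decay=0.0` the two updates coincide, which is exactly the "no weight decay term" case noted in the text.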
1706.02515 | 20 | Stable and Attracting Fixed Points for Unnormalized Weights. A normalized weight vector w cannot be ensured during learning. For SELU parameters α = α01 and λ = λ01, we show in the next theorem that if (ω, τ) is close to (0, 1), then g still has an attracting and stable fixed point that is close to (0, 1). Thus, in the general case there still exists a stable fixed point which, however, depends on (ω, τ). If we restrict (μ, ν, ω, τ) to certain intervals, then we can show that (μ, ν) is mapped to the respective intervals. Next we present the central theorem of this paper, from which follows that SELU networks are self-normalizing under mild conditions on the weights. Theorem 1 (Stable and Attracting Fixed Points). We assume α = α01 and λ = λ01. We restrict the range of the variables to the following intervals μ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.95, 1.1], that define the | 1706.02515#20 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
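The fixed-point behaviour stated in the theorem above can be illustrated by Monte-Carlo iteration of the mean/variance map for normalized weights (ω = 0, τ = 1). This is a numerical sketch, not the paper's closed-form map g; the starting values and sample size are our choices:

```python
import numpy as np

# SELU constants alpha_01 and lambda_01 (values from the paper)
ALPHA, LAM = 1.6732632423543772, 1.0507009873554805

def selu(z):
    return LAM * np.where(z > 0.0, z, ALPHA * np.expm1(z))

# Monte-Carlo iteration of the mean/variance map for normalized weights
# (omega = 0, tau = 1): pre-activations are approximately N(mu*omega, nu*tau).
rng = np.random.default_rng(0)
mu, nu = 0.1, 1.4                 # start inside the stated (mu, nu)-domain
omega, tau = 0.0, 1.0
for _ in range(20):
    z = rng.normal(mu * omega, np.sqrt(nu * tau), size=1_000_000)
    a = selu(z)
    mu, nu = float(a.mean()), float(a.var())
print(mu, nu)  # approaches the fixed point (0, 1) up to Monte-Carlo noise
```

After a few iterations the activation statistics settle near (0, 1), matching the stable fixed point the theorem guarantees for ω = 0 and τ = 1.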
1706.02633 | 20 | At this point, we tried to train a recurrent version of the Variational Autoencoder (VAE) (Kingma & Welling, 2013) with the goal of comparing its performance with the RGAN. We tried the implementation proposed in (Fabius & van Amersfoort, 2014), which is arguably the most straightforward solution to implement a Recurrent Variational Autoencoder (RVAE). It consists of replacing the encoder and decoder of a VAE with RNNs, and then using the last hidden state of the encoder RNN as the encoded representation of the input sequence. After performing the reparametrization trick, the resulting encoded representation is used to initialize the hidden state of the decoder RNN. Since in this simple dataset all sequences are of the same length, we also tried an alternative approach in which the encoding of the input sequence is computed as the concatenation of all the hidden states of the encoder RNN. Using these architectures, we were only capable of generating sine waves with inconsistent amplitudes and frequencies, of a quality clearly inferior to the ones produced by the RGAN. The source code to reproduce these experiments is included in the git repository mentioned before. We believe that this approach needs further research, especially for the task of generating | 1706.02633#20 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
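The RVAE architecture described in the chunk above (encoder RNN's last hidden state, reparametrization trick, decoder RNN initialised from the latent code) can be sketched as a single forward pass. This is an illustration with random stand-in weights and plain tanh RNN cells, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, H, Z = 30, 1, 16, 8        # seq length, input dim, hidden, latent size
p = lambda *shape: rng.normal(0.0, 0.1, shape)   # random stand-in weights

# Encoder RNN: keep only the last hidden state as the sequence encoding.
Wx_e, Wh_e, b_e = p(H, D), p(H, H), p(H)
x = rng.normal(size=(T, D))      # one input sequence
h = np.zeros(H)
for x_t in x:
    h = np.tanh(Wx_e @ x_t + Wh_e @ h + b_e)

# Reparametrization trick on the encoding.
W_mu, W_ls = p(Z, H), p(Z, H)
mu, log_sigma = W_mu @ h, W_ls @ h
z = mu + np.exp(log_sigma) * rng.normal(size=Z)

# Decoder RNN: hidden state initialised from the latent code z.
W_z, Wx_d, Wh_d, b_d, W_out = p(H, Z), p(H, D), p(H, H), p(H), p(D, H)
h, prev = np.tanh(W_z @ z), np.zeros(D)
recon = []
for _ in range(T):
    h = np.tanh(Wx_d @ prev + Wh_d @ h + b_d)
    prev = W_out @ h
    recon.append(prev)
print(np.stack(recon).shape)  # (30, 1)
```

Training (reconstruction loss plus KL term) is omitted; the sketch only shows how information flows from the input sequence through the latent code to a generated sequence of the same length.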
1706.02677 | 20 | Remark 1: Scaling the cross-entropy loss is not equivalent to scaling the learning rate. Momentum correction. Momentum SGD is a commonly adopted modification to the vanilla SGD in (2). A reference implementation of momentum SGD has the following form:
u_{t+1} = m u_t + (1/n) Σ_{x∈B} ∇ℓ(x, w_t), w_{t+1} = w_t − η u_{t+1}. (9) Here m is the momentum decay factor and u is the update tensor. A popular variant absorbs the learning rate η into the update tensor. Substituting v_t for η u_t in (9) yields:
v_{t+1} = m v_t + η (1/n) Σ_{x∈B} ∇ℓ(x, w_t), w_{t+1} = w_t − v_{t+1}. (10) | 1706.02677#20 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
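The two momentum formulations (9) and (10) in the chunk above can be compared numerically. A short sketch with stand-in gradients: for a fixed learning rate, the variant that absorbs η into the update tensor (v = η·u) produces identical iterates to the reference form:

```python
import numpy as np

rng = np.random.default_rng(0)
grads = [rng.normal(size=3) for _ in range(5)]   # stand-in minibatch gradients
m, lr = 0.9, 0.1                                 # momentum and learning rate

# Reference momentum SGD, eq. (9): u accumulates gradients only.
w_u, u = np.zeros(3), np.zeros(3)
for g in grads:
    u = m * u + g
    w_u = w_u - lr * u

# Variant of eq. (10): v = lr * u absorbs the learning rate.
w_v, v = np.zeros(3), np.zeros(3)
for g in grads:
    v = m * v + lr * g
    w_v = w_v - v

print(np.allclose(w_u, w_v))  # True: the two coincide for a fixed lr
```

The equivalence breaks as soon as the learning rate changes between steps, which is what motivates the momentum correction discussed next in the paper.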
1706.02515 | 21 | 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.95, 1.1], that define the functions' domain Ω. For ω = 0 and τ = 1, the mapping Eq. (3) has the stable fixed point (μ, ν) = (0, 1), whereas for other ω and τ the mapping Eq. (3) has a stable and attracting fixed point depending on (ω, τ) in the (μ, ν)-domain: μ ∈ [−0.03106, 0.06773] and ν ∈ [0.80009, 1.48617]. All points within the (μ, ν)-domain converge when iteratively applying the mapping Eq. (3) to this fixed point. | 1706.02515#21 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 21 | Accuracy: Real 0.991 ± 0.001; TSTR 0.975 ± 0.002; TRTS 0.988 ± 0.005
Table 1: Scores obtained by a convolutional neural network when: a) trained and tested on real data, b) trained on synthetic and tested on real data, and c) trained on real and tested on synthetic. In all cases, early stopping and (in the case of the synthetic data) epoch selection were determined using a validation set.
labeled data that will be presented later in this paper, which we also failed to accomplish with the RVAE so far.
4.2 SMOOTH FUNCTIONS | 1706.02633#21 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
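The evaluation protocol behind Table 1 (train on synthetic, test on real, and vice versa) can be sketched in a few lines. The nearest-centroid classifier and the Gaussian toy data below are our stand-ins for the paper's CNN and MNIST data; only the protocol itself is from the paper:

```python
import numpy as np

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

def tstr_accuracy(synthetic, real_test):
    """Train on Synthetic, Test on Real: fit on the synthetic labelled set
    and score on a held-out real test set, as in the TSTR row of Table 1."""
    model = fit_centroids(*synthetic)
    X_te, y_te = real_test
    return float((predict_centroids(model, X_te) == y_te).mean())

# Gaussian toy stand-ins for the "synthetic" and "real" labelled datasets.
rng = np.random.default_rng(0)
def make_set():
    X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
    return X, np.repeat([0, 1], 50)

acc = tstr_accuracy(synthetic=make_set(), real_test=make_set())
print(acc)  # near 1.0: the synthetic set preserved the class structure
```

Swapping the roles of the two sets gives the TRTS score; a large gap between the Real and TSTR rows would indicate the generated data is not useful for supervised training.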
1706.02677 | 21 | For a fixed η, the two are equivalent. However, we note that while u only depends on the gradients and is independent of η, v is entangled with η. When η changes, to maintain equivalence with the reference variant in (9), the update for v should be: v_{t+1} = m (η_{t+1}/η_t) v_t + η_{t+1} (1/n) Σ_{x∈B} ∇ℓ(x, w_t). We refer to the factor η_{t+1}/η_t as the momentum correction. We found that this is especially important for stabilizing training when η_{t+1} ≫ η_t, otherwise the history term v_t is too small which leads to instability (for η_{t+1} < η_t, momentum correction is less critical). This leads to our second remark: Remark 2: Apply momentum correction after changing learning rate if using (10). Gradient aggregation. For k workers each with a per-worker minibatch of size n, following (4), gradient aggregation must be performed over the entire set of kn examples according to (1/kn) Σ_j Σ_{x∈B_j} ∇ℓ(x, w_t). Loss layers are typically implemented to compute an average loss over their local input, which amounts to computing a per-worker loss of (1/n) Σ_{x∈B_j} ℓ(x, w_t). Given this, correct aggregation requires averaging the k gradients in order to recover the missing 1/k factor. However, standard communication primitives like allreduce [11] perform summing, not averaging. Therefore, it is more efficient to absorb the 1/k scaling into the loss, in which case only the loss's gradient with respect to its input needs to be scaled, removing the need to scale the entire gradient vector. We summarize this as follows: | 1706.02677#21 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
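The momentum correction described in the chunk above can be verified against the reference update (9). A sketch with stand-in gradients and a mid-training learning-rate change: scaling the history term v by the factor lr_new/lr_old keeps the variant exactly equivalent to the reference:

```python
import numpy as np

rng = np.random.default_rng(1)
grads = [rng.normal(size=3) for _ in range(6)]   # stand-in minibatch gradients
lrs = [0.1, 0.1, 0.1, 0.02, 0.02, 0.02]          # a learning-rate change
m = 0.9

# Reference update, eq. (9).
w_ref, u = np.zeros(3), np.zeros(3)
for g, lr in zip(grads, lrs):
    u = m * u + g
    w_ref = w_ref - lr * u

# Variant (10) with the momentum correction factor lr_new / lr_old on v.
w_cor, v, prev_lr = np.zeros(3), np.zeros(3), lrs[0]
for g, lr in zip(grads, lrs):
    v = m * (lr / prev_lr) * v + lr * g
    w_cor = w_cor - v
    prev_lr = lr

print(np.allclose(w_ref, w_cor))  # True: the correction keeps v = lr * u
```

Dropping the `(lr / prev_lr)` factor makes the two trajectories diverge from the step at which the learning rate changes, which is the instability the remark warns about.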
1706.02515 | 22 | Proof. We provide a proof sketch (see detailed proof in Appendix Section A3). With the Banach fixed point theorem we show that there exists a unique attracting and stable fixed point. To this end, we have to prove that a) g is a contraction mapping and b) that the mapping stays in the domain, that is, g(Ω) ⊆ Ω. The spectral norm of the Jacobian of g can be obtained via an explicit formula for the largest singular value for a 2 × 2 matrix. g is a contraction mapping if its spectral norm is smaller than 1. We perform a computer-assisted proof to evaluate the largest singular value on a fine grid and ensure the precision of the computer evaluation by an error propagation analysis of the implemented algorithms on the according hardware. Singular values between grid points are upper bounded by the mean value theorem. To this end, we bound the derivatives of the formula for the largest singular value with respect to ω, τ, μ, ν. Then we apply the mean value theorem to pairs of points, where one is on the grid and the other is off the grid. This shows that for all values of ω, τ, μ, | 1706.02515#22 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
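The proof sketch above relies on an explicit formula for the largest singular value of a 2 × 2 matrix. That formula follows from the characteristic polynomial of AᵀA and can be checked numerically; the example matrix is our stand-in, not an actual Jacobian of g:

```python
import numpy as np

def sigma_max_2x2(A):
    """Largest singular value of a 2x2 matrix in closed form:
    sigma_max^2 = (||A||_F^2 + sqrt(||A||_F^4 - 4 det(A)^2)) / 2,
    from the characteristic polynomial of A^T A."""
    f2 = float(np.sum(A * A))          # squared Frobenius norm
    d = float(np.linalg.det(A))
    return np.sqrt((f2 + np.sqrt(f2 * f2 - 4.0 * d * d)) / 2.0)

A = np.array([[0.8, 0.1], [0.2, 0.7]])  # stand-in for a Jacobian of g
print(sigma_max_2x2(A))                  # agrees with np.linalg.svd(A)
print(sigma_max_2x2(A) < 1.0)            # spectral norm < 1: contraction
```

A spectral norm below 1 at every grid point is exactly the contraction condition the computer-assisted proof verifies over the domain Ω.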
1706.02633 | 22 | labeled data that will be presented later in this paper, which we also failed to accomplish with the RVAE so far.
4.2 SMOOTH FUNCTIONS
Sine waves are simple signals, easily reproduced by the model. In our ultimate medical application, we wish the model to reproduce complex physiological signals which may not follow simple dynamics. We therefore consider the harder task of learning arbitrary smooth signals. Gaussian processes offer a method to sample values of such smooth functions. We use an RBF kernel to specify a GP with zero-valued mean function. We then draw 30 equally-spaced samples. This amounts to a single draw from a multivariate normal distribution with covariance function given by the RBF kernel evaluated on a grid of equally-spaced points. In doing so, we have specified exactly the probability distribution that generated the true data, which enables us to evaluate generated samples under this distribution. The right of Figure 2a shows examples (real and generated) of this experiment. The main feature of the real and generated time series is that they exhibit smoothness with local correlations, and this is rapidly captured by the RGAN. | 1706.02633#22 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
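The smooth-function dataset described above (a zero-mean GP with RBF covariance on an equally spaced grid, i.e. one multivariate-normal draw per sequence) can be sampled with a few lines of numpy; the lengthscale value and jitter are our choices:

```python
import numpy as np

def sample_gp(n_samples, seq_len=30, lengthscale=0.1, seed=0):
    """Smooth toy sequences: one multivariate-normal draw per sample, with
    covariance given by an RBF kernel evaluated on an equally spaced grid."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, seq_len)
    K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / lengthscale ** 2)
    L = np.linalg.cholesky(K + 1e-6 * np.eye(seq_len))  # jitter for stability
    return rng.normal(size=(n_samples, seq_len)) @ L.T

samples = sample_gp(4)
print(samples.shape)  # (4, 30)
```

Because the data distribution is specified exactly, generated samples can be scored under the corresponding multivariate-normal log-likelihood with covariance K, which is the evaluation the chunk describes.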
1706.02677 | 22 | input, which amounts to computing a per-worker loss of (1/n) Σ_{x∈B_j} ℓ(x, w_t). Given this, correct aggregation requires averaging the k gradients in order to recover the missing 1/k factor. However, standard communication primitives like allreduce [11] perform summing, not averaging. Therefore, it is more efficient to absorb the 1/k scaling into the loss, in which case only the loss's gradient with respect to its input needs to be scaled, removing the need to scale the entire gradient vector. We summarize this as follows: | 1706.02677#22 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
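The 1/k loss-scaling argument in the chunk above can be checked with a toy simulation of a summing allreduce; the worker counts and gradients are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 8                                # workers and per-worker batch size
per_sample = rng.normal(size=(k, n, 3))    # stand-in per-sample gradients

# Correct aggregate: the average over all k*n examples.
target = per_sample.reshape(k * n, 3).mean(axis=0)

# Each worker's loss layer averages over its local n samples; absorbing the
# 1/k factor into the loss lets a summing allreduce recover the average.
per_worker = per_sample.mean(axis=1) / k
allreduced = per_worker.sum(axis=0)        # what allreduce (a sum) returns

print(np.allclose(allreduced, target))  # True
```

Scaling only the loss's gradient with respect to its input by 1/k has the same effect while touching far fewer values than rescaling the whole gradient vector after the fact.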
1706.02515 | 23 | apply the mean value theorem to pairs of points, where one is on the grid and the other is off the grid. This shows that for all values of ω, τ, μ, ν in the domain Ω, the spectral norm of g is smaller than one. Therefore, g is a contraction mapping on the domain Ω. Finally, we show that the mapping g stays in the domain Ω by deriving bounds on μ̃ and ν̃. Hence, the Banach fixed-point theorem holds and there exists a unique fixed point in Ω that is attained. | 1706.02515#23 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
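The fixed-point argument above can be illustrated numerically. The sketch below is our illustration, not the paper's code: layer width, depth, and the starting moments are arbitrary choices. It propagates activations through dense SELU layers whose weights have zero mean and variance 1/n, starting far from the fixed point (µ, ν) = (0, 1):

```python
import math
import random

# SELU constants lambda_01 and alpha_01 from the paper.
LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(x):
    return LAM * x if x > 0 else LAM * ALPHA * (math.exp(x) - 1.0)

def selu_layer(acts, n_out, rng):
    # One dense layer with E(w_i) = 0 and Var(w_i) = 1/n, followed by SELU.
    std = math.sqrt(1.0 / len(acts))
    return [selu(sum(rng.gauss(0.0, std) * a for a in acts))
            for _ in range(n_out)]

rng = random.Random(0)
n = 256
# Start far from the fixed point: mean 0.5, variance 4.
x = [rng.gauss(0.5, 2.0) for _ in range(n)]
for _ in range(8):
    x = selu_layer(x, n, rng)

mu = sum(x) / n
nu = sum((v - mu) ** 2 for v in x) / n
```

After a few layers the empirical moments (mu, nu) drift toward the fixed point (0, 1), consistent with the contraction-mapping argument.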
1706.02633 | 23 | Because we have access to the data distribution, in Figure 3 we show how the average (log) likelihood of a set of generated samples increases under the data distribution during training. This is an imperfect measure, as it is blind to the diversity of the generated samples - the oft-observed mode collapse, or "Helvetica Scenario" (Goodfellow et al., 2014) of GANs - hence we prefer the MMD2 measure (see Figure 3). It is nonetheless encouraging to observe that, although the GAN objective is unaware of the underlying data distribution, the likelihood of the generated samples improves with training.
# 4.3 MNIST AS A TIME SERIES
The MNIST hand-written digit dataset is ubiquitous in machine learning research. Accuracy on MNIST digit classification is high enough to consider the problem "solved", and generating MNIST digits seems an almost trivial task for traditional GANs. However, generating MNIST sequentially is less commonly done (notable examples are PixelRNN (Oord et al., 2016), and the serialisation of MNIST in the long-memory RNN literature (Le et al., 2015)). To serialise MNIST, each 28 × 28 digit forms a 784-dimensional vector, which is a sequence we can aim to generate with the RGAN. This gives the added benefit of producing samples we can easily assess visually. | 1706.02633#23 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
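Both serialisations mentioned here are simple reshapes; a minimal sketch with placeholder pixel values standing in for a real MNIST digit:

```python
# A 28x28 image flattened row-major; placeholder values stand in for pixels.
image_flat = [float(i) for i in range(28 * 28)]

# Serialisation 1: one 784-step sequence of scalar outputs.
seq_univariate = [[p] for p in image_flat]

# Serialisation 2: 28 time steps, each a 28-dimensional output (one row per step).
seq_multivariate = [image_flat[t * 28:(t + 1) * 28] for t in range(28)]
```

The second form is the multivariate sequence used when each image row is emitted per time step.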
1706.02677 | 23 | Remark 3: Normalize the per-worker loss by total minibatch size kn, not per-worker size n. We also note that it may be incorrect to "cancel k" by setting η̂ = η (not kη) and normalizing the loss by 1/n (not 1/kn), which can lead to incorrect weight decay (see Remark 1).
Data shuffling. SGD is typically analyzed as a process that samples data randomly with replacement. In practice, common SGD implementations apply random shuffling of the training set during each SGD epoch, which can give better results [3, 13]. To provide fair comparisons with baselines that use shuffling (e.g., [16]), we ensure the samples in one epoch done by k workers are from a single consistent random shuffling of the training set. To achieve this, for each epoch we use a random shuffling that is partitioned into k parts, each of which is processed by one of the k workers. Failing to correctly implement random shuffling in multiple workers may lead to noticeably different behavior, which may contaminate results and conclusions. In summary:
Remark 4: Use a single random shuffling of the training data (per epoch) that is divided amongst all k workers. | 1706.02677#23 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
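The linear scaling rule and the per-epoch partitioned shuffle of Remark 4 can be sketched as follows; this is an illustration with assumed values (base minibatch 256, base learning rate 0.1), and the helper names are ours, not the paper's:

```python
import random

def scaled_lr(base_lr, minibatch_size, base_minibatch=256):
    # Linear scaling rule: when the minibatch size is multiplied by k,
    # multiply the learning rate by k.
    return base_lr * minibatch_size / base_minibatch

def epoch_partitions(num_samples, k, epoch_seed):
    # Remark 4: one consistent random shuffle per epoch, split among k workers.
    indices = list(range(num_samples))
    random.Random(epoch_seed).shuffle(indices)
    per_worker = num_samples // k
    return [indices[w * per_worker:(w + 1) * per_worker] for w in range(k)]

parts = epoch_partitions(num_samples=4096, k=8, epoch_seed=17)
lr = scaled_lr(0.1, 8192)  # 0.1 * 8192 / 256 = 3.2
```

Each worker processes a disjoint slice of one shared shuffle, so no sample is seen twice in an epoch.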
1706.02515 | 24 | Consequently, feed-forward neural networks with many units in each layer and with the SELU activation function are self-normalizing (see definition 1), which readily follows from Theorem 1. To give an intuition, the main property of SELUs is that they damp the variance for negative net inputs and increase the variance for positive net inputs. The variance damping is stronger if net inputs are further away from zero while the variance increase is stronger if net inputs are close to zero. Thus, for large variance of the activations in the lower layer the damping effect is dominant and the variance decreases in the higher layer. Vice versa, for small variance the variance increase is dominant and the variance increases in the higher layer.
However, we cannot guarantee that mean and variance remain in the domain Ω. Therefore, we next treat the case where (µ, ν) are outside Ω. It is especially crucial to consider ν because this variable has much stronger influence than µ. Mapping ν across layers to a high value corresponds to an
| 1706.02515#24 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
1706.02633 | 24 | To make the task more tractable and to explore the RGANâs ability to generate multivariate sequences, we treat each 28x28 image as a sequence of 28, 28-dimensional outputs. We show two types of
[Figure 2 panels: real MNIST digits; good and bad RGAN samples; sine waves; smooth signals]
(a) Examples of real (coloured, top) and generated (black, lower two lines) samples.
(b) Left top: real MNIST digits. Left bottom: unrealistic digits generated at epoch 27. Right: digits with minimal distortion generated at epoch 100.
Figure 2: RGAN is capable of generating realistic-looking examples.
Figure 3: Trace of generator (dotted), discriminator (solid) loss, MMD2 score and log likelihood of generated samples under the data distribution during training for RGAN generating smooth sequences (output in Figure 2a.)
| 1706.02633#24 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
1706.02677 | 24 | Remark 4: Use a single random shuffling of the training data (per epoch) that is divided amongst all k workers.
# 4. Communication
In order to scale beyond the 8 GPUs in a single Big Basin server [24], gradient aggregation has to span across servers on a network. To allow for near perfect linear scaling, the aggregation must be performed in parallel with backprop. This is possible because there is no data dependency between gradients across layers. Therefore, as soon as the gradient for a layer is computed, it is aggregated across workers, while gradient computation for the next layer continues (as discussed in [5]). We give full details next.
# 4.1. Gradient Aggregation | 1706.02677#24 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
1706.02515 | 25 | exploding gradient, since the Jacobian of the activation of high layers with respect to activations in lower layers has large singular values. Analogously, mapping ν across layers to a low value corresponds to a vanishing gradient. Bounding the mapping of ν from above and below would avoid both exploding and vanishing gradients. Theorem 2 states that the variance of neuron activations of SNNs is bounded from above, and therefore ensures that SNNs learn robustly and do not suffer from exploding gradients. Theorem 2 (Decreasing ν). For λ = λ01, α = α01 and the domain Ω+: −1 ≤ µ ≤ 1, −0.1 ≤ ω ≤ 0.1, 3 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25, we have for the mapping of the variance ν̃(µ, ω, ν, τ, λ, α) given in Eq. (5): ν̃(µ, ω, ν, τ, λ01, α01) < ν. The proof can be found in the Appendix Section A3. Thus, when mapped across many layers, the variance in the interval [3, 16] is mapped to a value below 3. Consequently, all fixed points (µ, ν) of the mapping g | 1706.02515#25 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
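Theorem 2's damping can be checked by Monte Carlo; the sketch below is our illustration for the centred case µω = 0, τ = 1 (sampling stands in for the paper's analytic integrals, and `mapped_variance` is a name we introduce):

```python
import math
import random

# SELU constants lambda_01 and alpha_01 from the paper.
LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(x):
    return LAM * x if x > 0 else LAM * ALPHA * (math.exp(x) - 1.0)

def mapped_variance(nu, n_samples=200_000, seed=1):
    # Variance of selu(z) for a centred net input z ~ N(0, nu).
    rng = random.Random(seed)
    vals = [selu(rng.gauss(0.0, math.sqrt(nu))) for _ in range(n_samples)]
    m = sum(vals) / n_samples
    return sum((v - m) ** 2 for v in vals) / n_samples

# Variances near the top of the admissible interval are damped.
damped_16 = mapped_variance(16.0)
damped_8 = mapped_variance(8.0)
```

Both outputs come out strictly below their inputs, matching the damping direction claimed by the theorem.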
1706.02633 | 25 |
Figure 4: Back-projecting training examples into the latent space and linearly interpolating them produces smooth variation in the sample space. Top plot shows sample-space distance from top (green, dashed) sample to bottom (orange, dotted). Distance measure is RBF kernel with bandwidth chosen as median pairwise distance between training samples. The original training examples are shown in dotted lines in the bottom and second-from-top plots.
experiment with this dataset. In the first one, we train an RGAN to generate MNIST digits in this sequential manner. Figure 2b demonstrates how realistic the generated digits appear. | 1706.02633#25 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
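Once two training examples have been back-projected to latent codes, the interpolation of Figure 4 is simply linear in the latent space; a minimal sketch with made-up 5-dimensional codes:

```python
import random

def interpolate(z0, z1, steps):
    # Linear interpolation between two latent codes, endpoints included.
    return [[(1 - t) * a + t * b for a, b in zip(z0, z1)]
            for t in [i / (steps - 1) for i in range(steps)]]

rng = random.Random(0)
z0 = [rng.gauss(0.0, 1.0) for _ in range(5)]
z1 = [rng.gauss(0.0, 1.0) for _ in range(5)]
path = interpolate(z0, z1, 7)
```

Each element of `path` would be fed to the generator to render one interpolated sample.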
1706.02677 | 25 | # 4.1. Gradient Aggregation
For every gradient, aggregation is done using an allreduce operation (similar to the MPI collective operation MPI Allreduce [11]). Before allreduce starts every GPU has its locally computed gradients and after allreduce completes every GPU has the sum of all k gradients. As the number of parameters grows and compute performance of GPUs increases, it becomes harder to hide the cost of aggregation in the backprop phase. Training techniques to overcome these effects are beyond the scope of this work (e.g., quantized gradients [18], Block-Momentum SGD [6]). However, at the scale of this work, collective communication was not a bottleneck, as we were able to achieve near-linear SGD scaling by using an optimized allreduce implementation.
Our implementation of allreduce consists of three phases for communication within and across servers: (1) buffers from the 8 GPUs within a server are summed into a single buffer for each server, (2) the results buffers are shared and summed across all servers, and finally (3) the results are broadcast onto each GPU. For the local reduction and broadcast in phases (1) and (3) we used NVIDIA Collective Communication Library (NCCL) for buffers of size 256 KB or more and a simple implementation consisting of a | 1706.02677#25 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
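The data flow of the three phases can be mimicked with plain lists; this is a toy simulation that reproduces only the reduction arithmetic, not NCCL or any network transport, and the server count and buffer sizes are arbitrary:

```python
def allreduce(per_gpu_grads, gpus_per_server=8):
    """Simulate three-phase allreduce over equal-length gradient buffers."""
    # Phase 1: intra-server reduction into one buffer per server.
    groups = [per_gpu_grads[i:i + gpus_per_server]
              for i in range(0, len(per_gpu_grads), gpus_per_server)]
    server_bufs = [[sum(col) for col in zip(*g)] for g in groups]
    # Phase 2: share and sum the result buffers across all servers.
    total = [sum(col) for col in zip(*server_bufs)]
    # Phase 3: broadcast the summed gradients back onto every GPU.
    return [list(total) for _ in per_gpu_grads]

# 16 GPUs (2 servers of 8), 2-element gradient buffers.
grads = [[float(i), 1.0] for i in range(16)]
reduced = allreduce(grads)
```

After the call, every GPU holds the same summed gradient, which is the post-condition of allreduce.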
1706.02515 | 26 | layers, the variance in the interval [3, 16] is mapped to a value below 3. Consequently, all fixed points (µ, ν) of the mapping g (Eq. (3)) have ν < 3. Analogously, Theorem 3 states that the variance of neuron activations of SNNs is bounded from below, and therefore ensures that SNNs do not suffer from vanishing gradients. Theorem 3 (Increasing ν). We consider λ = λ01, α = α01 and the domain Ω−: −0.1 ≤ µ ≤ 0.1, and −0.1 ≤ ω ≤ 0.1. For the domain 0.02 ≤ ν ≤ 0.16 and 0.8 ≤ τ ≤ 1.25 as well as for the domain 0.02 ≤ ν ≤ 0.24 and 0.9 ≤ τ ≤ 1.25, the mapping of the variance ν̃(µ, ω, ν, τ, λ, α) given in Eq. (5) increases: ν̃(µ, ω, ν, τ, λ01, α01) > ν. | 1706.02515#26 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
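Theorem 3's lower-bounding direction can likewise be checked by Monte Carlo; as before, this is our illustration for the centred case µω = 0, τ = 1, with sampling replacing the paper's analytic proof:

```python
import math
import random

# SELU constants lambda_01 and alpha_01 from the paper.
LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(x):
    return LAM * x if x > 0 else LAM * ALPHA * (math.exp(x) - 1.0)

def mapped_variance(nu, n_samples=200_000, seed=2):
    # Variance of selu(z) for a centred net input z ~ N(0, nu).
    rng = random.Random(seed)
    vals = [selu(rng.gauss(0.0, math.sqrt(nu))) for _ in range(n_samples)]
    m = sum(vals) / n_samples
    return sum((v - m) ** 2 for v in vals) / n_samples

# Small variances in the stated domain are pushed upward.
grown_002 = mapped_variance(0.02)
grown_016 = mapped_variance(0.16)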
1706.02633 | 26 | For the second experiment, we downsample the MNIST digits to 14x14 pixels, and consider the first three digits (0, 1, and 2). With this data we train an RCGAN and subsequently perform the TSTR (and TRTS) evaluations explained above, for the task of classifying the digits. That is, for the TSTR evaluation, we generate a synthetic dataset using the GAN, using the real training labels as input. We then train a classifier (a convolutional neural network) on this data, and evaluate its performance on the real held-out test set. Conversely, for TRTS we train a classifier on the real data, and evaluate it on a synthetic test dataset generated by the GAN. Results of this experiment are shown in Table 1. To obtain error bars on the accuracies reported, we trained the RCGAN five times with different random initialisations. The TSTR result shows that the RCGAN generates synthetic datasets realistic enough to train a classifier which then achieves high performance on real test data. The TRTS result shows that the synthetic examples in the test set match their labels to a high degree, given the accuracy of the classifier trained on real data is very high.
# 5 LEARNING TO GENERATE REALISTIC ICU DATA | 1706.02633#26 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
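The TSTR/TRTS protocol itself is model-agnostic. The sketch below substitutes a nearest-centroid classifier and synthetic Gaussian data for the RCGAN and the CNN, purely to make the two evaluation directions concrete; all names and numbers here are illustrative:

```python
import random

def make_data(n, shift, rng):
    # Toy labelled data: class c is drawn around mean c + shift in every dimension.
    data = []
    for _ in range(n):
        c = rng.randrange(3)
        x = [rng.gauss(c + shift, 0.3) for _ in range(5)]
        data.append((x, c))
    return data

def train_centroids(data):
    # "Training": per-class mean vectors.
    sums, counts = {}, {}
    for x, c in data:
        counts[c] = counts.get(c, 0) + 1
        s = sums.setdefault(c, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def accuracy(centroids, data):
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    hits = sum(1 for x, c in data
               if min(centroids, key=lambda k: d2(x, centroids[k])) == c)
    return hits / len(data)

rng = random.Random(0)
real_train = make_data(500, 0.0, rng)
real_test = make_data(500, 0.0, rng)
synthetic_train = make_data(500, 0.05, rng)  # stand-in for RCGAN output
synthetic_test = make_data(500, 0.05, rng)

tstr = accuracy(train_centroids(synthetic_train), real_test)  # train synthetic, test real
trts = accuracy(train_centroids(real_train), synthetic_test)  # train real, test synthetic
```

A slightly shifted synthetic distribution (the 0.05 offset) still yields high TSTR and TRTS accuracy, mirroring the "minor degradation" observation in the abstract.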
1706.02515 | 27 | The proof can be found in the Appendix. All fixed points (µ, ν) of the mapping g (Eq. (3)) ensure for 0.8 ≤ τ that ν > 0.16 and for 0.9 ≤ τ that ν > 0.24. Consequently, the variance mapping Eq. (5) ensures a lower bound on the variance ν. Therefore SELU networks control the variance of the activations and push it into an interval, whereafter the mean and variance move toward the fixed point. Thus, SELU networks are steadily normalizing the variance and subsequently normalizing the mean, too. In all experiments, we observed that self-normalizing neural networks push the mean and variance of activations into the domain Ω.
Initialization. Since SNNs have a fixed point at zero mean and unit variance for normalized weights with ω = Σ_{i=1}^n w_i = 0 and τ = Σ_{i=1}^n w_i^2 = 1 (see above), we initialize SNNs such that these constraints are fulfilled in expectation. We draw the weights from a Gaussian distribution with E(w_i) = 0 and variance Var(w_i) = 1/n. Uniform and truncated Gaussian distributions with these moments led to networks with similar behavior. The "MSRA initialization" is similar since it uses zero mean and variance 2/n to initialize the weights [17]. The additional factor 2 counters the effect of rectified linear units. | 1706.02515#27 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
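The initialization scheme in this record (zero-mean Gaussian weights with variance 1/n, paired with SELU activations) can be checked numerically with a small sketch. The constants λ ≈ 1.0507 and α ≈ 1.6733 are the paper's published fixed-point values; the depth, width, and batch size below are arbitrary illustration choices.

```python
import numpy as np

# SELU fixed-point constants from the paper (lambda_01, alpha_01).
LAMBDA, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(x):
    """Scaled exponential linear unit."""
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

def snn_forward(x, depth=32, width=256, seed=0):
    """Propagate activations through `depth` dense SELU layers whose weights
    are drawn from N(0, 1/n), so omega ~ 0 and tau ~ 1 in expectation."""
    rng = np.random.default_rng(seed)
    for _ in range(depth):
        n = x.shape[1]
        W = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, width))
        x = selu(x @ W)
    return x

acts = snn_forward(np.random.default_rng(1).normal(size=(512, 256)))
# Empirically, the mean stays near 0 and the variance near 1 across depth.
```

Even after 32 layers with no explicit normalization, the activation statistics remain close to (0, 1), which is the self-normalizing behavior the theorem describes.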
1706.02633 | 27 | # 5 LEARNING TO GENERATE REALISTIC ICU DATA
One of the main goals of this paper is to build a model capable of generating realistic medical datasets, and specifically ICU data. For this purpose, we based our work on the recently-released Philips eICU database1. This dataset was collected by the critical care telehealth program provided by Philips. It contains around 200,000 patients from 208 care units across the US, with a total of 224,026,866 entries divided into 33 tables.
From this data, we focus on generating the four most frequently recorded, regularly-sampled variables measured by bedside monitors: oxygen saturation measured by pulse oximeter (SpO2), heart rate (HR), respiratory rate (RR) and mean arterial pressure (MAP). In the eICU dataset, these variables are measured every five minutes. To reduce the length of the sequences we consider, we downsample to one measurement every fifteen minutes, taking the median value in each window. This greatly speeds up the training of our LSTM-based GAN while still capturing the relevant dynamics of the data. | 1706.02633#27 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
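The downsampling step this record describes (5-minute samples reduced to one value per 15 minutes by taking the median of each window) can be sketched directly with NumPy. The array shapes are illustrative, not taken from the paper.

```python
import numpy as np

def downsample_median(x, factor=3):
    """Reduce a (timesteps, channels) series by taking the median of each
    non-overlapping window of `factor` consecutive samples.
    With 5-minute samples and factor=3, this yields one value per 15 minutes."""
    t = (x.shape[0] // factor) * factor          # drop any ragged tail
    return np.median(x[:t].reshape(-1, factor, x.shape[1]), axis=1)

# 4 hours of 5-minute samples for 4 channels (SpO2, HR, RR, MAP): 48 x 4.
series = np.arange(48 * 4, dtype=float).reshape(48, 4)
out = downsample_median(series)   # 16 rows, one per 15-minute window
```

Using the median rather than the mean makes the downsampled value robust to single-sample monitor artifacts within each window.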
1706.02677 | 27 | For interserver allreduce, we implemented two of the best algorithms for bandwidth-limited scenarios: the recursive halving and doubling algorithm [30, 37] and the bucket algorithm (also known as the ring algorithm) [2]. For both, each server sends and receives 2·(p−1)/p·b bytes of data, where b is the buffer size in bytes and p is the number of servers. While the halving/doubling algorithm consists of 2·log2(p) communication steps, the ring algorithm consists of 2·(p − 1) steps. This generally makes the halving/doubling algorithm faster in latency-limited scenarios (i.e., for small buffer sizes and/or large server counts). In practice, we found the halving/doubling algorithm to perform much better than the ring algorithm for buffer sizes up to a million elements (and even higher on large server counts). On 32 servers (256 GPUs), using halving/doubling led to a speedup of 3× over the ring algorithm. | 1706.02677#27 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430 | [
{
"id": "1606.04838"
},
{
"id": "1510.08560"
},
{
"id": "1609.08144"
},
{
"id": "1609.03528"
},
{
"id": "1604.00981"
},
{
"id": "1703.06870"
}
] |
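The communication costs quoted in this record reduce to a simple cost model: both algorithms move the same 2·(p−1)/p·b bytes per server, but halving/doubling needs only 2·log2(p) latency steps versus 2·(p−1) for the ring. A small sketch of that model (the functions and names are mine, not from the paper's code):

```python
import math

def allreduce_steps(p, algo):
    """Number of communication steps for an allreduce over p servers
    (p a power of two for halving/doubling)."""
    if algo == "halving_doubling":
        return 2 * int(math.log2(p))
    return 2 * (p - 1)                      # ring / bucket algorithm

def bytes_per_server(p, b):
    """Both algorithms are bandwidth-optimal: each server sends and
    receives 2*(p-1)/p * b bytes for a b-byte buffer."""
    return 2 * (p - 1) / p * b

p = 32  # 32 servers (256 GPUs in the paper's setup)
steps_hd = allreduce_steps(p, "halving_doubling")   # 10 steps
steps_ring = allreduce_steps(p, "ring")             # 62 steps
```

With equal bandwidth cost, the 10-vs-62 step count is why halving/doubling wins in the latency-limited regime of small buffers and many servers.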
1706.02515 | 28 | New Dropout Technique. Standard dropout randomly sets an activation x to zero with probability 1 − q for 0 < q < 1. In order to preserve the mean, the activations are scaled by 1/q during training. If x has mean E(x) = μ and variance Var(x) = ν, and the dropout variable d follows a binomial distribution B(1, q), then the mean E(1/q · d · x) = μ is kept. Dropout fits well to rectified linear units, since zero is in the low variance region and corresponds to the default value. For scaled exponential linear units, the default and low variance value is lim_{x→−∞} selu(x) = −λα = α′. Therefore, we propose "alpha dropout", that randomly sets inputs to α′. The new mean and new variance is E(xd + α′(1 − d)) = qμ + (1 − q)α′, and Var(xd + α′(1 − d)) = q((1 − q)(α′ − μ)² + ν). We aim at keeping mean and variance to their original values after "alpha dropout", in order to ensure the self-normalizing property even for "alpha dropout". | 1706.02515#28 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907 | [
{
"id": "1504.01716"
},
{
"id": "1511.07289"
},
{
"id": "1605.00982"
},
{
"id": "1607.06450"
},
{
"id": "1507.06947"
}
] |
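The alpha-dropout moments in this record can be verified in closed form: after setting inputs to α′ = −λα with probability 1 − q, the mean and variance become qμ + (1−q)α′ and q((1−q)(α′−μ)² + ν), and an affine transform a·x + b chosen from those two formulas restores the original (μ, ν). The sketch below checks this for μ = 0, ν = 1; the choice q = 0.9 is an arbitrary illustration, and the affine-correction form is my derivation from the stated moments, not quoted from the paper.

```python
# Closed-form check that an affine correction after alpha dropout
# restores the target mean and variance (here mu=0, nu=1).
LAMBDA, ALPHA = 1.0507009873554805, 1.6732632423543772
ALPHA_PRIME = -LAMBDA * ALPHA          # lim_{x -> -inf} selu(x)

def corrected_moments(q, mu=0.0, nu=1.0):
    # Moments after dropping inputs to ALPHA_PRIME with probability 1 - q:
    mean_d = q * mu + (1 - q) * ALPHA_PRIME
    var_d = q * ((1 - q) * (ALPHA_PRIME - mu) ** 2 + nu)
    # Affine map a*x + b that sends (mean_d, var_d) back to (mu, nu):
    a = (nu / var_d) ** 0.5
    b = mu - a * mean_d
    return a * mean_d + b, a ** 2 * var_d   # corrected mean, variance

m, v = corrected_moments(q=0.9)
```

By construction a·mean_d + b = μ and a²·var_d = ν, so the dropped activations re-enter the self-normalizing domain exactly.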
1706.02633 | 28 | In the following experiments, we consider the beginning of the patient's stay in the ICU, considering this a critical time in their care. We focus on the first 4 hours of their stay, which results in 16 measurements of each variable. While medical data is typically fraught with missing values, in this work we circumvented the issue by discarding patients with missing data (after downsampling). After preprocessing the data this way, we end up with a cohort of 17,693 patients. Most restrictive was the requirement for non-missing MAP values, as these measurements are taken invasively.
[1] https://eicu-crd.mit.edu/ | 1706.02633#28 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | [
{
"id": "1511.06434"
},
{
"id": "1609.04802"
},
{
"id": "1504.00941"
},
{
"id": "1702.01983"
},
{
"id": "1703.04887"
},
{
"id": "1701.06547"
},
{
"id": "1610.05755"
},
{
"id": "1601.06759"
}
] |
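The cohort construction described in this record (keep the first 4 hours, i.e. 16 downsampled measurements per variable, and discard any patient with a missing value) can be sketched as below. The array layout and the use of NaN to mark missingness are my assumptions, not the paper's exact pipeline.

```python
import numpy as np

def build_cohort(data, n_steps=16):
    """data: (patients, timesteps, channels) with NaN marking missing values.
    Keep the first `n_steps` measurements and drop patients with any NaN."""
    window = data[:, :n_steps, :]
    complete = ~np.isnan(window).any(axis=(1, 2))   # fully observed patients
    return window[complete]

# 5 toy patients, 20 timesteps, 4 channels (SpO2, HR, RR, MAP).
rng = np.random.default_rng(0)
data = rng.normal(size=(5, 20, 4))
data[2, 3, 1] = np.nan          # patient 2 has one missing HR value
cohort = build_cohort(data)     # patient 2 is dropped entirely
```

Dropping whole patients rather than imputing keeps the generator's training data fully observed, at the cost of a smaller cohort, which the record notes is driven mostly by the invasively-measured MAP channel.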