doi (string, 10–10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31–31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, ⌀) | journal_ref (string, 8–194, ⌀) | primary_category (string, 5–17) | published (string, 8–8) | updated (string, 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1710.04110 | 58 | Komal Kapoor, Mingxuan Sun, Jaideep Srivastava, and Tao Ye. A hazard based approach to user return time prediction. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, pages 1719–1728, New York, NY, USA, 2014. ACM. ISBN 978-1-4503-2956-9. doi: 10.1145/2623330.2623348. URL http://doi.acm.org/10.1145/2623330.2623348.
Komal Kapoor, Vikas Kumar, Loren Terveen, Joseph A. Konstan, and Paul Schrater. "I like to explore sometimes": Adapting to dynamic user novelty preferences. In Proceedings of the 9th ACM Conference on Recommender Systems, RecSys '15, pages 19–26, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3692-5. doi: 10.1145/2792838.2800172. URL http://doi.acm.org/10.1145/2792838.2800172. | 1710.04110#58 | Discrete Event, Continuous Time RNNs | We investigate recurrent neural network architectures for event-sequence
processing. Event sequences, characterized by discrete observations stamped
with continuous-valued times of occurrence, are challenging due to the
potentially wide dynamic range of relevant time scales as well as interactions
between time scales. We describe four forms of inductive bias that should
benefit architectures for event sequences: temporal locality, position and
scale homogeneity, and scale interdependence. We extend the popular gated
recurrent unit (GRU) architecture to incorporate these biases via intrinsic
temporal dynamics, obtaining a continuous-time GRU. The CT-GRU arises by
interpreting the gates of a GRU as selecting a time scale of memory, and the
CT-GRU generalizes the GRU by incorporating multiple time scales of memory and
performing context-dependent selection of time scales for information storage
and retrieval. Event time-stamps drive decay dynamics of the CT-GRU, whereas
they serve as generic additional inputs to the GRU. Despite the very different
manner in which the two models consider time, their performance on eleven data
sets we examined is essentially identical. Our surprising results point both to
the robustness of GRU and LSTM architectures for handling continuous time, and
to the potency of incorporating continuous dynamics into neural architectures. | http://arxiv.org/pdf/1710.04110 | Michael C. Mozer, Denis Kazakov, Robert V. Lindsey | cs.NE, cs.LG, I.2.6 | 21 pages | null | cs.NE | 20171011 | 20171011 | [] |
1710.04110 | 59 | Junpei Komiyama and Tao Qin. Time-decaying bandits for non-stationary systems. In Tie-Yan Liu, Qi Qi, and Yinyu Ye, editors, Web and Internet Economics: 10th International Conference, WINE 2014, Beijing, China, December 14-17, 2014. Proceedings, pages 460–466. Springer International Publishing, Cham, 2014. ISBN 978-3-319-13129-0. doi: 10.1007/978-3-319-13129-0_40. URL http://dx.doi.org/10.1007/978-3-319-13129-0_40.
Yehuda Koren. Collaborative filtering with temporal dynamics. Commun. ACM, 53(4):89–97, 2010.
Jan Koutník, Klaus Greff, Faustino Gomez, and Jürgen Schmidhuber. A clockwork RNN. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML'14, pages II-1863–II-1871. JMLR.org, 2014. URL http://dl.acm.org/citation.cfm?id=3044805.3045100. | 1710.04110#59 | Discrete Event, Continuous Time RNNs |
1710.04110 | 60 | Yann Lecun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278–2324, 1998.
M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
Robert V. Lindsey, Jeffery D. Shroyer, Harold Pashler, and Michael C. Mozer. Improving students' long-term knowledge retention through personalized review. Psychological Science, 25(3):639–647, 2014.
A. J. Lockett and R. Miikkulainen. Temporal convolution machines for sequence learning. Technical report AI-09-04, Department of Computer Science, University of Texas, Austin, TX, 2009.
In M Coltheart, editor, Attention and Performance XII: The psychology of reading, pages 87–104. Erlbaum, Hillsdale, NJ, 1987.
Michael C Mozer. Induction of multiscale temporal structure. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 275–282. Morgan-Kaufmann, 1992. | 1710.04110#60 | Discrete Event, Continuous Time RNNs |
1710.04110 | 61 | Michael C Mozer, Harold Pashler, Nicholas Cepeda, Robert V Lindsey, and Ed Vul. Predicting the optimal spacing of study: A multiscale context model of memory. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1321–1329. Curran Associates, Inc., 2009.
Biswanath Mukherjee, L Todd Heberlein, and Karl N Levitt. Network intrusion detection. IEEE Network, 8(3):26–41, 1994.
Ngoc Giang Nguyen, Vu Anh Tran, Duc Luu Ngo, Dau Phan, Favorisen Rosyking Lumbanraja, Mohammad Reza Faisal, Bahriddin Abapihi, Mamoru Kubo, Kenji Satou, et al. DNA sequence classification by convolutional neural network. Journal of Biomedical Science and Engineering, 9(05):280, 2016.
G Palanivel and K Duraiswamy. Multiscale time series prediction for intrusion detection. American Journal of Applied Sciences, 11(8):1405–1411, 2014. | 1710.04110#61 | Discrete Event, Continuous Time RNNs |
1710.04110 | 62 | Tara N. Sainath, Brian Kingsbury, George Saon, Hagen Soltau, Abdel-rahman Mohamed, George Dahl, and Bhuvana Ramabhadran. Deep convolutional neural networks for large-scale speech tasks. Neural Netw., 64(C):39–48, April 2015. ISSN 0893-6080. doi: 10.1016/j.neunet.2014.08.005. URL http://dx.doi.org/10.1016/j.neunet.2014.08.005.
Yang Song, Ali Mamdouh Elkahky, and Xiaodong He. Multi-rate deep learning for temporal recommendation. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '16, pages 909–912, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4069-4. doi: 10.1145/2911451.2914726. URL http://doi.acm.org/10.1145/2911451.2914726. | 1710.04110#62 | Discrete Event, Continuous Time RNNs |
1710.04110 | 63 | Graham W. Taylor, Rob Fergus, Yann LeCun, and Christoph Bregler. Convolutional learning of spatio-temporal features. In Proceedings of the 11th European Conference on Computer Vision: Part VI, ECCV'10, pages 140–153, Berlin, Heidelberg, 2010. Springer-Verlag. ISBN 3-642-15566-9, 978-3-642-15566-6. URL http://dl.acm.org/citation.cfm?id=1888212.1888225.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688. | 1710.04110#63 | Discrete Event, Continuous Time RNNs |
1710.04110 | 64 | Alexander Waibel, Toshiyuki Hanazawa, Geoffrey Hinton, Kiyohiro Shikano, and Kevin J. Lang. Phoneme recognition using time-delay neural networks. In Alex Waibel and Kai-Fu Lee, editors, Readings in Speech Recognition, pages 393–404. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1990. ISBN 1-55860-124-4. URL http://dl.acm.org/citation.cfm?id=108235.108263.
Xin Wang, Roger Donaldson, Christopher Nell, Peter Gorniak, Martin Ester, and Jiajun Bu. Recommending groups to users using user-group engagement and time-dependent matrix factorization. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 1331–1337. AAAI Press, 2016a. URL http://dl.acm.org/citation.cfm?id=3015812.3016008. | 1710.04110#64 | Discrete Event, Continuous Time RNNs |
1710.04110 | 65 | Yichen Wang, Bo Xie, Nan Du, and Le Song. Isotonic Hawkes processes. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pages 2226–2234. JMLR.org, 2016b. URL http://dl.acm.org/citation.cfm?id=3045390.3045625.
Chao-Yuan Wu, Amr Ahmed, Alex Beutel, Alexander J. Smola, and How Jing. Recurrent recommender networks. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, WSDM '17, pages 495–503, New York, NY, USA, 2017. ACM. ISBN 978-1-4503-4675-7. doi: 10.1145/3018661.3018689. URL http://doi.acm.org/10.1145/3018661.3018689. | 1710.04110#65 | Discrete Event, Continuous Time RNNs |
1710.04110 | 66 | Sai Wu, Weichao Ren, Chengchao Yu, Gang Chen, Dongxiang Zhang, and Jingbo Zhu. Personal recommendation using deep recurrent neural networks in NetEase. In 32nd IEEE International Conference on Data Engineering, ICDE 2016, Helsinki, Finland, May 16-20, 2016, pages 1218–1229. IEEE Computer Society, 2016. ISBN 978-1-5090-2020-1. doi: 10.1109/ICDE.2016.7498326. URL http://dx.doi.org/10.1109/ICDE.2016.7498326.
H. Zeng, M. D. Edwards, G. Liu, and D. K. Gifford. Convolutional neural network architectures for predicting DNA–protein binding. Bioinformatics, 32:i121–i127, 2016. doi: http://doi.org/10.1093/bioinformatics/btw255.
Lingke Zeng, Xiangmin Xu, Bolun Cai, Suo Qiu, and Tong Zhang. Multi-scale convolutional neural networks for crowd counting. CoRR, abs/1702.02359, 2017. URL http://arxiv.org/abs/1702.02359. | 1710.04110#66 | Discrete Event, Continuous Time RNNs |
1710.03740 | 0 | arXiv:1710.03740v3 [cs.AI] 15 Feb 2018
Published as a conference paper at ICLR 2018
# MIXED PRECISION TRAINING
# Sharan Narang∗, Gregory Diamos, Erich Elsen† Baidu Research {sharan, gdiamos}@baidu.com
Paulius Micikevicius∗, Jonah Alben, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu NVIDIA {pauliusm, alben, dagarcia, bginsburg, mhouston, okuchaiev, gavenkatesh, skyw}@nvidia.com
# ABSTRACT | 1710.03740#0 | Mixed Precision Training | Deep neural networks have enabled progress in a wide variety of applications.
Growing the size of the neural network typically results in improved accuracy.
As model sizes grow, the memory and compute requirements for training these
models also increases. We introduce a technique to train deep neural networks
using half precision floating point numbers. In our technique, weights,
activations and gradients are stored in IEEE half-precision format.
Half-precision floating numbers have limited numerical range compared to
single-precision numbers. We propose two techniques to handle this loss of
information. Firstly, we recommend maintaining a single-precision copy of the
weights that accumulates the gradients after each optimizer step. This
single-precision copy is rounded to half-precision format during training.
Secondly, we propose scaling the loss appropriately to handle the loss of
information with half-precision gradients. We demonstrate that this approach
works for a wide variety of models including convolution neural networks,
recurrent neural networks and generative adversarial networks. This technique
works for large scale models with more than 100 million parameters trained on
large datasets. Using this approach, we can reduce the memory consumption of
deep learning models by nearly 2x. In future processors, we can also expect a
significant computation speedup using half-precision hardware units. | http://arxiv.org/pdf/1710.03740 | Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu | cs.AI, cs.LG, stat.ML | Published as a conference paper at ICLR 2018 | null | cs.AI | 20171010 | 20180215 | [
{"id": "1709.01134"}, {"id": "1609.07061"}, {"id": "1608.06902"}, {"id": "1609.08144"}, {"id": "1611.10176"} ] |
1710.03740 | 1 | # ABSTRACT
Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyper-parameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.
# INTRODUCTION | 1710.03740#1 | Mixed Precision Training |
1710.03740 | 2 | # INTRODUCTION
Deep Learning has enabled progress in many different applications, ranging from image recognition (He et al., 2016a) to language modeling (Jozefowicz et al., 2016) to machine translation (Wu et al., 2016) and speech recognition (Amodei et al., 2016). Two trends have been critical to these results - increasingly large training data sets and increasingly complex models. For example, the neural network used in Hannun et al. (2014) had 11 million parameters which grew to approximately 67 million for bidirectional RNNs and further to 116 million for the latest forward only Gated Recurrent Unit (GRU) models in Amodei et al. (2016).
Larger models usually require more compute and memory resources to train. These requirements can be lowered by using reduced precision representation and arithmetic. Performance (speed) of any program, including neural network training and inference, is limited by one of three factors: arithmetic bandwidth, memory bandwidth, or latency. Reduced precision addresses two of these limiters. Memory bandwidth pressure is lowered by using fewer bits to store the same number of values. Arithmetic time can also be lowered on processors that offer higher throughput for reduced precision math. For example, half-precision math throughput in recent GPUs is 2× to 8× higher than for single-precision. In addition to speed improvements, reduced precision formats also reduce the amount of memory required for training. | 1710.03740#2 | Mixed Precision Training |
1710.03740 | 3 | Modern deep learning training systems use single-precision (FP32) format. In this paper, we address training with reduced precision while maintaining model accuracy. Specifically, we train various neural networks using IEEE half-precision format (FP16). Since FP16 format has a narrower dynamic range than FP32, we introduce three techniques to prevent model accuracy loss: maintaining a master copy of weights in FP32, loss-scaling that minimizes gradient values becoming zeros, and FP16 arithmetic with accumulation in FP32. Using these techniques we demonstrate that a wide variety of network architectures and applications can be trained to match the accuracy of FP32 training. Experimental results include convolutional and recurrent network architectures, trained for classification, regression, and generative tasks. Applications include image classification, image generation, object detection, language modeling, machine translation, and speech recognition. The proposed methodology requires no changes to models or training hyper-parameters.
∗Equal contribution. †Now at Google Brain ([email protected])
# 2 RELATED WORK | 1710.03740#3 | Mixed Precision Training |
1710.03740 | 4 | There have been a number of publications on training Convolutional Neural Networks (CNNs) with reduced precision. Courbariaux et al. (2015) proposed training with binary weights, all other tensors and arithmetic were in full precision. Hubara et al. (2016a) extended that work to also binarize the activations, but gradients were stored and computed in single precision. Hubara et al. (2016b) considered quantization of weights and activations to 2, 4 and 6 bits, gradients were real numbers. Rastegari et al. (2016) binarize all tensors, including the gradients. However, all of these approaches lead to non-trivial loss of accuracy when larger CNN models were trained for the ILSVRC classification task (Russakovsky et al., 2015). Zhou et al. (2016) quantize weights, activations, and gradients to different bit counts to further improve result accuracy. This still incurs some accuracy loss and requires a search over bit width configurations per network, which can be impractical for larger models. Mishra et al. improve on the top-1 accuracy achieved by prior weight and activation quantizations by doubling or tripling the | 1710.03740#4 | Mixed Precision Training |
1710.03740 | 5 | for larger models. Mishra et al. improve on the top-1 accuracy achieved by prior weight and activation quantizations by doubling or tripling the width of layers in popular CNNs. However, the gradients are still computed and stored in single precision, while quantized model accuracy is lower than that of the widened baseline. Gupta et al. (2015) demonstrate that 16 bit fixed point representation can be used to train CNNs on MNIST and CIFAR-10 datasets without accuracy loss. It is not clear how this approach would work on the larger CNNs trained on large datasets or whether it would work for Recurrent Neural Networks (RNNs). | 1710.03740#5 | Mixed Precision Training |
1710.03740 | 6 | There have also been several proposals to quantize RNN training. He et al. (2016c) train quantized variants of the GRU (Cho et al., 2014) and Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells to use fewer bits for weights and activations, albeit with a small loss in accuracy. It is not clear whether their results hold for larger networks needed for larger datasets. Hubara et al. (2016b) propose another approach to quantize RNNs without altering their structure. Another approach to quantize RNNs is proposed in Ott et al. (2016). They evaluate binary, ternary and exponential quantization for weights in various different RNN models trained for language modelling and speech recognition. All of these approaches leave the gradients unmodified in single-precision and therefore the computation cost during back propagation is unchanged.
The techniques proposed in this paper are different from the above approaches in three aspects. First, all tensors and arithmetic for forward and backward passes use reduced precision, FP16 in our case. Second, no hyper-parameters (such as layer width) are adjusted. Lastly, models trained with these techniques do not incur accuracy loss when compared to single-precision baselines. We demonstrate that this technique works across a variety of applications using state-of-the-art models trained on large scale datasets.
# IMPLEMENTATION | 1710.03740#6 | Mixed Precision Training |
1710.03740 | 7 | # IMPLEMENTATION
We introduce the key techniques for training with FP16 while still matching the model accuracy of an FP32 training session: single-precision master weights and updates, loss-scaling, and accumulating FP16 products into FP32. Results of training with these techniques are presented in Section 4.
# 3.1 FP32 MASTER COPY OF WEIGHTS
In mixed precision training, weights, activations and gradients are stored as FP16. In order to match the accuracy of the FP32 networks, an FP32 master copy of weights is maintained and updated with the weight gradient during the optimizer step. In each iteration an FP16 copy of the master weights is
(Figure 1 diagram: the FP32 master weights are converted to FP16 via float2half and, together with FP16 activations, feed the FWD pass; BWD-Actv and BWD-Weight produce FP16 activation and weight gradients; the FP16 weight gradients drive the weight update applied to the FP32 master weights.)
Figure 1: Mixed precision training iteration for a layer.
used in the forward and backward pass, halving the storage and bandwidth needed by FP32 training. Figure 1 illustrates this mixed precision training process. | 1710.03740#7 | Mixed Precision Training |
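To make the iteration in Figure 1 concrete, here is a minimal NumPy sketch of one such step, assuming a single linear layer trained with plain SGD. The layer, the shapes, the learning rate, and the helper name are invented for the example, and FP32 accumulation inside the FP16 matrix multiplies is not modeled.

```python
import numpy as np

def mixed_precision_step(master_w, x, target, lr=1e-3):
    """One illustrative mixed-precision iteration for a single linear layer.

    master_w : FP32 master copy of the weights (updated in FP32).
    x, target: FP16 input activations and regression target.
    """
    # float2half: make an FP16 working copy of the master weights.
    w16 = master_w.astype(np.float16)

    # Forward pass stored in FP16.
    y = (x @ w16).astype(np.float16)
    err = (y - target).astype(np.float16)

    # Backward pass in FP16: weight gradient for this layer.
    grad_w16 = (x.T @ err).astype(np.float16)

    # Weight update applied to the FP32 master copy, not to the FP16 copy;
    # lr * grad could underflow to zero if accumulated directly in FP16.
    master_w -= lr * grad_w16.astype(np.float32)
    return master_w

# Toy usage: 4 examples, 8 features, 2 outputs.
rng = np.random.default_rng(0)
master_w = rng.standard_normal((8, 2)).astype(np.float32)
x = rng.standard_normal((4, 8)).astype(np.float16)
target = rng.standard_normal((4, 2)).astype(np.float16)
master_w = mixed_precision_step(master_w, x, target)
```

Only the FP16 working copy is used for the forward and backward passes, which is what yields the storage and bandwidth savings described above.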
1710.03740 | 8 | Figure 1: Mixed precision training iteration for a layer.
used in the forward and backward pass, halving the storage and bandwidth needed by FP32 training. Figure 1 illustrates this mixed precision training process.
While the need for FP32 master weights is not universal, there are two possible reasons why a number of networks require it. One explanation is that updates (weight gradients multiplied by the learning rate) become too small to be represented in FP16: any value whose magnitude is smaller than 2^-24 becomes zero in FP16. We can see in Figure 2b that approximately 5% of weight gradient values have exponents smaller than -24. These small valued gradients would become zero in the optimizer when multiplied with the learning rate and adversely affect the model accuracy. Using a single-precision copy for the updates allows us to overcome this problem and recover the accuracy. | 1710.03740#8 | Mixed Precision Training |
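The 2^-24 cutoff is easy to verify directly with NumPy's half-precision type; the gradient and learning-rate values below are arbitrary and chosen only to sit near that boundary.

```python
import numpy as np

# 2^-24 is the smallest positive subnormal FP16 magnitude; anything well below
# it flushes to zero when stored in half precision.
print(np.float16(2.0 ** -24))   # ~6e-08  (smallest representable step)
print(np.float16(2.0 ** -26))   # 0.0     (lost entirely)

# The same loss happens to a weight update formed in FP16:
grad = np.float16(2.0 ** -14)        # a small but representable gradient
lr = np.float16(2.0 ** -12)          # learning rate
print(lr * grad)                     # 0.0 in FP16 (true product is 2^-26)
print(np.float32(lr) * np.float32(grad))   # ~1.49e-08, kept in FP32
```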
1710.03740 | 9 | Another explanation is that the ratio of the weight value to the weight update is very large. In this case, even though the weight update is representable in FP16, it could still become zero when the addition operation right-shifts it to align the binary point with the weight. This can happen when the magnitude of a normalized weight value is at least 2048 times larger than that of the weight update. Since FP16 has 10 bits of mantissa, the implicit bit must be right-shifted by 11 or more positions to potentially create a zero (in some cases rounding can recover the value). In cases where the ratio is larger than 2048, the implicit bit would be right-shifted by 12 or more positions. This will cause the weight update to become a zero which cannot be recovered. An even larger ratio will result in this effect for de-normalized numbers. Again, this effect can be counteracted by computing the update in FP32.
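This alignment effect is also easy to reproduce: once the update is more than roughly 2048 times smaller than the weight, FP16 addition returns the weight unchanged, while FP32 accumulation keeps it. The specific weight and update values below are illustrative assumptions.

```python
import numpy as np

w = np.float16(1.0)             # weight with exponent 0
small = np.float16(2.0 ** -12)  # update 4096x smaller than the weight
large = np.float16(2.0 ** -10)  # update 1024x smaller than the weight

print(w + small == w)   # True: the update is shifted past the 10-bit mantissa
print(w + large == w)   # False: this update still lands inside the mantissa

# Doing the accumulation in FP32 preserves the small update.
print(np.float32(w) + np.float32(small))   # 1.000244...
```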
To illustrate the need for an FP32 master copy of weights, we use the Mandarin speech model (described in more detail in Section 4.3) trained on a dataset comprising approximately 800 hours of speech data for 20 epochs. As shown in Figure 2a, we match FP32 training results when updating an FP32 master copy of weights after FP16 forward and backward passes, while updating FP16 weights results in 80% relative accuracy loss. | 1710.03740#9 | Mixed Precision Training |
1710.03740 | 10 | Even though maintaining an additional copy of weights increases the memory requirements for the weights by 50% compared with single-precision training, the impact on overall memory usage is much smaller. For training, memory consumption is dominated by activations, due to larger batch sizes and the activations of each layer being saved for reuse in the back-propagation pass. Since activations are also stored in half-precision format, the overall memory consumption for training deep neural networks is roughly halved.
# 3.2 LOSS SCALING
FP16 exponent bias centers the range of normalized value exponents to [-14, 15] while gradient values in practice tend to be dominated by small magnitudes (negative exponents). For example, consider Figure 3, showing the histogram of activation gradient values collected across all layers during FP32 training of the Multibox SSD detector network (Liu et al., 2015a). Note that much of the FP16 representable range was left unused, while many values were below the minimum representable range and became zeros. Scaling up the gradients will shift them to occupy more of the representable range and preserve values that are otherwise lost to zeros. This particular network diverges when gradients are not scaled, but scaling them by a factor of 8 (increasing the exponents by 3) is sufficient to match the accuracy achieved with FP32 training. This suggests that activation | 1710.03740#10 | Mixed Precision Training |
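One straightforward way to realize this scaling in code is to multiply the loss by a constant factor so that, by the chain rule, every gradient is shifted up by the same amount, and then unscale the gradients in FP32 before the weight update. The sketch below is a schematic illustration under that assumption; the factor of 8 mirrors the SSD example above, and the toy backward function and its values are stand-ins rather than the paper's implementation.

```python
import numpy as np

LOSS_SCALE = 8.0  # shift gradient exponents up by 3, as in the SSD example

def toy_backward(loss, params):
    """Stand-in backward pass: every parameter gets the same tiny gradient,
    stored in FP16, so it underflows unless the loss has been scaled."""
    true_grad = np.float32(2.0 ** -26)          # below the FP16 minimum
    g = np.float16(loss * true_grad)            # gradient of the (scaled) loss
    return [np.full(p.shape, g, dtype=np.float16) for p in params]

params = [np.zeros((4, 4), dtype=np.float16)]

# Without scaling: the FP16 gradient flushes to zero.
print(toy_backward(np.float32(1.0), params)[0][0, 0])        # 0.0

# With loss scaling: back-propagate loss * S, then unscale in FP32.
scaled = toy_backward(np.float32(1.0) * LOSS_SCALE, params)
grads = [g.astype(np.float32) / LOSS_SCALE for g in scaled]
print(grads[0][0, 0])                                        # ~1.49e-08
```

A natural place to unscale is on the weight gradients just before the optimizer step, so the forward and backward passes themselves need no extra arithmetic.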
1710.03740 | 11 | (Figure 2 panels: training-cost curves over epochs for the FP32 baseline, mixed precision with an FP32 weight copy, and mixed precision without it; and a weight-gradient exponent histogram marking the region that becomes zero in FP16.)
(a) Training and validation (dev0) curves for Mandarin speech recognition model
(b) Gradient histogram for Mandarin training run | 1710.03740#11 | Mixed Precision Training |
1710.03740 | 12 | (a) Training and validation (dev0) curves for Mandarin speech recognition model
(b) Gradient histogram for Mandarin training run
Figure 2: Figure 2a shows the results of three experiments: baseline (FP32), pseudo FP16 with FP32 master copy, and pseudo FP16 without FP32 master copy. Figure 2b shows the histogram for the exponents of weight gradients for Mandarin speech recognition training with FP32 weights. The gradients are sampled every 4,000 iterations during training for all the layers in the model.
[Figure 3 plot: histogram of activation gradient magnitudes on a log2 scale, annotated with the FP16 representable range, FP16 denormals, and the values that become zero in FP16; the y-axis shows the percentage of all activation gradient values.]
Figure 3: Histogram of activation gradient values during the training of Multibox SSD network. Note that the bins on the x-axis cover varying ranges and there's a separate bin for zeros. For example, 2% of the values are in the [2^-34, 2^-32) range, 2% of values are in the [2^-24, 2^-23) range, and 67% of values are zero.
This suggests that activation gradient values below 2^-27 in magnitude were irrelevant to the training of this model, but values in the [2^-27, 2^-24) range were important to preserve.
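A histogram like the one in Figure 3 can be collected with a few lines of framework-agnostic code. The sketch below is our own illustration (the `grads` dictionary of FP32 activation-gradient arrays is a hypothetical stand-in for gradients captured during training); it bins gradient magnitudes by their base-2 exponent and keeps a separate bin for zeros:

```python
import numpy as np

def exponent_histogram(grads):
    """Report the zero fraction and log2-magnitude bins of gradient arrays."""
    values = np.concatenate([g.ravel() for g in grads.values()])
    zeros = np.count_nonzero(values == 0)
    magnitudes = np.abs(values[values != 0])
    exponents = np.floor(np.log2(magnitudes)).astype(np.int32)
    bins, counts = np.unique(exponents, return_counts=True)
    total = values.size
    print(f"zeros: {100.0 * zeros / total:.1f}%")
    for b, c in zip(bins, counts):
        print(f"[2^{b}, 2^{b + 1}): {100.0 * c / total:.2f}%")

# Random stand-in data; real usage would pass the activation gradients of
# every layer, sampled periodically during an FP32 training run.
exponent_histogram({"layer0": (1e-6 * np.random.randn(10000)).astype(np.float32)})
```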
One efficient way to shift the gradient values into FP16-representable range is to scale the loss value computed in the forward pass, prior to starting back-propagation. By the chain rule, back-propagation then ensures that all the gradient values are scaled by the same amount. This requires no extra operations during back-propagation and keeps the relevant gradient values from becoming zeros. Weight gradients must be unscaled before the weight update to maintain the update magnitudes as in FP32 training. It is simplest to perform this unscaling right after the backward pass but before gradient clipping or any other gradient-related computations, ensuring that no hyper-parameters (such as gradient clipping threshold, weight decay, etc.) have to be adjusted.
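Putting loss scaling together with the FP32 master copy of weights, a single training step looks roughly like the following. This is a minimal numpy sketch of our own for one FP16 linear unit (the toy model, learning rate, and scale factor are illustrative assumptions, not values from the paper):

```python
import numpy as np

lr, loss_scale = 0.02, 128.0
rng = np.random.default_rng(0)

w_master = rng.standard_normal(8).astype(np.float32)  # FP32 master weights
w_fp16 = w_master.astype(np.float16)                  # FP16 copy used in fwd/bwd

for step in range(100):
    x = rng.standard_normal(8).astype(np.float16)
    target = np.float16(1.0)

    # Forward pass in FP16 (toy regression; the point is the precision handling).
    pred = np.dot(x, w_fp16)

    # Backward pass of the *scaled* loss: d(loss_scale * (pred - target)^2)/dw,
    # so small gradient components are shifted into FP16-representable range.
    grad_fp16 = np.float16(loss_scale) * np.float16(2.0) * (pred - target) * x

    # Unscale in FP32, update the master weights, and round back to FP16.
    grad_fp32 = grad_fp16.astype(np.float32) / loss_scale
    w_master -= lr * grad_fp32
    w_fp16 = w_master.astype(np.float16)
```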
There are several options to choose the loss scaling factor. The simplest one is to pick a constant scaling factor. We trained a variety of networks with scaling factors ranging from 8 to 32K (many networks did not require a scaling factor).
A constant scaling factor can be chosen empirically or, if gradient statistics are available, directly by choosing a factor so that its product with the maximum absolute gradient value is below 65,504 (the maximum value representable in FP16). There is no downside to choosing a large scaling factor as long as it does not cause overflow during back-propagation: overflows will result in infinities and NaNs in the weight gradients, which will irreversibly damage the weights after an update. Note that overflows can be efficiently detected by inspecting the computed weight gradients, for example, when weight gradient values are unscaled. One option is to skip the weight update when an overflow is detected and simply move on to the next iteration.
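A minimal sketch of this skip-on-overflow policy, under the assumption that `grads_fp16` is a list of FP16 weight-gradient arrays produced by the scaled backward pass (our illustration, not the authors' code):

```python
import numpy as np

def grads_overflowed(grads_fp16):
    """True if any FP16 weight gradient contains an Inf or a NaN."""
    return any(not np.all(np.isfinite(g)) for g in grads_fp16)

def maybe_apply_update(w_master, grads_fp16, lr, loss_scale):
    """Unscale in FP32 and update the FP32 master weights, or skip the step."""
    if grads_overflowed(grads_fp16):
        return False                                   # overflow: skip this iteration
    for w, g in zip(w_master, grads_fp16):
        w -= lr * (g.astype(np.float32) / loss_scale)  # unscale, then update in FP32
    return True
```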
3.3 ARITHMETIC PRECISION
By and large, neural network arithmetic falls into three categories: vector dot-products, reductions, and point-wise operations. These categories benefit from different treatment when it comes to reduced precision arithmetic. To maintain model accuracy, we found that some networks require that FP16 vector dot-products accumulate the partial products into an FP32 value, which is converted to FP16 before writing to memory. Without this accumulation in FP32, some FP16 models did not match the accuracy of the baseline models. Whereas previous GPUs supported only an FP16 multiply-add operation, NVIDIA Volta GPUs introduce Tensor Cores that multiply FP16 input matrices and accumulate products into either FP16 or FP32 outputs (NVIDIA, 2017).
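The effect of the accumulation precision can be seen with a toy dot product (our own numpy illustration): once the running FP16 sum grows large relative to the individual partial products, further additions are rounded away, while an FP32 accumulator keeps them.

```python
import numpy as np

a = np.full(10000, 0.01, dtype=np.float16)
b = np.full(10000, 0.01, dtype=np.float16)

acc16 = np.float16(0.0)   # FP16 accumulator
acc32 = np.float32(0.0)   # FP32 accumulator
for x, y in zip(a, b):
    p = x * y                       # FP16 partial product, about 1e-4
    acc16 = np.float16(acc16 + p)   # small addend is rounded away once acc16 is large
    acc32 += np.float32(p)

print(acc16)   # stalls around 0.25 instead of the true ~1.0
print(acc32)   # close to 1.0
```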
Large reductions (sums across elements of a vector) should be carried out in FP32. Such reductions mostly come up in batch-normalization layers, when accumulating statistics, and in softmax layers. Both of these layer types in our implementations still read and write FP16 tensors from memory, performing the arithmetic in FP32. This did not slow down the training process since these layers are memory-bandwidth limited and not sensitive to arithmetic speed.
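For example, a softmax with FP16 inputs and outputs can perform its max and sum reductions in FP32; the sketch below is our own illustration of that pattern, not the authors' kernel:

```python
import numpy as np

def softmax_fp16_io(x_fp16):
    """Read and write FP16, but do the reductions (max, sum) and exp in FP32."""
    x = x_fp16.astype(np.float32)
    x = x - x.max()                           # FP32 max for numerical stability
    e = np.exp(x)
    return (e / e.sum()).astype(np.float16)   # FP32 sum, FP16 output

print(softmax_fp16_io(np.array([1.0, 2.0, 3.0], dtype=np.float16)))
```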
Point-wise operations, such as non-linearities and element-wise matrix products, are memory-bandwidth limited. Since arithmetic precision does not impact the speed of these operations, either FP16 or FP32 math can be used.
4 RESULTS
We have run experiments for a variety of deep learning tasks covering a wide range of deep learning models. We conducted the following experiments for each application:
⢠Baseline (FP32) : Single-precision storage is used for activations, weights and gradients. All arithmetic is also in FP32.
⢠Mixed Precision (MP): FP16 is used for storage and arithmetic. Weights, activations and gradients are stored using in FP16, an FP32 master copy of weights is used for updates. Loss-scaling is used for some applications. Experiments with FP16 arithmetic used Tensor Core operations with accumulation into FP32 for convolutions, fully-connected layers, and matrix multiplies in recurrent layers.
The Baseline experiments were conducted on NVIDIA's Maxwell or Pascal GPUs. Mixed Precision experiments were conducted on Volta V100, which accumulates FP16 products into FP32. The mixed precision speech recognition experiments (Section 4.3) were conducted using Maxwell GPUs using FP16 storage only. This setup allows us to emulate the Tensor Core operations on non-Volta hardware. A number of networks were trained in this mode to confirm that the resulting model accuracies are equivalent to MP training run on Volta V100 GPUs. This is intuitive since MP arithmetic was accumulating FP16 products into FP32 before converting the result to FP16 on a memory write.
4.1 CNNS FOR ILSVRC CLASSIFICATION
We trained several CNNs for the ILSVRC classification task (Russakovsky et al., 2015) using mixed precision: Alexnet, VGG-D, GoogLeNet, Inception v2, Inception v3, and pre-activation Resnet-50. In all of these cases we were able to match the top-1 accuracy of the baseline FP32 training session using identical hyper-parameters. Networks were trained using the Caffe (Jia et al., 2014) framework modified to use Volta TensorOps, except for Resnet50, which used PyTorch (Paszke et al., 2017).
Training schedules were used from public repositories, when available (the training schedule for VGG-D has not been published). Top-1 accuracies on the ILSVRC validation set are shown in Table 1. Baseline (FP32) accuracy in a few cases is different from published results due to single-crop testing and a simpler data augmentation. Our data augmentation in Caffe included random horizontal flipping and random cropping from 256x256 images; Resnet50 training in PyTorch used the full augmentation in the training script from the PyTorch vision repository.
Table 1: ILSVRC12 classification top-1 accuracy.
Model | Baseline | Mixed Precision | Reference
AlexNet | 56.77% | 56.93% | (Krizhevsky et al., 2012)
VGG-D | 65.40% | 65.43% | (Simonyan and Zisserman, 2014)
GoogLeNet (Inception v1) | 68.33% | 68.43% | (Szegedy et al., 2015)
Inception v2 | 70.03% | 70.02% | (Ioffe and Szegedy, 2015)
Inception v3 | 73.85% | 74.13% | (Szegedy et al., 2016)
Resnet50 | 75.92% | 76.04% | (He et al., 2016b)
The loss-scaling technique was not required for successful mixed precision training of these networks. While all tensors in the forward and backward passes were in FP16, a master copy of weights was updated in FP32 as outlined in Section 3.1.
4.2 DETECTION CNNS
Object detection is a regression task, where bounding box coordinate values are predicted by the network (compared to classification, where the predicted values are passed through a softmax layer to convert them to probabilities). Object detectors also have a classification component, where probabilities for an object type are predicted for each bounding box. We trained two popular detection approaches: Faster-RCNN (Ren et al., 2015) and Multibox-SSD (Liu et al., 2015a). Both detectors used the VGG-16 network as the backbone. Models and training scripts were from public repositories (Girshick; Liu). Mean average precision (mAP) was computed on the Pascal VOC 2007 test set. Faster-RCNN was trained on the VOC 2007 training set, whereas SSD was trained on a union of VOC 2007 and 2012 data, which is the reason behind the baseline mAP difference in Table 2.
Table 2: Detection network mean average precision (mAP).
Model | Baseline | MP without loss-scale | MP with loss-scale
Faster R-CNN | 69.1% | 68.6% | 69.7%
Multibox SSD | 76.9% | diverges | 77.1%
4.3 SPEECH RECOGNITION
We explore mixed precision training for speech data using the DeepSpeech 2 model for both English and Mandarin datasets. The model used for training on the English dataset consists of two 2D convolution layers, three recurrent layers with GRU cells, 1 row convolution layer, and a Connectionist Temporal Classification (CTC) cost layer (Graves et al., 2006). It has approximately 115 million parameters. This model is trained on our internal dataset consisting of 6000 hours of English speech. The Mandarin model has a similar architecture with a total of 215 million parameters. The Mandarin model was trained on 2600 hours of our internal training set. For these models, we run the Baseline and Pseudo FP16 experiments. All the models were trained for 20 epochs using Nesterov Stochastic Gradient Descent (SGD). All hyper-parameters such as learning rate, annealing schedule and momentum were the same for baseline and pseudo FP16 experiments. Table 3 shows the results of these experiments on independent test sets.
Table 3: Character Error Rate (CER) using mixed precision training for speech recognition. English results are reported on the WSJ '92 test set. Mandarin results are reported on our internal test set.
Model/Dataset Baseline Mixed Precision English Mandarin
Similar to classification and detection networks, mixed precision training works well for recurrent neural networks trained on large scale speech datasets. These speech models are the largest models trained using this technique. Also, the number of time-steps involved in training a speech model is unusually large compared to other applications using recurrent layers. As shown in Table 3, pseudo FP16 results are roughly 5 to 10% better than the baseline. This suggests that the half-precision storage format may act as a regularizer during training.
Figure 4: English to French translation network training perplexity, 3x1024 LSTM model with attention. Ref1, ref2 and ref3 represent three different FP32 training runs.
4.4 MACHINE TRANSLATION
For language translation we trained several variants of the model in the TensorFlow tutorial for English to French translation (Google). The model used word vocabularies with 100K and 40K entries for English and French, respectively. The networks we trained had 3 or 5 layers each in the encoder and decoder. In both cases a layer consisted of 1024 LSTM cells. The SGD optimizer was used to train on the WMT15 dataset. There was a noticeable variation in accuracy of different training sessions with the same settings. For example, see the three FP32 curves in Figure 4, which shows the 3-layer model. Mixed-precision with loss-scaling matched the FP32 results, while no loss-scaling resulted in a slight degradation in the results. The 5-layer model exhibited the same training behavior.
4.5 LANGUAGE MODELING
We trained an English language model, designated as bigLSTM (Jozefowicz et al., 2016), on the 1 billion word dataset. The model consists of two layers of 8192 LSTM cells with projection to a 1024-dimensional embedding. This model was trained for 50 epochs using the Adagrad optimizer. The vocabulary size is 793K words. During training, we use a sampled softmax layer with 8K negative samples. Batch size aggregated over 4 GPUs is 1024. To match FP32 perplexity, training this network with FP16 requires loss-scaling, as shown in Figure 5. Without loss scaling, the training perplexity curve for FP16 training diverges from the FP32 curve after 300K iterations. A scaling factor of 128 recovers all the relevant gradient values and the accuracy of FP16 training matches the baseline run.
4.6 DCGAN RESULTS
Generative Adversarial Networks (GANs) combine regression and discrimination tasks during training. For image tasks, the generator network regresses pixel colors. In our case, the generator predicts three channels of 8-bit color values each. The network was trained to generate 128x128 pixel images of faces, using the DCGAN methodology (Radford et al., 2015) and the CelebFaces dataset (Liu et al., 2015b).
Figure 6: An uncurated set of face images generated by DCGAN. FP32 training (left) and mixed-precision training (right).
The generator had 7 layers of fractionally-strided convolutions, 6 with leaky ReLU activations and 1 with tanh. The discriminator had 6 convolutions and 2 fully-connected layers. All used leaky ReLU activations except for the last layer, which used sigmoid. Batch normalization was applied to all layers except the last fully-connected layer of the discriminator. The Adam optimizer was used to train for 100K iterations. A set of output images is shown in Figure 6. Note that we show a randomly selected set of output images, whereas GAN publications typically show a curated set of outputs by excluding poor examples. Unlike other networks covered in this paper, GANs do not have a widely-accepted quantification of their result quality. Qualitatively, the outputs of FP32 and mixed-precision training appear comparable. This network did not require loss-scaling to match FP32 results.
5 CONCLUSIONS AND FUTURE WORK
Mixed precision training is an important technique that allows us to reduce the memory consumption as well as the time spent in memory and arithmetic operations of deep neural networks. We have demonstrated that many different deep learning models can be trained using this technique with no loss in accuracy and without any hyper-parameter tuning. For certain models with a large number of small gradient values, we introduce the gradient scaling method to help them converge to the same accuracy as FP32 baseline models.
DNN operations benchmarked with DeepBench¹ on Volta GPU see 2-6x speedups compared to FP32 implementations if they are limited by memory or arithmetic bandwidth. Speedups are lower when operations are latency-limited. Full network training and inference speedups depend on library and framework optimizations for mixed precision and are a focus of future work (experiments in this paper were carried out with early versions of both libraries and frameworks).
¹ https://github.com/baidu-research/DeepBench
We would also like to extend this work to include generative models like text-to-speech systems and deep reinforcement learning applications. Furthermore, automating loss-scaling factor selection would further simplify training with mixed precision. The loss-scaling factor could be dynamically increased or decreased by inspecting the weight gradients for overflow, skipping weight updates when an overflow is detected.
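One possible realization of that automation, sketched below under assumptions of our own (the growth interval and the factor of two are illustrative choices, not values from the paper), grows the scale after a run of overflow-free iterations and backs it off whenever an overflow is detected:

```python
class DynamicLossScaler:
    """Grow the loss scale after a streak of overflow-free steps; shrink on overflow."""

    def __init__(self, init_scale=1024.0, factor=2.0, growth_interval=2000):
        self.scale = init_scale
        self.factor = factor
        self.growth_interval = growth_interval
        self.good_steps = 0

    def update(self, overflow_detected):
        if overflow_detected:                 # Inf/NaN in the weight gradients:
            self.scale = max(self.scale / self.factor, 1.0)
            self.good_steps = 0               # the corresponding update is skipped
        else:
            self.good_steps += 1
            if self.good_steps >= self.growth_interval:
                self.scale *= self.factor     # try a larger scale again
                self.good_steps = 0
        return self.scale
```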
REFERENCES
D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski, A. Coates, G. Diamos, et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. In Proceedings of The 33rd International Conference on Machine Learning, pages 173–182, 2016.
K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
M. Courbariaux, Y. Bengio, and J.-P. David. Binaryconnect: Training deep neural networks with binary weights during propagations. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3123–3131. Curran Associates, Inc., 2015. URL http://papers.nips.cc/paper/5647-binaryconnect-training-deep-neural-networks-with-binary-weights-during-propagations.pdf.
R. Girshick. Faster R-CNN GitHub repository. https://github.com/rbgirshick/py-faster-rcnn.
Google. TensorFlow tutorial: Sequence-to-sequence models. URL https://www.tensorflow.org/tutorials/seq2seq.
A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pages 369–376. ACM, 2006.
S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1737–1746, 2015.
A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016a.
K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016b.
Q. He, H. Wen, S. Zhou, Y. Wu, C. Yao, X. Zhou, and Y. Zou. Effective quantization methods for recurrent neural networks. arXiv preprint arXiv:1611.10176, 2016c.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, Nov. 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.
I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks. In Advances in Neural Information Processing Systems, pages 4107–4115, 2016a.
I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016b.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In F. R. Bach and D. M. Blei, editors, ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 448–456. JMLR.org, 2015. URL http://dblp.uni-trier.de/db/conf/icml/icml2015.html#IoffeS15.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu. Exploring the limits of language modeling, 2016. URL https://arxiv.org/pdf/1602.02410.pdf. | 1710.03740#32 | Mixed Precision Training | Deep neural networks have enabled progress in a wide variety of applications.
Growing the size of the neural network typically results in improved accuracy.
As model sizes grow, the memory and compute requirements for training these
models also increases. We introduce a technique to train deep neural networks
using half precision floating point numbers. In our technique, weights,
activations and gradients are stored in IEEE half-precision format.
Half-precision floating numbers have limited numerical range compared to
single-precision numbers. We propose two techniques to handle this loss of
information. Firstly, we recommend maintaining a single-precision copy of the
weights that accumulates the gradients after each optimizer step. This
single-precision copy is rounded to half-precision format during training.
Secondly, we propose scaling the loss appropriately to handle the loss of
information with half-precision gradients. We demonstrate that this approach
works for a wide variety of models including convolution neural networks,
recurrent neural networks and generative adversarial networks. This technique
works for large scale models with more than 100 million parameters trained on
large datasets. Using this approach, we can reduce the memory consumption of
deep learning models by nearly 2x. In future processors, we can also expect a
significant computation speedup using half-precision hardware units. | http://arxiv.org/pdf/1710.03740 | Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu | cs.AI, cs.LG, stat.ML | Published as a conference paper at ICLR 2018 | null | cs.AI | 20171010 | 20180215 | [
{
"id": "1709.01134"
},
{
"id": "1609.07061"
},
{
"id": "1608.06902"
},
{
"id": "1609.08144"
},
{
"id": "1611.10176"
}
] |
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.
W. Liu. SSD GitHub repository. https://github.com/weiliu89/caffe/tree/ssd.
W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. E. Reed. SSD: Single shot multibox detector. CoRR, abs/1512.02325, 2015a. URL http://dblp.uni-trier.de/db/journals/corr/corr1512.html#LiuAESR15.
Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015b.
A. Mishra, E. Nurvitadhi, J. Cook, and D. Marr. Wrpn: Wide reduced-precision networks. arXiv preprint arXiv:1709.01134, year=2017. | 1710.03740#33 | Mixed Precision Training | Deep neural networks have enabled progress in a wide variety of applications.
Growing the size of the neural network typically results in improved accuracy.
As model sizes grow, the memory and compute requirements for training these
models also increases. We introduce a technique to train deep neural networks
using half precision floating point numbers. In our technique, weights,
activations and gradients are stored in IEEE half-precision format.
Half-precision floating numbers have limited numerical range compared to
single-precision numbers. We propose two techniques to handle this loss of
information. Firstly, we recommend maintaining a single-precision copy of the
weights that accumulates the gradients after each optimizer step. This
single-precision copy is rounded to half-precision format during training.
Secondly, we propose scaling the loss appropriately to handle the loss of
information with half-precision gradients. We demonstrate that this approach
works for a wide variety of models including convolution neural networks,
recurrent neural networks and generative adversarial networks. This technique
works for large scale models with more than 100 million parameters trained on
large datasets. Using this approach, we can reduce the memory consumption of
deep learning models by nearly 2x. In future processors, we can also expect a
significant computation speedup using half-precision hardware units. | http://arxiv.org/pdf/1710.03740 | Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu | cs.AI, cs.LG, stat.ML | Published as a conference paper at ICLR 2018 | null | cs.AI | 20171010 | 20180215 | [
{
"id": "1709.01134"
},
{
"id": "1609.07061"
},
{
"id": "1608.06902"
},
{
"id": "1609.08144"
},
{
"id": "1611.10176"
}
] |
1710.03740 | 34 | NVIDIA. Nvidia tesla v100 gpu architecture. https://images.nvidia.com/content/ volta-architecture/pdf/Volta-Architecture-Whitepaper-v1.0.pdf, 2017.
J. Ott, Z. Lin, Y. Zhang, S.-C. Liu, and Y. Bengio. Recurrent neural networks with limited numerical precision. arXiv preprint arXiv:1608.06902, 2016.
A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolu- tional generative adversarial networks. CoRR, abs/1511.06434, 2015. URL http://dblp. uni-trier.de/db/journals/corr/corr1511.html#RadfordMC15. | 1710.03740#34 | Mixed Precision Training | Deep neural networks have enabled progress in a wide variety of applications.
Growing the size of the neural network typically results in improved accuracy.
As model sizes grow, the memory and compute requirements for training these
models also increases. We introduce a technique to train deep neural networks
using half precision floating point numbers. In our technique, weights,
activations and gradients are stored in IEEE half-precision format.
Half-precision floating numbers have limited numerical range compared to
single-precision numbers. We propose two techniques to handle this loss of
information. Firstly, we recommend maintaining a single-precision copy of the
weights that accumulates the gradients after each optimizer step. This
single-precision copy is rounded to half-precision format during training.
Secondly, we propose scaling the loss appropriately to handle the loss of
information with half-precision gradients. We demonstrate that this approach
works for a wide variety of models including convolution neural networks,
recurrent neural networks and generative adversarial networks. This technique
works for large scale models with more than 100 million parameters trained on
large datasets. Using this approach, we can reduce the memory consumption of
deep learning models by nearly 2x. In future processors, we can also expect a
significant computation speedup using half-precision hardware units. | http://arxiv.org/pdf/1710.03740 | Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu | cs.AI, cs.LG, stat.ML | Published as a conference paper at ICLR 2018 | null | cs.AI | 20171010 | 20180215 | [
{
"id": "1709.01134"
},
{
"id": "1609.07061"
},
{
"id": "1608.06902"
},
{
"id": "1609.08144"
},
{
"id": "1611.10176"
}
] |
1710.03740 | 35 | M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet Classiï¬cation Using Binary Convolutional Neural Networks, pages 525â542. Springer International Publishing, Cham, 2016. ISBN 978-3-319-46493-0. doi: 10.1007/978-3-319-46493-0 32. URL https://doi. org/10.1007/978-3-319-46493-0_32.
S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Neural Information Processing Systems (NIPS), 2015.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Chal- lenge. International Journal of Computer Vision (IJCV), 115(3):211â252, 2015. doi: 10.1007/ s11263-015-0816-y. | 1710.03740#35 | Mixed Precision Training | Deep neural networks have enabled progress in a wide variety of applications.
Growing the size of the neural network typically results in improved accuracy.
As model sizes grow, the memory and compute requirements for training these
models also increases. We introduce a technique to train deep neural networks
using half precision floating point numbers. In our technique, weights,
activations and gradients are stored in IEEE half-precision format.
Half-precision floating numbers have limited numerical range compared to
single-precision numbers. We propose two techniques to handle this loss of
information. Firstly, we recommend maintaining a single-precision copy of the
weights that accumulates the gradients after each optimizer step. This
single-precision copy is rounded to half-precision format during training.
Secondly, we propose scaling the loss appropriately to handle the loss of
information with half-precision gradients. We demonstrate that this approach
works for a wide variety of models including convolution neural networks,
recurrent neural networks and generative adversarial networks. This technique
works for large scale models with more than 100 million parameters trained on
large datasets. Using this approach, we can reduce the memory consumption of
deep learning models by nearly 2x. In future processors, we can also expect a
significant computation speedup using half-precision hardware units. | http://arxiv.org/pdf/1710.03740 | Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu | cs.AI, cs.LG, stat.ML | Published as a conference paper at ICLR 2018 | null | cs.AI | 20171010 | 20180215 | [
{
"id": "1709.01134"
},
{
"id": "1609.07061"
},
{
"id": "1608.06902"
},
{
"id": "1609.08144"
},
{
"id": "1611.10176"
}
] |
1710.03740 | 36 | K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recogni- tion. arXiv preprint arXiv:1409.1556, 2014.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Ra- binovich. Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR), 2015. URL http://arxiv.org/abs/1409.4842.
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architec- ture for computer vision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. Googleâs neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Published as a conference paper at ICLR 2018 | 1710.03740#36 | Mixed Precision Training | Deep neural networks have enabled progress in a wide variety of applications.
Growing the size of the neural network typically results in improved accuracy.
As model sizes grow, the memory and compute requirements for training these
models also increases. We introduce a technique to train deep neural networks
using half precision floating point numbers. In our technique, weights,
activations and gradients are stored in IEEE half-precision format.
Half-precision floating numbers have limited numerical range compared to
single-precision numbers. We propose two techniques to handle this loss of
information. Firstly, we recommend maintaining a single-precision copy of the
weights that accumulates the gradients after each optimizer step. This
single-precision copy is rounded to half-precision format during training.
Secondly, we propose scaling the loss appropriately to handle the loss of
information with half-precision gradients. We demonstrate that this approach
works for a wide variety of models including convolution neural networks,
recurrent neural networks and generative adversarial networks. This technique
works for large scale models with more than 100 million parameters trained on
large datasets. Using this approach, we can reduce the memory consumption of
deep learning models by nearly 2x. In future processors, we can also expect a
significant computation speedup using half-precision hardware units. | http://arxiv.org/pdf/1710.03740 | Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu | cs.AI, cs.LG, stat.ML | Published as a conference paper at ICLR 2018 | null | cs.AI | 20171010 | 20180215 | [
{
"id": "1709.01134"
},
{
"id": "1609.07061"
},
{
"id": "1608.06902"
},
{
"id": "1609.08144"
},
{
"id": "1611.10176"
}
] |
1710.02298 | 0 | 7 1 0 2
# Rainbow: Combining Improvements in Deep Reinforcement Learning
Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver (DeepMind)
# Abstract
The deep reinforcement learning community has made sev- eral independent improvements to the DQN algorithm. How- ever, it is unclear which of these extensions are complemen- tary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combina- tion provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efï¬ciency and ï¬nal perfor- mance. We also provide results from a detailed ablation study that shows the contribution of each component to overall per- formance. | 1710.02298#0 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 1 | Introduction The many recent successes in scaling reinforcement learn- ing (RL) to complex sequential decision-making problems were kick-started by the Deep Q-Networks algorithm (DQN; Mnih et al. 2013, 2015). Its combination of Q-learning with convolutional neural networks and experience replay en- abled it to learn, from raw pixels, how to play many Atari games at human-level performance. Since then, many exten- sions have been proposed that enhance its speed or stability. Double DQN (DDQN; van Hasselt, Guez, and Silver 2016) addresses an overestimation bias of Q-learning (van Hasselt 2010), by decoupling selection and evaluation of the bootstrap action. Prioritized experience replay (Schaul et al. 2015) improves data efï¬ciency, by replaying more of- ten transitions from which there is more to learn. The du- eling network architecture (Wang et al. 2016) helps to gen- eralize across actions by separately representing state val- ues and action advantages. Learning from multi-step boot- strap targets (Sutton 1988; Sutton and Barto 1998), as used in A3C (Mnih et al. 2016), shifts the | 1710.02298#1 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 2 | from multi-step boot- strap targets (Sutton 1988; Sutton and Barto 1998), as used in A3C (Mnih et al. 2016), shifts the bias-variance trade- off and helps to propagate newly observed rewards faster to earlier visited states. Distributional Q-learning (Bellemare, Dabney, and Munos 2017) learns a categorical distribution of discounted returns, instead of estimating the mean. Noisy DQN (Fortunato et al. 2017) uses stochastic network layers for exploration. This list is, of course, far from exhaustive. | 1710.02298#2 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 3 | Each of these algorithms enables substantial performance improvements in isolation. Since they do so by addressing
[Figure 1 plot: median human-normalized score over millions of frames for DQN, DDQN, Prioritized DDQN, Dueling DDQN, A3C, Distributional DQN, Noisy DQN, and Rainbow.]
Figure 1: Median human-normalized performance across 57 Atari games. We compare our integrated agent (rainbow- colored) to DQN (grey) and six published baselines. Note that we match DQNâs best performance after 7M frames, surpass any baseline within 44M frames, and reach sub- stantially improved ï¬nal performance. Curves are smoothed with a moving average over 5 points. | 1710.02298#3 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 4 | radically different issues, and since they build on a shared framework, they could plausibly be combined. In some cases this has been done: Prioritized DDQN and Dueling DDQN both use double Q-learning, and Dueling DDQN was also combined with prioritized experience replay. In this paper we propose to study an agent that combines all the afore- mentioned ingredients. We show how these different ideas can be integrated, and that they are indeed largely com- plementary. In fact, their combination results in new state- of-the-art results on the benchmark suite of 57 Atari 2600 games from the Arcade Learning Environment (Bellemare et al. 2013), both in terms of data efï¬ciency and of ï¬nal perfor- mance. Finally we show results from ablation studies to help understand the contributions of the different components.
Background Reinforcement learning addresses the problem of an agent learning to act in an environment in order to maximize a scalar reward signal. No direct supervision is provided to the agent, for instance it is never directly told the best action. | 1710.02298#4 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 5 | Agents and environments. At each discrete time step t = 0,1,2..., the environment provides the agent with an ob- servation S;, the agent responds by selecting an action A;, and then the environment provides the next reward Ry,+1, discount 7,41, and state S;41. This interaction is formalized as a Markov Decision Process, or MDP, which is a tuple (S,A,T,r,7), where S is a finite set of states, A is a finite set of actions, T(s,a,sâ) = P[Si41 = s' | S; = 8, A; = a] is the (stochastic) transition function, r(s,a) = E[Ri41 | S; = s, A, = aj is the reward function, and y ⬠[0, 1] is a discount factor. In our experiments MDPs will be episodic with a constant 7, = y, except on episode termination where 7 = 0, but the algorithms are expressed in the general form. On the agent side, action selection is given by a policy 7 that defines a probability distribution over actions for each state. From the state S; encountered at time t, we define the | 1710.02298#5 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 6 | side, action selection is given by a policy 7 that defines a probability distribution over actions for each state. From the state S; encountered at time t, we define the discounted return G; = Yeo oy) Regeas as the dis- counted sum of future rewards collected by the agent, where the discount for a reward k steps in the future is given by the product of discounts before that time, 4? = Th, VW+i- An agent aims to maximize the expected discounted return by finding a good policy. | 1710.02298#6 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 7 | The policy may be learned directly, or it may be con- structed as a function of some other learned quantities. In value-based reinforcement learning, the agent learns an es- timate of the expected discounted return, or value, when following a policy 7 starting from a given state, v"(s) = E,,[G,|S; = s], or state-action pair, g7(s,a) = E,[G,|S, = 8, Ay = a]. A common way of deriving a new policy from a state-action value function is to act e-greedily with respect to the action values. This corresponds to taking the action with the highest value (the greedy action) with probability (1âe), and to otherwise act uniformly at random with probability â¬. Policies of this kind are used to introduce a form of explo- ration: by randomly selecting actions that are sub-optimal according to its current estimates, the agent can discover and correct its estimates when appropriate. The main limitation is that it is difficult to discover alternative courses of action that extend far into the future; this has motivated research on more directed forms of exploration. | 1710.02298#7 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 8 | Deep reinforcement learning and DQN. Large state and/or action spaces make it intractable to learn Q value estimates for each state and action pair independently. In deep reinforcement learning, we represent the various com- ponents of agents, such as policies Ï(s, a) or values q(s, a), with deep (i.e., multi-layer) neural networks. The parameters of these networks are trained by gradient descent to mini- mize some suitable loss function.
In DQN (Mnih et al. 2015) deep networks and reinforce- ment learning were successfully combined by using a con- volutional neural net to approximate the action values for a
given state S_t (which is fed as input to the network in the form of a stack of raw pixel frames). At each step, based on the current state, the agent selects an action ε-greedily with respect to the action values, and adds a transition (S_t, A_t, R_{t+1}, γ_{t+1}, S_{t+1}) to a replay memory buffer (Lin 1992), that holds the last million transitions. The parameters of the neural network are optimized by using stochastic gradient descent to minimize the loss
(Rega + Ve41 max dy(St41, aâ) â q9(S;,A1))?, (A) | 1710.02298#8 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
(R_{t+1} + \gamma_{t+1} \max_{a'} q_{\bar\theta}(S_{t+1}, a') - q_\theta(S_t, A_t))^2 ,   (1)
where t is a time step randomly picked from the replay memory. The gradient of the loss is back-propagated only into the parameters θ of the online network (which is also used to select actions); the term θ represents the parame- ters of a target network; a periodic copy of the online net- work which is not directly optimized. The optimization is performed using RMSprop (Tieleman and Hinton 2012), a variant of stochastic gradient descent, on mini-batches sam- pled uniformly from the experience replay. This means that in the loss above, the time index t will be a random time in- dex from the last million transitions, rather than the current time. The use of experience replay and target networks en- ables relatively stable learning of Q values, and led to super- human performance on several Atari games. | 1710.02298#9 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 10 | Extensions to DQN DQN has been an important milestone, but several limita- tions of this algorithm are now known, and many extensions have been proposed. We propose a selection of six exten- sions that each have addressed a limitation and improved overall performance. To keep the size of the selection man- ageable, we picked a set of extensions that address distinct concerns (e.g., just one of the many addressing exploration).
Double Q-learning. Conventional Q-learning is affected by an overestimation bias, due to the maximization step in Equation 1, and this can harm learning. Double Q-learning (van Hasselt 2010) addresses this overestimation by decoupling, in the maximization performed for the bootstrap target, the selection of the action from its evaluation. It is possible to effectively combine this with DQN (van Hasselt, Guez, and Silver 2016), using the loss
(R_{t+1} + \gamma_{t+1} q_{\bar\theta}(S_{t+1}, \operatorname{argmax}_{a'} q_\theta(S_{t+1}, a')) - q_\theta(S_t, A_t))^2 .
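In code, the double-Q bootstrap target can be sketched as follows (a minimal NumPy sketch, not the paper's implementation; `q_online` and `q_target` are assumed callables that map a state to a vector of action values):

```python
import numpy as np

def double_q_target(q_online, q_target, s_next, reward, gamma, done):
    # Action is selected by the online network ...
    a_star = int(np.argmax(q_online(s_next)))
    # ... but its value is evaluated by the target network.
    bootstrap = 0.0 if done else gamma * q_target(s_next)[a_star]
    return reward + bootstrap

# Squared loss for a stored transition (s, a, reward, s_next):
# (double_q_target(q_online, q_target, s_next, reward, gamma, done) - q_online(s)[a]) ** 2
```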
This change was shown to reduce harmful overestimations that were present for DQN, thereby improving performance. | 1710.02298#10 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 11 | This change was shown to reduce harmful overestimations that were present for DQN, thereby improving performance.
Prioritized replay. DQN samples uniformly from the replay buffer. Ideally, we want to sample more frequently those transitions from which there is much to learn. As a proxy for learning potential, prioritized experience replay (Schaul et al. 2015) samples transitions with probability p_t relative to the last encountered absolute TD error:
p_t \propto \left| R_{t+1} + \gamma_{t+1} \max_{a'} q_{\bar\theta}(S_{t+1}, a') - q_\theta(S_t, A_t) \right|^{\omega} ,
where ω is a hyper-parameter that determines the shape of the distribution. New transitions are inserted into the replay buffer with maximum priority, providing a bias towards recent transitions. Note that stochastic transitions might also be favoured, even when there is little left to learn about them.
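A minimal sketch of proportional prioritized sampling (illustrative only; the function name, the flat priority array, and the importance-sampling correction shown here are assumptions, and a practical buffer would use a sum-tree):

```python
import numpy as np

def sample_prioritized(priorities, batch_size, omega=0.5, beta=0.4, rng=np.random):
    # Sampling probabilities proportional to priority ** omega.
    scaled = np.asarray(priorities, dtype=np.float64) ** omega
    probs = scaled / scaled.sum()
    idx = rng.choice(len(probs), size=batch_size, p=probs)
    # Importance-sampling weights correct for the non-uniform sampling,
    # normalised by the largest weight for stability.
    weights = (len(probs) * probs[idx]) ** (-beta)
    weights /= weights.max()
    return idx, weights
```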
Dueling networks. The dueling network is a neural network architecture designed for value-based RL. It features two streams of computation, the value and advantage streams, sharing a convolutional encoder, and merged by a special aggregator (Wang et al. 2016). This corresponds to the following factorization of action values:
y au(s.a) = v9(fe(s)) + aplfo(s)va) â Me wel) .0!) actions | 1710.02298#11 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
q_\theta(s, a) = v_\eta(f_\xi(s)) + a_\psi(f_\xi(s), a) - \frac{\sum_{a'} a_\psi(f_\xi(s), a')}{N_{\text{actions}}} ,
where ξ, η, and ψ are, respectively, the parameters of the shared encoder f_ξ, of the value stream v_η, and of the advantage stream a_ψ; and θ = {ξ, η, ψ} is their concatenation.
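A toy sketch of the aggregator (assuming `value` holds v_η(f_ξ(s)) and `advantages` holds the vector a_ψ(f_ξ(s), ·); the names are illustrative, not the paper's code):

```python
import numpy as np

def dueling_q_values(value, advantages):
    # Subtract the mean advantage so that the two streams are identifiable.
    advantages = np.asarray(advantages, dtype=np.float64)
    return value + advantages - advantages.mean()
```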
Multi-step learning. Q-learning accumulates a single reward and then uses the greedy action at the next step to bootstrap. Alternatively, forward-view multi-step targets can be used (Sutton 1988). We define the truncated n-step return from a given state S_t as
R_t^{(n)} = \sum_{k=0}^{n-1} \gamma_t^{(k)} R_{t+k+1} .   (2)
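For example, the truncated return and the accompanying cumulative discount γ_t^{(n)} can be accumulated as in the following sketch (an assumed helper, not from the paper; per-step discounts of 0 encode episode termination):

```python
def n_step_return(rewards, discounts):
    """rewards   = [R_{t+1}, ..., R_{t+n}]
    discounts = [gamma_{t+1}, ..., gamma_{t+n}]
    Returns (R_t^(n), gamma_t^(n)): the truncated return and the
    cumulative discount applied to the bootstrap value at S_{t+n}."""
    ret, cum_discount = 0.0, 1.0
    for r, gamma in zip(rewards, discounts):
        ret += cum_discount * r
        cum_discount *= gamma
    return ret, cum_discount
```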
A multi-step variant of DQN is then defined by minimizing the alternative loss
(R_t^{(n)} + \gamma_t^{(n)} \max_{a'} q_{\bar\theta}(S_{t+n}, a') - q_\theta(S_t, A_t))^2 .
Multi-step targets with suitably tuned n often lead to faster learning (Sutton and Barto 1998). | 1710.02298#12 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 13 | Multi-step targets with suitably tuned n often lead to faster learning (Sutton and Barto 1998).
Distributional RL. We can learn to approximate the dis- tribution of returns instead of the expected return. Recently Bellemare, Dabney, and Munos (2017) proposed to model such distributions with probability masses placed on a dis- crete support z, where z is a vector with Natoms â N+ atoms, deï¬ned by zi = vmin + (i â 1) vmaxâvmin for Natomsâ1 i â {1, . . . , Natoms}. The approximating distribution dt at time t is deï¬ned on this support, with the probability mass pi θ(St, At) on each atom i, such that dt = (z, pθ(St, At)). The goal is to update θ such that this distribution closely matches the actual distribution of returns. | 1710.02298#13 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 14 | To learn the probability masses, the key insight is that return distributions satisfy a variant of Bellmanâs equation. For a given state S; and action A;, the distribution of the returns under the optimal policy 7* should match a tar- get distribution defined by taking the distribution for the next state S;4, and action af,, = 7*(S;41), contracting it towards zero according to the discount, and shifting it by the reward (or distribution of rewards, in the stochas- tic case). A distributional variant of Q-learning is then de- rived by first constructing a new support for the target dis- tribution, and then minimizing the Kullbeck-Leibler diver- gence between the distribution d, and the target distribution dy = (Rigi + V412, Dg(St41, G41);
D_{\mathrm{KL}}(\Phi_z d'_t \,\|\, d_t) .   (3)
Here ®, is a L2-projection of the target distribution onto the fixed support z, and @,; = argmax, qg($141,@) is the greedy action with respect to the mean action values Gq(Si41,4) = 2! po(S141,q) in state S141. | 1710.02298#14 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 15 | As in the non-distributional case, we can use a frozen copy of the parameters θ to construct the target distribution. The parametrized distribution can be represented by a neu- ral network, as in DQN, but with Natoms à Nactions outputs. A softmax is applied independently for each action dimension of the output to ensure that the distribution for each action is appropriately normalized.
Noisy Nets. The limitations of exploring using ε-greedy policies are clear in games such as Montezuma's Revenge, where many actions must be executed to collect the first reward. Noisy Nets (Fortunato et al. 2017) propose a noisy linear layer that combines a deterministic and noisy stream,
y = (b + W x) + (b_{\text{noisy}} \odot \epsilon^b + (W_{\text{noisy}} \odot \epsilon^w) x) ,   (4)
where e? and ¢â are random variables, and © denotes the element-wise product. This transformation can then be used in place of the standard linear y = b + Wa. Over time, the network can learn to ignore the noisy stream, but will do so at different rates in different parts of the state space, allowing state-conditional exploration with a form of self-annealing. | 1710.02298#15 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 16 | The Integrated Agent In this paper we integrate all the aforementioned compo- nents into a single integrated agent, which we call Rainbow. First, we replace the 1-step distributional loss (3) with a multi-step variant. We construct the target distribution by contracting the value distribution in St+n according to the cumulative discount, and shifting it by the truncated n-step discounted return. This corresponds to deï¬ning the target distribution as d(n) t+n)). The resulting loss is
D_{\mathrm{KL}}(\Phi_z d_t^{(n)} \,\|\, d_t) ,
where, again, Φz is the projection onto z.
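A sketch of constructing such a target, assuming a fixed support built as z = numpy.linspace(v_min, v_max, n_atoms); the projection below follows the usual categorical (C51-style) procedure, and all names are illustrative rather than taken from the paper:

```python
import numpy as np

def distributional_target(z, next_probs, n_step_return, discount_n):
    """Shift and contract the support by the n-step return, then project the
    resulting distribution back onto the fixed support z (the Phi_z step).

    z             : fixed support (1-D ndarray of atoms)
    next_probs    : probabilities of the bootstrap action under the target net
    n_step_return : truncated n-step return R_t^(n) (scalar)
    discount_n    : cumulative discount gamma_t^(n) (0 if the episode ended)
    """
    v_min, v_max = z[0], z[-1]
    delta = (v_max - v_min) / (len(z) - 1)
    tz = np.clip(n_step_return + discount_n * z, v_min, v_max)
    b = (tz - v_min) / delta                      # fractional atom positions
    lower = np.floor(b).astype(int)
    upper = np.ceil(b).astype(int)
    target = np.zeros_like(next_probs)
    for p, pos, lo, hi in zip(next_probs, b, lower, upper):
        if lo == hi:                              # lands exactly on an atom
            target[lo] += p
        else:                                     # split mass between neighbours
            target[lo] += p * (hi - pos)
            target[hi] += p * (pos - lo)
    return target
```

Minimising the cross-entropy between this projected target and the online distribution for (S_t, A_t) gives the KL term above.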
We combine the multi-step distributional loss with double Q-learning by using the greedy action in St+n selected ac- cording to the online network as the bootstrap action aâ t+n, and evaluating such action using the target network.
In standard proportional prioritized replay (Schaul et al. 2015) the absolute TD error is used to prioritize the tran- sitions. This can be computed in the distributional setting, using the mean action values. However, in our experiments all distributional Rainbow variants prioritize transitions by the KL loss, since this is what the algorithm is minimizing:
pt â DKL(Φzd(n) t ||dt) . | 1710.02298#16 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 17 | pt â DKL(Φzd(n) t ||dt) .
The KL loss as priority might be more robust to noisy stochastic environments because the loss can continue to de- crease even when the returns are not deterministic.
The network architecture is a dueling network architec- ture adapted for use with return distributions. The network
has a shared representation fξ(s), which is then fed into a value stream vη with Natoms outputs, and into an advantage stream aξ with Natoms à Nactions outputs, where ai ξ(fξ(s), a) will denote the output corresponding to atom i and action a. For each atom zi, the value and advantage streams are aggregated, as in dueling DQN, and then passed through a softmax layer to obtain the normalised parametric distribu- tions used to estimate the returnsâ distributions: Ï(Ï, a) â ai
p_\theta^i(s, a) = \frac{\exp(v_\eta^i(\phi) + a_\psi^i(\phi, a) - \bar a_\psi^i(s))}{\sum_j \exp(v_\eta^j(\phi) + a_\psi^j(\phi, a) - \bar a_\psi^j(s))} ,
where ¢ = fe(s) and @i,(s) = y4â Vy ai, (9,0). | 1710.02298#17 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 18 | where ¢ = fe(s) and @i,(s) = y4â Vy ai, (9,0).
We then replace all linear layers with their noisy equiva- lent described in Equation (4). Within these noisy linear lay- ers we use factorised Gaussian noise (Fortunato et al. 2017) to reduce the number of independent noise variables.
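A minimal sketch of a noisy linear forward pass with factorised Gaussian noise (assumptions: weight matrices have shape [out, in], `w_noisy` and `b_noisy` are the learnable noise scales, `rng` is a numpy Generator such as np.random.default_rng(), and the σ0 initialisation is omitted):

```python
import numpy as np

def _f(x):
    # Factorised-noise transform f(x) = sign(x) * sqrt(|x|).
    return np.sign(x) * np.sqrt(np.abs(x))

def noisy_linear(x, w, b, w_noisy, b_noisy, rng):
    """Deterministic stream (b + W x) plus a noisy stream whose weights are
    element-wise products of learnable scales and factorised Gaussian noise."""
    eps_in = _f(rng.standard_normal(w.shape[1]))
    eps_out = _f(rng.standard_normal(w.shape[0]))
    eps_w = np.outer(eps_out, eps_in)
    eps_b = eps_out
    return (b + w @ x) + (b_noisy * eps_b + (w_noisy * eps_w) @ x)
```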
Experimental Methods We now describe the methods and setup used for conï¬guring and evaluating the learning agents.
Evaluation Methodology. We evaluated all agents on 57 Atari 2600 games from the arcade learning environment (Bellemare et al. 2013). We follow the training and evalu- ation procedures of Mnih et al. (2015) and van Hasselt et al. (2016). The average scores of the agent are evaluated during training, every 1M steps in the environment, by suspending learning and evaluating the latest agent for 500K frames. Episodes are truncated at 108K frames (or 30 minutes of simulated play), as in van Hasselt et al. (2016). | 1710.02298#18 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 19 | Agentsâ scores are normalized, per game, so that 0% cor- responds to a random agent and 100% to the average score of a human expert. Normalized scores can be aggregated across all Atari levels to compare the performance of dif- ferent agents. It is common to track the median human nor- malized performance across all games. We also consider the number of games where the agentâs performance is above some fraction of human performance, to disentangle where improvements in the median come from. The mean human normalized performance is potentially less informative, as it is dominated by a few games (e.g., Atlantis) where agents achieve scores orders of magnitude higher than humans do. Besides tracking the median performance as a function of environment steps, at the end of training we re-evaluate the best agent snapshot using two different testing regimes. In the no-ops starts regime, we insert a random number (up to 30) of no-op actions at the beginning of each episode (as we do also in training). In the human starts regime, episodes are initialized with points randomly sampled from the initial portion of human expert trajectories (Nair et al. 2015); the difference between the two regimes indicates the extent to which the agent has over-ï¬t to its own trajectories. | 1710.02298#19 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 20 | Due to space constraints, we focus on aggregate results across games. However, in the appendix we provide full learning curves for all games and all agents, as well as de- tailed comparison tables of raw and normalized scores, in both the no-op and human starts testing regimes.
Hyper-parameter tuning. All Rainbowâs components have a number of hyper-parameters. The combinatorial space of hyper-parameters is too large for an exhaustive search, therefore we have performed limited tuning. For each component, we started with the values used in the paper that introduced this component, and tuned the most sensitive among hyper-parameters by manual coordinate descent.
DQN and its variants do not perform learning updates dur- ing the ï¬rst 200K frames, to ensure sufï¬ciently uncorrelated updates. We have found that, with prioritized replay, it is possible to start learning sooner, after only 80K frames. | 1710.02298#20 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
DQN starts with an exploration ε of 1, corresponding to acting uniformly at random; it anneals the amount of exploration over the first 4M frames, to a final value of 0.1 (lowered to 0.01 in later variants). Whenever using Noisy Nets, we acted fully greedily (ε = 0), with a value of 0.5 for the σ0 hyper-parameter used to initialize the weights in the noisy stream¹. For agents without Noisy Nets, we used ε-greedy but decreased the exploration rate faster than was previously used, annealing ε to 0.01 in the first 250K frames.
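The annealing schedules above are simple linear ramps; a sketch with illustrative defaults (not the exact training code):

```python
def linear_anneal(step, start, end, anneal_steps):
    """Linearly move a hyper-parameter from `start` to `end` over
    `anneal_steps` frames, then hold it at `end`."""
    frac = min(step / anneal_steps, 1.0)
    return start + frac * (end - start)

# e.g. epsilon = linear_anneal(frame, start=1.0, end=0.01, anneal_steps=250_000)
```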
We used the Adam optimizer (Kingma and Ba 2014), which we found less sensitive to the choice of the learn- ing rate than RMSProp. DQN uses a learning rate of a = 0.00025 In all Rainbowâs variants we used a learning rate of a/4, selected among {a/2,a/4,a/6}, and a value of 1.5 x 1074 for Adamâs ⬠hyper-parameter. | 1710.02298#21 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 22 | For replay prioritization we used the recommended pro- portional variant, with priority exponent Ï of 0.5, and lin- early increased the importance sampling exponent β from 0.4 to 1 over the course of training. The priority exponent Ï was tuned comparing values of {0.4, 0.5, 0.7}. Using the KL loss of distributional DQN as priority, we have observed that performance is very robust to the choice of Ï.
The value of n in multi-step learning is a sensitive hyper-parameter of Rainbow. We compared values of n = 1, 3, and 5. We observed that both n = 3 and 5 did well initially, but overall n = 3 performed the best by the end.
The hyper-parameters (see Table 1) are identical across all 57 games, i.e., the Rainbow agent really is a single agent setup that performs well across all the games.
¹The noise was generated on the GPU. Tensorflow noise generation can be unreliable on GPU. If generating the noise on the CPU, lowering σ0 to 0.1 may be helpful.
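A sketch of how σ0 enters the initialisation of a noisy layer, following the factorised-Gaussian scheme of Fortunato et al. (2017); the exact formula below is an assumption taken from that scheme, not restated in this paper:

```python
import numpy as np

def init_noisy_linear(fan_in, fan_out, sigma_0=0.5, rng=np.random):
    """Mean and sigma parameters of a factorised noisy linear layer."""
    bound = 1.0 / np.sqrt(fan_in)
    w_mu = rng.uniform(-bound, bound, size=(fan_out, fan_in))
    b_mu = rng.uniform(-bound, bound, size=fan_out)
    w_sigma = np.full((fan_out, fan_in), sigma_0 / np.sqrt(fan_in))
    b_sigma = np.full(fan_out, sigma_0 / np.sqrt(fan_in))
    return w_mu, w_sigma, b_mu, b_sigma
```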
Parameter | Value
Min history to start learning | 80K frames
Adam learning rate | 0.0000625
Exploration ε | 0.0
Noisy Nets σ0 | 0.5
Target network period | 32K frames
Adam ε | 1.5 × 10^-4
Prioritization type | proportional
Prioritization exponent ω | 0.5
Prioritization importance sampling β | 0.4 → 1.0
Multi-step returns n | 3
Distributional atoms | 51
Distributional min/max values | [-10, 10]

Table 1: Rainbow hyper-parameters
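The same settings, collected as a single configuration object (a sketch; the key names are ours, the values are those of Table 1):

```python
RAINBOW_CONFIG = {
    "min_history_frames": 80_000,
    "adam_lr": 6.25e-5,
    "adam_eps": 1.5e-4,
    "exploration_eps": 0.0,                  # Noisy Nets replace epsilon-greedy
    "noisy_sigma_0": 0.5,
    "target_update_period_frames": 32_000,
    "priority_exponent_omega": 0.5,
    "importance_sampling_beta": (0.4, 1.0),  # annealed linearly over training
    "n_step": 3,
    "num_atoms": 51,
    "v_min": -10.0,
    "v_max": 10.0,
}
```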
[Figure 2 panels: number of games above 20%, 50%, 100%, 200% and 500% human performance versus millions of frames, for DQN, DDQN, Prioritized DDQN, Dueling DDQN, A3C, Distributional DQN, Noisy DQN, Rainbow, and the Rainbow ablations.]
Figure 2: Each plot shows, for several agents, the number of games where they have achieved at least a given fraction of human performance, as a function of time. From left to right we consider the 20%, 50%, 100%, 200% and 500% thresholds. On the first row we compare Rainbow to the baselines. On the second row we compare Rainbow to its ablations.
# Analysis

In this section we analyse the main experimental results. First, we show that Rainbow compares favorably to several published agents. Then we perform ablation studies, comparing several variants of the agent, each corresponding to removing a single component from Rainbow.
Comparison to published baselines. In Figure 1 we compare Rainbow's performance (measured in terms of the median human normalized score across games) to the corresponding curves for A3C, DQN, DDQN, Prioritized DDQN, Dueling DDQN, Distributional DQN, and Noisy DQN. We thank the authors of the Dueling and Prioritized agents for providing the learning curves of these, and report our own re-runs for DQN, A3C, DDQN, Distributional DQN and Noisy DQN. The performance of Rainbow is significantly better than any of the baselines, both in data efficiency, as well as in final performance. Note that we match the final performance of DQN after 7M frames, surpass the best final performance of these baselines in 44M frames, and reach substantially improved final performance.
In the final evaluations of the agent, after the end of training, Rainbow achieves a median score of 223% in the no-ops regime; in the human starts regime we measured a median score of 153%. In Table 2 we compare these scores to the published median scores of the individual baselines.
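Human normalized scores are computed per game in the standard way before taking the median; a sketch (function names are ours):

```python
import numpy as np

def human_normalized(agent, random, human):
    """Percentage of the human-random gap recovered by the agent."""
    return 100.0 * (agent - random) / (human - random)

def median_normalized_score(per_game):
    """per_game: iterable of (agent_score, random_score, human_score), one per game."""
    return float(np.median([human_normalized(a, r, h) for a, r, h in per_game]))
```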
In Figure 2 (top row) we plot the number of games where an agent has reached some specified level of human normalized performance. From left to right, the subplots show on how many games the different agents have achieved 20%, 50%, 100%, 200% and 500% human normalized performance. This allows us to identify where the overall improvements in performance come from. Note that the gap in performance between Rainbow and other agents is apparent at all levels of performance: the Rainbow agent is improving scores on games where the baseline agents were already good, as well as improving in games where baseline agents are still far from human performance.

Learning speed. As in the original DQN setup, we ran each agent on a single GPU. The 7M frames required to match DQN's final performance correspond to less than 10 hours of wall-clock time. A full run of 200M frames corresponds to approximately 10 days, and this varies by less than 20% between all of the discussed variants.

Agent | no-ops | human starts
DQN | 79% | 68%
DDQN (*) | 117% | 110%
Prioritized DDQN (*) | 140% | 128%
Dueling DDQN (*) | 151% | 117%
A3C (*) | - | 116%
Noisy DQN | 118% | 102%
Distributional DQN | 164% | 125%
Rainbow | 223% | 153%
Table 2: Median normalized scores of the best agent snapshots for Rainbow and baselines. For methods marked with an asterisk, the scores come from the corresponding publication. DQN's scores come from the dueling networks paper, since DQN's paper did not report scores for all 57 games. The other scores come from our own implementations.
[Figure 3 legend: DQN, no double, no priority, no dueling, no multi-step, no distribution, no noisy, Rainbow; axes: median normalized score versus millions of frames.]
Figure 3: Median human-normalized performance across 57 Atari games, as a function of time. We compare our integrated agent (rainbow-colored) to DQN (gray) and to six different ablations (dashed lines). Curves are smoothed with a moving average over 5 points.
The literature contains many alternative training setups that improve performance as a function of wall-clock time by exploiting parallelism, e.g., Nair et al. (2015), Salimans et al. (2017), and Mnih et al. (2016). Properly relating the performance across such very different hardware/compute resources is non-trivial, so we focused exclusively on algorithmic variations, allowing apples-to-apples comparisons. While we consider them to be important and complementary, we leave questions of scalability and parallelism to future work.
Ablation studies. Since Rainbow integrates several different ideas into a single agent, we conducted additional experiments to understand the contribution of the various components, in the context of this specific combination.
To gain a better understanding of the contribution of each component to the Rainbow agent, we performed ablation studies. In each ablation, we removed one component from the full Rainbow combination. Figure 3 shows a comparison for median normalized score of the full Rainbow to six ablated variants. Figure 2 (bottom row) shows a more detailed breakdown of how these ablations perform relative to different thresholds of human normalized performance, and Figure 4 shows the gain or loss from each ablation for every game, averaged over the full learning run.
Prioritized replay and multi-step learning were the two most crucial components of Rainbow, in that removing either component caused a large drop in median performance. Unsurprisingly, the removal of either of these hurt early performance. Perhaps more surprisingly, the removal of multi-step learning also hurt final performance. Zooming in on individual games (Figure 4), we see both components helped
almost uniformly across games (the full Rainbow performed better than either ablation in 53 games out of 57).
Distributional Q-learning ranked immediately below the previous techniques for relevance to the agent's performance. Notably, in early learning no difference is apparent, as shown in Figure 3, where for the first 40 million frames the distributional-ablation performed as well as the full agent. However, without distributions, the performance of the agent then started lagging behind. When the results are separated relative to human performance in Figure 2, we see that the distributional-ablation primarily seems to lag on games that are above human level or near it.
In terms of median performance, the agent performed better when Noisy Nets were included; when these are removed and exploration is delegated to the traditional ε-greedy mechanism, performance was worse in aggregate (red line in Figure 3). While the removal of Noisy Nets produced a large drop in performance for several games, it also provided small increases in other games (Figure 4).
In aggregate, we did not observe a significant difference when removing the dueling network from the full Rainbow. The median score, however, hides the fact that the impact of Dueling differed between games, as shown by Figure 4. Figure 2 shows that Dueling perhaps provided some improvement on games with above-human performance levels (# games > 200%), and some degradation on games with sub-human performance (# games > 20%).
Also in the case of double Q-learning, the observed difference in median performance (Figure 3) is limited, with the component sometimes harming or helping depending on the game (Figure 4). To further investigate the role of double Q-learning, we compared the predictions of our trained agents to the actual discounted returns computed from clipped rewards. Comparing Rainbow to the agent where double Q-learning was ablated, we observed that the actual returns are often higher than 10 and therefore fall outside the support of the distribution, spanning from -10 to +10. This leads to underestimated returns, rather than overestimations. We hypothesize that clipping the values to this constrained range counteracts the overestimation bias of Q-learning. Note, however, that the importance of double Q-learning may increase if the support of the distributions is expanded.
In the appendix, for each game we show final performance and learning curves for Rainbow, its ablations, and baselines.
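For reference, the single and double Q-learning targets compared in the analysis above can be sketched as follows (scalar form for brevity; in Rainbow the bootstrap is the mean of a categorical distribution whose support is clipped to [-10, 10]):

```python
import numpy as np

def q_target(r, q_next_online, gamma=0.99):
    """Standard Q-learning: bootstrap from the maximising action's own estimate."""
    return r + gamma * np.max(q_next_online)

def double_q_target(r, q_next_online, q_next_target, gamma=0.99):
    """Double Q-learning: select the action online, evaluate it with the target network."""
    return r + gamma * q_next_target[int(np.argmax(q_next_online))]
```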
# Discussion

We have demonstrated that several improvements to DQN can be successfully integrated into a single learning algorithm that achieves state-of-the-art performance. Moreover, we have shown that within the integrated algorithm, all but one of the components provided clear performance benefits. There are many more algorithmic components that we were not able to include, which would be promising candidates for further experiments on integrated agents. Among the many possible candidates, we discuss several below.
We have focused here on value-based methods in the Q-learning family. We have not considered purely policy-based RL algorithms such as trust-region policy optimisation (Schulman et al. 2015), nor actor-critic methods (Mnih et al. 2016; O'Donoghue et al. 2016).
Figure 4: Performance drops of ablation agents on all 57 Atari games. Performance is the area under the learning curve, normalized relative to the Rainbow agent and DQN. Two games where DQN outperforms Rainbow are omitted. The ablation leading to the strongest drop is highlighted for each game. The removal of either prioritization or multi-step learning reduces performance across most games, but the contribution of each component varies substantially per game.
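The normalization used in Figure 4 can be sketched as follows (our reading of "area under the learning curve, normalized relative to the Rainbow agent and DQN"):

```python
import numpy as np

def normalized_auc(curve_ablation, curve_dqn, curve_rainbow):
    """Area under the learning curve, rescaled so that DQN = 0 and Rainbow = 1."""
    auc = lambda c: float(np.trapz(c))
    return (auc(curve_ablation) - auc(curve_dqn)) / (auc(curve_rainbow) - auc(curve_dqn))
```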
A number of algorithms exploit a sequence of data to achieve improved learning efficiency. Optimality tightening (He et al. 2016) uses multi-step returns to construct additional inequality bounds, instead of using them to replace the 1-step targets used in Q-learning. Eligibility traces allow a soft combination over n-step returns (Sutton 1988). However, sequential methods all leverage more computation per gradient than the multi-step targets used in Rainbow. Furthermore, introducing prioritized sequence replay raises questions of how to store, replay and prioritise sequences.
Episodic control (Blundell et al. 2016) also focuses on data efficiency, and was shown to be very effective in some domains. It improves early learning by using episodic memory as a complementary learning system, capable of immediately re-enacting successful action sequences.
Besides Noisy Nets, numerous other exploration methods could also be useful algorithmic ingredients: among these Bootstrapped DQN (Osband et al. 2016), intrinsic motivation (Stadie, Levine, and Abbeel 2015) and count-based exploration (Bellemare et al. 2016). Integration of these alternative components is a fruitful subject for further research.
We have focused on the core learning updates, without exploring alternative computational architectures. Asynchronous learning from parallel copies of the environment, as in A3C (Mnih et al. 2016), Gorila (Nair et al. 2015), or Evolution Strategies (Salimans et al. 2017), can be effective in speeding up learning, at least in terms of wall-clock time. Note, however, that they can be less data efficient.
Hierarchical RL has also been applied with success to several complex Atari games. Among successful applications of HRL we highlight h-DQN (Kulkarni et al. 2016a) and FeUdal Networks (Vezhnevets et al. 2017).
The state representation could also be made more efficient by exploiting auxiliary tasks such as pixel control or feature control (Jaderberg et al. 2016), supervised predictions (Dosovitskiy and Koltun 2016) or successor features (Kulkarni et al. 2016b).
To evaluate Rainbow fairly against the baselines, we have followed the common domain modifications of clipping rewards, fixed action-repetition, and frame-stacking, but these might be removed by other learning algorithm improvements. Pop-Art normalization (van Hasselt et al. 2016) allows reward clipping to be removed, while preserving a similar level of performance. Fine-grained action repetition (Sharma, Lakshminarayanan, and Ravindran 2017) enabled learning how to repeat actions. A recurrent state network (Hausknecht and Stone 2015) can learn a temporal state representation, replacing the fixed stack of observation frames.
# References

Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res. (JAIR) 47:253–279.
Bellemare, M. G.; Srinivasan, S.; Ostrovski, G.; Schaul, T.; Saxton, D.; and Munos, R. 2016. Unifying count-based exploration and intrinsic motivation. In NIPS.
Bellemare, M. G.; Dabney, W.; and Munos, R. 2017. A distributional perspective on reinforcement learning. In ICML.
Blundell, C.; Uria, B.; Pritzel, A.; Li, Y.; Ruderman, A.; Leibo, J. Z.; Rae, J.; Wierstra, D.; and Hassabis, D. 2016. Model-free episodic control. ArXiv e-prints.
Dosovitskiy, A., and Koltun, V. 2016. Learning to act by predicting the future. CoRR abs/1611.01779.
Fortunato, M.; Azar, M. G.; Piot, B.; Menick, J.; Osband, I.; Graves, A.; Mnih, V.; Munos, R.; Hassabis, D.; Pietquin, O.; Blundell, C.; and Legg, S. 2017. Noisy networks for exploration. CoRR abs/1706.10295.
Hausknecht, M., and Stone, P. 2015. Deep recurrent Q-learning for partially observable MDPs. arXiv preprint arXiv:1507.06527.
He, F. S.; Liu, Y.; Schwing, A. G.; and Peng, J. 2016. Learning to play in a day: Faster deep reinforcement learning by optimality tightening. CoRR abs/1611.01606.
Jaderberg, M.; Mnih, V.; Czarnecki, W. M.; Schaul, T.; Leibo, J. Z.; Silver, D.; and Kavukcuoglu, K. 2016. Reinforcement learning with unsupervised auxiliary tasks. CoRR abs/1611.05397.
Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR).
Kulkarni, T. D.; Narasimhan, K.; Saeedi, A.; and Tenenbaum, J. B. 2016a. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. CoRR abs/1604.06057.
Kulkarni, T. D.; Saeedi, A.; Gautam, S.; and Gershman, S. J. 2016b. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396.
Lin, L.-J. 1992. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning 8(3):293–321.
Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; and Riedmiller, M. A. 2013. Playing Atari with deep reinforcement learning. CoRR abs/1312.5602.
Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; Petersen, S.; Beattie, C.; Sadik, A.; Antonoglou, I.; King, H.; Kumaran, D.; Wierstra, D.; Legg, S.; and Hassabis, D. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529–533.
Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning.
Nair, A.; Srinivasan, P.; Blackwell, S.; Alcicek, C.; Fearon, R.; De Maria, A.; Panneershelvam, V.; Suleyman, M.; Beattie, C.; Petersen, S.; Legg, S.; Mnih, V.; Kavukcuoglu, K.; and Silver, D. 2015. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296.
O'Donoghue, B.; Munos, R.; Kavukcuoglu, K.; and Mnih, V. 2016. PGQ: Combining policy gradient and Q-learning. CoRR abs/1611.01626.
Osband, I.; Blundell, C.; Pritzel, A.; and Van Roy, B. 2016. Deep exploration via bootstrapped DQN. In NIPS.
Salimans, T.; Ho, J.; Chen, X.; and Sutskever, I. 2017. Evolution strategies as a scalable alternative to reinforcement learning. CoRR abs/1703.03864.
Schaul, T.; Quan, J.; Antonoglou, I.; and Silver, D. 2015. Prioritized experience replay. In Proc. of ICLR.
Schulman, J.; Levine, S.; Moritz, P.; Jordan, M.; and Abbeel, P. 2015. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning - Volume 37, ICML'15, 1889–1897. JMLR.org.
Sharma, S.; Lakshminarayanan, A. S.; and Ravindran, B. 2017. Learning to repeat: Fine grained action repetition for deep reinforcement learning. arXiv preprint arXiv:1702.06054.
Stadie, B. C.; Levine, S.; and Abbeel, P. 2015. Incentivizing exploration in reinforcement learning with deep predictive models. CoRR abs/1507.00814.
Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction. The MIT Press, Cambridge MA.
Sutton, R. S. 1988. Learning to predict by the methods of temporal differences. Machine Learning 3(1):9–44.
Tieleman, T., and Hinton, G. 2012. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4(2):26–31.
van Hasselt, H.; Guez, A.; Hessel, M.; Mnih, V.; and Silver, D. 2016. Learning values across many orders of magnitude. In Advances in Neural Information Processing Systems 29, 4287–4295.
van Hasselt, H.; Guez, A.; and Silver, D. 2016. Deep reinforcement learning with double Q-learning. In Proc. of AAAI, 2094–2100.
van Hasselt, H. 2010. Double Q-learning. In Advances in Neural Information Processing Systems 23, 2613–2621.
Vezhnevets, A. S.; Osindero, S.; Schaul, T.; Heess, N.; Jaderberg, M.; Silver, D.; and Kavukcuoglu, K. 2017. FeUdal networks for hierarchical reinforcement learning. CoRR abs/1703.01161.
Wang, Z.; Schaul, T.; Hessel, M.; van Hasselt, H.; Lanctot, M.; and de Freitas, N. 2016. Dueling network architectures for deep reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning, 1995–2003.
# Appendix
Table 3 lists the preprocessing of environment frames, rewards and discounts introduced by DQN. Table 4 lists the additional hyper-parameters that Rainbow inherits from DQN and the other baselines considered in this paper. The hyper-parameters for which Rainbow uses non-standard settings are instead listed in the main text. In the subsequent pages, we list the tables showing, for each game, the score achieved by Rainbow and several baselines in both the no-ops regime (Table 6) and the human-starts regime (Table 5). In Figures 5 and 6 we also plot, for each game, the learning curves of Rainbow, several baselines, and all ablation experiments. These learning curves are smoothed with a moving average over a window of 10.
Hyper-parameter | Value
Grey-scaling | True
Observation down-sampling | (84, 84)
Frames stacked | 4
Action repetitions | 4
Reward clipping | [-1, 1]
Terminal on loss of life | True
Max frames per episode | 108K
Table 3: Preprocessing: the values of these hyper-parameters are the same used by DQN and its variants. They are here listed for completeness. Observations are grey-scaled and rescaled to 84 × 84 pixels. 4 consecutive frames are concatenated as each state's representation. Each action selected by the agent is repeated 4 times. Rewards are clipped to [-1, +1]. In games where the player has multiple lives, transitions associated to the loss of a life are considered terminal. All episodes are capped after 108K frames.
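A simplified sketch of this preprocessing (ours; the resize below is nearest-neighbour for brevity, whereas the exact interpolation used in practice is not specified here):

```python
import numpy as np
from collections import deque

def preprocess_frame(rgb_frame):
    """Grey-scale and down-sample a (H, W, 3) frame to 84x84."""
    grey = rgb_frame.mean(axis=-1)
    h, w = grey.shape
    rows = np.arange(84) * h // 84
    cols = np.arange(84) * w // 84
    return grey[rows][:, cols].astype(np.float32)

def clip_reward(r):
    return float(np.clip(r, -1.0, 1.0))

frame_stack = deque(maxlen=4)  # four consecutive processed frames form the state
```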
Hyper-parameter | Value
Q network: channels | 32, 64, 64
Q network: filter size | 8 × 8, 4 × 4, 3 × 3
Q network: stride | 4, 2, 1
Q network: hidden units | 512
Q network: output units | number of actions
Discount factor | 0.99
Memory size | 1M transitions
Replay period | every 4 agent steps
Minibatch size | 32
Table 4: Additional hyper-parameters: the values of these hyper-parameters are the same used by DQN and its variants. The network has 3 convolutional layers: with 32, 64 and 64 channels. The layers use 8 × 8, 4 × 4, 3 × 3 filters with strides of 4, 2, 1, respectively. The value and advantage streams of the dueling architecture both have a hidden layer with 512 units. The output layer of the network has a number of units equal to the number of actions available in the game. We use a discount factor of 0.99, which is set to 0 on terminal transitions. We perform a learning update every 4 agent steps, using mini-batches of 32 transitions.
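A sketch of this network in PyTorch (simplified: plain linear layers instead of the noisy layers used by Rainbow, and a scalar dueling head rather than the distributional one; the layer shapes follow Table 4):

```python
import torch.nn as nn

class DuelingNetwork(nn.Module):
    def __init__(self, num_actions):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.value = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, 1))
        self.advantage = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, num_actions))

    def forward(self, x):                      # x: (batch, 4, 84, 84)
        h = self.conv(x).flatten(start_dim=1)  # -> (batch, 64 * 7 * 7)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)
```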
Game DQN A3C DDQN Prior. DDQN Duel. DDQN Distrib. DQN Noisy DQN Rainbow
alien amidar assault asterix asteroids atlantis bank heist battle zone beam rider berzerk bowling boxing breakout centipede chopper command crazy climber defender demon attack double dunk enduro ï¬shing derby freeway frostbite gopher gravitar hero ice hockey kangaroo krull kung fu master montezuma revenge ms pacman name this game phoenix pitfall pong private eye qbert road runner robotank seaquest skiing solaris space invaders star gunner surround tennis time pilot tutankham venture video pinball wizard of wor yars revenge zaxxon | 1710.02298#49 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
1710.02298 | 54 | 1,486.5 172.7 3,994.8 15,840.0 2,035.4 445,360.0 1,129.3 31,320.0 14,591.3 910.6 65.7 77.3 411.6 4,881.0 3,784.0 124,566.0 33,996.0 56,322.8 -0.8 2,077.4 -4.1 0.2 2,332.4 20,051.4 297.0 15,207.9 -1.3 10,334.0 8,051.6 24,288.0 22.0 2,250.6 11,185.1 20,410.5 -46.9 18.8 292.6 14,175.8 58,549.0 62.0 37,361.6 -11,928.0 1,768.4 5,993.1 90,804.0 4.0 4.4 6,601.0 48.0 200.0 110,976.2 7,054.0 25,976.5 10,164.0 | 1710.02298#54 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
1710.02298 | 55 | 1,997.5 237.7 5,101.3 395,599.5 2,071.7 289,803.0 835.6 32,250.0 15,002.4 1,000.0 76.8 62.1 548.7 7,476.9 9,600.5 154,416.5 32,246.0 109,856.6 -3.7 2,133.4 -4.9 28.8 2,813.9 27,778.3 422.0 28,554.2 -0.1 9,555.5 6,757.8 33,890.0 130.0 2,064.1 11,382.3 31,358.3 -342.8 18.9 5,717.5 15,035.9 56,086.0 49.8 3,275.4 -13,247.7 2,530.2 6,368.6 67,054.5 4.5 22.6 7,684.5 124.3 462.0 455,052.7 11,824.5 8,267.7 15,130.0 | 1710.02298#55 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
1710.02298 | 58 | 6,022.9 202.8 14,491.7 280,114.0 2,249.4 814,684.0 826.0 52,040.0 21,768.5 1,793.4 39.4 54.9 379.5 7,160.9 10,916.0 143,962.0 47,671.3 109,670.7 -0.6 2,061.1 22.6 29.1 4,141.1 72,595.7 567.5 50,496.8 -0.7 10,841.0 6,715.5 28,999.8 154.0 2,570.2 11,686.5 103,061.6 -37.6 19.0 1,704.4 18,397.6 54,261.0 55.2 19,176.0 -11,685.8 2,860.7 12,629.0 123,853.0 7.0 -2.2 11,190.5 126.9 45.0 506,817.2 14,631.5 93,007.9 19,658.0
Game DQN DDQN Prior. DDQN Duel. DDQN Distrib. DQN Noisy DQN Rainbow
alien amidar assault asterix asteroids atlantis bank heist battle zone beam rider berzerk bowling boxing breakout centipede chopper command crazy climber defender demon attack double dunk enduro ï¬shing derby freeway frostbite gopher gravitar hero ice hockey kangaroo krull kung fu master montezuma revenge ms pacman name this game phoenix pitfall pong private eye qbert road runner robotank seaquest skiing solaris space invaders star gunner surround tennis time pilot tutankham venture video pinball wizard of wor yars revenge zaxxon | 1710.02298#59 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
1710.02298 | 62 | 6,648.6 2,051.8 7,965.7 41,268.0 1,699.3 427,658.0 1,126.8 38,130.0 22,430.7 1,614.2 62.6 98.8 381.5 5,175.4 5,135.0 183,137.0 24,162.5 70,171.8 4.8 2,155.0 30.2 32.9 3,421.6 49,097.4 330.5 27,153.9 0.3 14,492.0 10,263.1 43,470.0 0.0 4,751.2 13,439.4 32,808.3 0.0 20.7 200.0 18,802.8 62,785.0 58.6 44,417.4 -9,900.5 1,710.8 7,696.9 56,641.0 2.1 0.0 11,448.0 87.2 863.0 406,420.4 10,373.0 16,451.7 13,490.0 | 1710.02298#62 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 63 | 4,461.4 2,354.5 4,621.0 28,188.0 2,837.7 382,572.0 1,611.9 37,150.0 12,164.0 1,472.6 65.5 99.4 345.3 7,561.4 11,215.0 143,570.0 42,214.0 60,813.3 0.1 2,258.2 46.4 0.0 4,672.8 15,718.4 588.0 20,818.2 0.5 14,854.0 11,451.9 34,294.0 0.0 6,283.5 11,971.1 23,092.2 0.0 21.0 103.0 19,220.3 69,524.0 65.3 50,254.2 -8,857.4 2,250.8 6,427.3 89,238.0 4.4 5.1 11,666.0 211.4 497.0 98,209.5 7,855.0 49,622.1 12,944.0 | 1710.02298#63 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 64 | 4,055.8 1,267.9 5,909.0 400,529.5 2,354.7 273,895.0 1,056.7 41,145.0 13,213.4 1,421.8 74.1 98.1 612.5 9,015.5 13,136.0 178,355.0 37,896.8 110,626.5 -3.8 2,259.3 9.1 33.6 3,938.2 28,841.0 681.0 33,860.9 1.3 12,909.0 9,885.9 43,009.0 367.0 3,769.2 12,983.6 34,775.0 -2.1 20.8 15,172.9 16,956.0 63,366.0 54.2 4,754.4 -14,959.8 5,643.1 6,869.1 69,306.5 6.2 23.6 7,875.0 249.4 1,107.0 478,646.7 15,994.5 16,608.6 18,347.5 | 1710.02298#64 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
1710.02298 | 66 | 9,491.7 5,131.2 14,198.5 428,200.3 2,712.8 826,659.5 1,358.0 62,010.0 16,850.2 2,545.6 30.0 99.6 417.5 8,167.3 16,654.0 168,788.5 55,105.0 111,185.2 -0.3 2,125.9 31.3 34.0 9,590.5 70,354.6 1,419.3 55,887.4 1.1 14,637.5 8,741.5 52,181.0 384.0 5,380.4 13,136.0 108,528.6 0.0 20.9 4,234.0 33,817.5 62,041.0 61.4 15,898.9 -12,957.8 3,560.3 18,789.0 127,029.0 9.7 -0.0 12,926.0 241.0 5.5 533,936.5 17,862.5 102,557.0 22,209.5
â | 1710.02298#66 | Rainbow: Combining Improvements in Deep Reinforcement Learning | The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance. | http://arxiv.org/pdf/1710.02298 | Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver | cs.AI, cs.LG | Under review as a conference paper at AAAI 2018 | null | cs.AI | 20171006 | 20171006 | [
{
"id": "1507.06527"
},
{
"id": "1702.06054"
},
{
"id": "1606.02396"
},
{
"id": "1507.04296"
}
] |
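Raw per-game scores like the ones in these chunks are typically compared across games via human-normalized scores, the aggregate metric the Rainbow paper reports. Below is a minimal sketch of that normalization; the function name and the random/human reference values are placeholders chosen for illustration, not the baselines used in the paper.

```python
# Minimal sketch of human-normalized scoring:
#   normalized = 100 * (agent - random) / (human - random)
# The baseline values below are PLACEHOLDERS for illustration only,
# not the reference scores used in the paper.
def human_normalized(agent: float, random_play: float, human: float) -> float:
    return 100.0 * (agent - random_play) / (human - random_play)

# Hypothetical example: an agent scoring 6,022.9 on a game where random
# play scores 200.0 and a human tester scores 7,000.0.
print(f"{human_normalized(6022.9, 200.0, 7000.0):.1f}%")  # -> 85.6%
```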