Dataset schema (column name, dtype, and min–max of the string length or integer value):

- doi: string, length 10–10
- chunk-id: int64, 0–936
- chunk: string, length 401–2.02k
- id: string, length 12–14
- title: string, length 8–162
- summary: string, length 228–1.92k
- source: string, length 31–31
- authors: string, length 7–6.97k
- categories: string, length 5–107
- comment: string, length 4–398
- journal_ref: string, length 8–194
- primary_category: string, length 5–17
- published: string, length 8–8
- updated: string, length 8–8
- references: list
1605.02688
69
[10] Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, Daniel Nouri, Daniel Maturana, Martin Thoma, Eric Battenberg, Jack Kelly, Jeffrey De Fauw, Michael Heilman, diogo149, Brian McFee, Hendrik Weideman, takacsg84, peterderivaz, Jon, instagibbs, Dr. Kashif Rasul, CongLiu, Britefury, and Jonas Degrave, “Lasagne: First release.” (2015). [11] François Chollet, “Keras,” https://github.com/fchollet/keras (2015). [12] John Salvatier, Thomas V. Wiecki, and Christopher Fonnesbeck, “Probabilistic programming in Python using PyMC3,” PeerJ Computer Science 2, e55 (2016).
1605.02688#69
Theano: A Python framework for fast computation of mathematical expressions
Theano is a Python library that allows one to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction, it has been one of the most used CPU and GPU mathematical compilers, especially in the machine learning community, and has shown steady performance improvements. Theano has been actively and continuously developed since 2008; multiple frameworks have been built on top of it, and it has been used to produce many state-of-the-art machine learning models. The present article is structured as follows. Section I provides an overview of the Theano software and its community. Section II presents the principal features of Theano and how to use them, and compares them with other similar projects. Section III focuses on recently introduced functionalities and improvements. Section IV compares the performance of Theano against Torch7 and TensorFlow on several machine learning models. Section V discusses current limitations of Theano and potential ways of improving it.
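As a brief illustration of the define/optimize/evaluate workflow the abstract describes, the sketch below builds a small symbolic expression, takes its gradient, and compiles it into a callable function using Theano's public API. The toy loss, variable names, and update rule are illustrative choices, not taken from the paper.

```python
# A minimal sketch of Theano's define/optimize/evaluate workflow (illustrative only).
import numpy as np
import theano
import theano.tensor as T

x = T.dmatrix('x')                              # symbolic 2-D input array
w = theano.shared(np.ones((3, 1)), name='w')    # shared (stateful) parameter
loss = (T.dot(x, w) ** 2).mean()                # symbolic scalar expression
grad_w = T.grad(loss, w)                        # symbolic gradient of loss w.r.t. w

# Compiling builds an optimized computation graph that can run on CPU or GPU.
step = theano.function(
    inputs=[x],
    outputs=loss,
    updates=[(w, w - 0.1 * grad_w)],            # gradient-descent update on the shared variable
)

print(step(np.random.randn(4, 3)))              # evaluate the compiled function on data
```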
http://arxiv.org/pdf/1605.02688
The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang
cs.SC, cs.LG, cs.MS
19 pages, 5 figures
null
cs.SC
20160509
20160509
[]
1605.02688
70
Science 2, e55 (2016). [13] Arvind and David E. Culler, “Dataflow architectures,” Annual Review of Computer Science 1, 225–253 (1986). [14] Barak A. Pearlmutter, “Fast exact multiplication by the Hessian,” Neural Computation 6, 147–160 (1994). [15] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang, “MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems,” arXiv e-prints abs/1512.01274 (2015). [16] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell, “Caffe: Convolutional architecture for fast feature embedding,” arXiv e-prints abs/1408.5093 (2014). [17] Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton, “Chainer: a next-generation open source framework for deep learning,” in Workshop on Machine Learning Systems (LearningSys), NIPS (2015).
1605.02688#70
Theano: A Python framework for fast computation of mathematical expressions
Theano is a Python library that allows one to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction, it has been one of the most used CPU and GPU mathematical compilers, especially in the machine learning community, and has shown steady performance improvements. Theano has been actively and continuously developed since 2008; multiple frameworks have been built on top of it, and it has been used to produce many state-of-the-art machine learning models. The present article is structured as follows. Section I provides an overview of the Theano software and its community. Section II presents the principal features of Theano and how to use them, and compares them with other similar projects. Section III focuses on recently introduced functionalities and improvements. Section IV compares the performance of Theano against Torch7 and TensorFlow on several machine learning models. Section V discusses current limitations of Theano and potential ways of improving it.
http://arxiv.org/pdf/1605.02688
The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang
cs.SC, cs.LG, cs.MS
19 pages, 5 figures
null
cs.SC
20160509
20160509
[]
1605.02688
71
[18] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012) pp. 1097–1105. [19] V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” arXiv e-prints abs/1603.07285 (2016). [20] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer, “cuDNN: Efficient primitives for deep learning,” arXiv e-prints abs/1410.0759 (2014). [21] Frédéric Bastien, Arnaud Bergeron, Andreas Klöckner, Pascal Vincent, and Yoshua Bengio, “A common GPU n-dimensional array for Python and C,” in Big Learning Workshop, NIPS (2011).
1605.02688#71
Theano: A Python framework for fast computation of mathematical expressions
Theano is a Python library that allows one to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction, it has been one of the most used CPU and GPU mathematical compilers, especially in the machine learning community, and has shown steady performance improvements. Theano has been actively and continuously developed since 2008; multiple frameworks have been built on top of it, and it has been used to produce many state-of-the-art machine learning models. The present article is structured as follows. Section I provides an overview of the Theano software and its community. Section II presents the principal features of Theano and how to use them, and compares them with other similar projects. Section III focuses on recently introduced functionalities and improvements. Section IV compares the performance of Theano against Torch7 and TensorFlow on several machine learning models. Section V discusses current limitations of Theano and potential ways of improving it.
http://arxiv.org/pdf/1605.02688
The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang
cs.SC, cs.LG, cs.MS
19 pages, 5 figures
null
cs.SC
20160509
20160509
[]
1605.02688
72
[22] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc’Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng, “Large scale distributed deep networks,” in Advances in Neural Information Processing Systems (2012) pp. 1223–1231. [23] Sixin Zhang, Anna E Choromanska, and Yann LeCun, “Deep learning with elastic averaging SGD,” in Advances in Neural Information Processing Systems (2015) pp. 685–693. [24] Alex Krizhevsky, “One weird trick for parallelizing convolutional neural networks,” arXiv e-prints abs/1404.5997 (2014). [25] Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun, “OverFeat: Integrated recognition, localization and detection using convolutional networks,” arXiv e-prints abs/1312.6229 (2013). [26] Karen Simonyan and Andrew Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv e-prints abs/1409.1556 (2014).
1605.02688#72
Theano: A Python framework for fast computation of mathematical expressions
Theano is a Python library that allows one to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction, it has been one of the most used CPU and GPU mathematical compilers, especially in the machine learning community, and has shown steady performance improvements. Theano has been actively and continuously developed since 2008; multiple frameworks have been built on top of it, and it has been used to produce many state-of-the-art machine learning models. The present article is structured as follows. Section I provides an overview of the Theano software and its community. Section II presents the principal features of Theano and how to use them, and compares them with other similar projects. Section III focuses on recently introduced functionalities and improvements. Section IV compares the performance of Theano against Torch7 and TensorFlow on several machine learning models. Section V discusses current limitations of Theano and potential ways of improving it.
http://arxiv.org/pdf/1605.02688
The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang
cs.SC, cs.LG, cs.MS
19 pages, 5 figures
null
cs.SC
20160509
20160509
[]
1605.02688
73
[27] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich, “Going deeper with convolutions,” in Computer Vision and Pattern Recognition (CVPR) (2015). [28] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals, “Recurrent neural network regularization,” arXiv e-prints abs/1409.2329 (2014). [29] Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville, “Describing videos by exploiting temporal structure,” in Computer Vision (ICCV), 2015 IEEE International Conference on (IEEE, 2015). [30] Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and Stephen W. Keckler, “Virtualizing Deep Neural Networks for Memory-Efficient Neural Network Design,” arXiv e-prints abs/1602.08124 (2016).
1605.02688#73
Theano: A Python framework for fast computation of mathematical expressions
Theano is a Python library that allows one to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction, it has been one of the most used CPU and GPU mathematical compilers, especially in the machine learning community, and has shown steady performance improvements. Theano has been actively and continuously developed since 2008; multiple frameworks have been built on top of it, and it has been used to produce many state-of-the-art machine learning models. The present article is structured as follows. Section I provides an overview of the Theano software and its community. Section II presents the principal features of Theano and how to use them, and compares them with other similar projects. Section III focuses on recently introduced functionalities and improvements. Section IV compares the performance of Theano against Torch7 and TensorFlow on several machine learning models. Section V discusses current limitations of Theano and potential ways of improving it.
http://arxiv.org/pdf/1605.02688
The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang
cs.SC, cs.LG, cs.MS
19 pages, 5 figures
null
cs.SC
20160509
20160509
[]
1604.06778
0
arXiv:1604.06778v3 [cs.LG] 27 May 2016 # Benchmarking Deep Reinforcement Learning for Continuous Control Yan Duan† Xi Chen† Rein Houthooft†‡ John Schulman†§ Pieter Abbeel† † University of California, Berkeley, Department of Electrical Engineering and Computer Sciences ‡ Ghent University - iMinds, Department of Information Technology § OpenAI [email protected] [email protected] [email protected] [email protected] [email protected] # Abstract
1604.06778#0
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
1
# Abstract Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers. # 1. Introduction Reinforcement learning addresses the problem of how agents should learn to take actions to maximize cumulative reward through interactions with the environment. The traditional approach for reinforcement learning algorithms requires carefully chosen feature representations, which are
1604.06778#1
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
2
Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s). Also available at https://arxiv.org/abs/1604.06778 usually hand-engineered. Recently, significant progress has been made by combining advances in deep learning for learning feature representations (Krizhevsky et al., 2012; Hinton et al., 2012) with reinforcement learning, tracing back to much earlier work of Tesauro (1995) and Bertsekas & Tsitsiklis (1995). Notable examples are training agents to play Atari games based on raw pixels (Guo et al., 2014; Mnih et al., 2015; Schulman et al., 2015a) and to acquire advanced manipulation skills using raw sensory inputs (Levine et al., 2015; Lillicrap et al., 2015; Watter et al., 2015). Impressive results have also been obtained in training deep neural network policies for 3D locomotion and manipulation tasks (Schulman et al., 2015a;b; Heess et al., 2015b).
1604.06778#2
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
3
Along with this recent progress, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a popular benchmark for evaluating algorithms designed for tasks with high-dimensional state inputs and discrete actions. However, these algorithms do not always generalize straightforwardly to tasks with continuous actions, leading to a gap in our understanding. For instance, algorithms based on Q-learning quickly become infeasible when naive discretization of the action space is performed, due to the curse of dimensionality (Bellman, 1957; Lillicrap et al., 2015). In the continuous control domain, where actions are continuous and often high-dimensional, we argue that the existing control benchmarks fail to provide a comprehensive set of challenging problems (see Section 7 for a review of existing benchmarks). Benchmarks have played a significant role in other areas such as computer vision and speech recognition. Examples include MNIST (LeCun et al., 1998), Caltech101 (Fei-Fei et al., 2006), CIFAR (Krizhevsky & Hinton, 2009), ImageNet (Deng et al., 2009), PASCAL VOC (Everingham et al., 2010), BSDS500 (Martin et
1604.06778#3
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
5
of a standardized and challenging testbed for reinforcement learning and continuous control makes it difficult to quantify scientific progress. Systematic evaluation and comparison will not only further our understanding of the strengths of existing algorithms, but also reveal their limitations and suggest directions for future research. We attempt to address this problem and present a benchmark consisting of 31 continuous control tasks. These tasks range from simple tasks, such as cart-pole balancing, to challenging tasks such as high-DOF locomotion, tasks with partial observations, and hierarchically structured tasks. Furthermore, a range of reinforcement learning algorithms are implemented on which we report novel findings based on a systematic evaluation of their effectiveness in training deep neural network policies. The benchmark and reference implementations are available at https://github.com/rllab/rllab, allowing for the development, implementation, and evaluation of new algorithms and tasks. # 2. Preliminaries In this section, we define the notation used in subsequent sections. in the supplementary materials and in the source code.
1604.06778#5
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
6
# 2. Preliminaries In this section, we define the notation used in subsequent sections. in the supplementary materials and in the source code. We choose to implement all tasks using physics simulators rather than symbolic equations, since the former approach is less error-prone and permits easy modification of each task. Tasks with simple dynamics are implemented using Box2D (Catto, 2011), an open-source, freely available 2D physics simulator. Tasks with more complicated dynamics, such as locomotion, are implemented using MuJoCo (Todorov et al., 2012), a 3D physics simulator with better modeling of contacts. # 3.1. Basic Tasks We implement five basic tasks that have been widely analyzed in reinforcement learning and control literature: Cart-Pole Balancing (Stephenson, 1908; Donaldson, 1960; Widrow, 1964; Michie & Chambers, 1968), Cart-Pole Swing Up (Kimura & Kobayashi, 1999; Doya, 2000), Mountain Car (Moore, 1990), Acrobot Swing Up (DeJong & Spong, 1994; Murray & Hauser, 1991; Doya, 2000), and Double Inverted Pendulum Balancing (Furuta et al., 1978). These relatively low-dimensional tasks provide quick evaluations and comparisons of RL algorithms.
1604.06778#6
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
7
The implemented tasks conform to the standard interface of a finite-horizon discounted Markov decision process (MDP), defined by the tuple (S, A, P, r, ρ0, γ, T), where S is a (possibly infinite) set of states, A is a set of actions, P : S × A × S → R≥0 is the transition probability distribution, r : S × A → R is the reward function, ρ0 : S → R≥0 is the initial state distribution, γ ∈ (0, 1] is the discount factor, and T is the horizon. For partially observable tasks, which conform to the interface of a partially observable Markov decision process (POMDP), two more components are required, namely Ω, a set of observations, and O : S × Ω → R≥0, the observation probability distribution.
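The following is a minimal, illustrative rendering of the MDP tuple described above as a plain Python container, together with the discounted return it induces; the class and function names are ours and do not correspond to the benchmark's actual code.

```python
# An illustrative container for the finite-horizon discounted MDP tuple (S, A, P, r, rho_0, gamma, T).
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class FiniteHorizonMDP:
    sample_initial_state: Callable[[], object]              # draws s_0 ~ rho_0
    sample_next_state: Callable[[object, object], object]   # draws s' ~ P(. | s, a)
    reward: Callable[[object, object], float]               # r(s, a)
    discount: float                                          # gamma in (0, 1]
    horizon: int                                             # T

def discounted_return(rewards: Sequence[float], gamma: float) -> float:
    """Sum_t gamma^t * r_t for one trajectory of rewards."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Example: a constant reward of 1 over a horizon of 3 with gamma = 0.9
print(discounted_return([1.0, 1.0, 1.0], 0.9))  # 1 + 0.9 + 0.81 = 2.71
```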
1604.06778#7
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
9
In this category, we implement six locomotion tasks of varying dynamics and difficulty: Swimmer (Purcell, 1977; Coulom, 2002; Levine & Koltun, 2013; Schulman et al., 2015a), Hopper (Murthy & Raibert, 1984; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Walker (Raibert & Hodgins, 1991; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Half-Cheetah (Wawrzyński, 2007; Heess et al., 2015b), Ant (Schulman et al., 2015b), Simple Humanoid (Tassa et al., 2012; Schulman et al., 2015b), and Full Humanoid (Tassa et al., 2012). The goal for all the tasks is to move forward as quickly as possible. These tasks are more challenging than the basic tasks due to high degrees of freedom. In addition, a great amount of exploration is needed to learn to move forward without getting stuck at local optima. Since we penalize for excessive controls as well as falling over, during the initial stage of learning, when the robot is
1604.06778#9
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
11
For deterministic policies, we use the notation µθ : S → A to denote the policy instead. The objective for it has the same form as above, except that now we have at = µ(st). # 3.3. Partially Observable Tasks # 3. Tasks The tasks in the presented benchmark can be divided into four categories: basic tasks, locomotion tasks, partially observable tasks, and hierarchical tasks. We briefly describe them in this section. More detailed specifications are given In real-life situations, agents are often not endowed with perfect state information. This can be due to sensor noise, sensor occlusions, or even sensor limitations that result in partial observations. To evaluate algorithms in more realistic settings, we implement three variations of partially ob-
1604.06778#11
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
13
Figure 2. Illustration of hierarchical tasks: (a) Locomotion + Food Collection; and (b) Locomotion + Maze. Figure 1. Illustration of locomotion tasks: (a) Swimmer; (b) Hopper; (c) Walker; (d) Half-Cheetah; (e) Ant; (f) Simple Humanoid; and (g) Full Humanoid. servable tasks for each of the five basic tasks described in Section 3.1, leading to a total of 15 additional tasks. These variations are described below. Limited Sensors: For this variation, we restrict the observations to only provide positional information (including joint angles), excluding velocities. An agent now has to learn to infer velocity information in order to recover the full state. Similar tasks have been explored in Gomez & Miikkulainen (1998); Schäfer & Udluft (2005); Heess et al. (2015a); Wierstra et al. (2007). Locomotion + Food Collection: For this task, the agent needs to learn to control either the swimmer or the ant robot to collect food and avoid bombs in a finite region. The agent receives range sensor readings about nearby food and bomb units. It is given a positive reward when it reaches a food unit, or a negative reward when it reaches a bomb.
1604.06778#13
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
14
Locomotion + Maze: For this task, the agent needs to learn to control either the swimmer or the ant robot to reach a goal position in a fixed maze. The agent receives range sensor readings about nearby obstacles as well as its goal (when visible). A positive reward is given only when the robot reaches the goal region. # 4. Algorithms Noisy Observations and Delayed Actions: In this case, sensor noise is simulated through the addition of Gaussian noise to the observations. We also introduce a time delay between taking an action and the action being in effect, accounting for physical latencies (Hester & Stone, 2013). Agents now need to learn to integrate both past observations and past actions to infer the current state. Similar tasks have been proposed in Bakker (2001). In this section, we briefly summarize the algorithms implemented in our benchmark, and note any modifications made to apply them to general parametrized policies. We implement a range of gradient-based policy search methods, as well as two gradient-free methods for comparison with the gradient-based approaches. # 4.1. Batch Algorithms
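The sketch below illustrates one way the two partially observable variations described above (Gaussian observation noise and a fixed action delay) could be layered on top of an environment; the wrapper class and the env interface it assumes (reset/step returning observation, reward, done) are hypothetical, not rllab's actual API.

```python
# Illustrative wrapper adding observation noise and an action delay to an assumed env interface.
from collections import deque
import numpy as np

class NoisyDelayedEnv:
    def __init__(self, env, noise_std=0.1, action_delay=2):
        self.env = env
        self.noise_std = noise_std
        # Buffer of pending actions; zero actions are applied until real ones take effect.
        self.pending = deque([None] * action_delay, maxlen=action_delay + 1)

    def _noisy(self, obs):
        return obs + np.random.normal(0.0, self.noise_std, size=np.shape(obs))

    def reset(self):
        return self._noisy(self.env.reset())

    def step(self, action):
        self.pending.append(action)
        delayed = self.pending.popleft()          # action chosen `action_delay` steps ago
        if delayed is None:                       # nothing in effect yet at the start of an episode
            delayed = np.zeros_like(action)
        obs, reward, done = self.env.step(delayed)
        return self._noisy(obs), reward, done
```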
1604.06778#14
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
15
# 4.1. Batch Algorithms System Identification: For this category, the underlying physical model parameters are varied across different episodes (Szita et al., 2003). The agents must learn to generalize across different models, as well as to infer the model parameters from its observation and action history. # 3.4. Hierarchical Tasks Most of the implemented algorithms are batch algorithms. At each iteration, N trajectories $\{\tau_i\}_{i=1}^{N}$ are generated, where $\tau_i = \{(s_t^i, a_t^i, r_t^i)\}_{t=0}^{T}$ contains data collected along the ith trajectory. For on-policy gradient-based methods, all the trajectories are sampled under the current policy. For gradient-free methods, they are sampled under perturbed versions of the current policy. Many real-world tasks exhibit hierarchical structure, where higher level decisions can reuse lower level skills (Parr & Russell, 1998; Sutton et al., 1999; Dietterich, 2000). For instance, robots can reuse locomotion skills when exploring the environment. We propose several tasks where both low-level motor controls and high-level decisions are needed. These two components each operate on a different time scale and call for a natural hierarchy in order to efficiently learn the task.
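A schematic version of the batch-collection step described above is sketched below: at each iteration, N trajectories are rolled out under the current policy (or a perturbed copy of it, for gradient-free methods). The env and policy interfaces are assumed for illustration and are not rllab's actual classes.

```python
# Schematic batch trajectory collection; `env` and `policy` are assumed interfaces.
def collect_trajectories(env, policy, n_trajectories, horizon):
    trajectories = []
    for _ in range(n_trajectories):
        states, actions, rewards = [], [], []
        s = env.reset()
        for _ in range(horizon):
            a = policy.sample_action(s)       # a_t ~ pi(. | s_t; theta)
            s_next, r, done = env.step(a)
            states.append(s)
            actions.append(a)
            rewards.append(r)
            s = s_next
            if done:
                break
        trajectories.append({"states": states, "actions": actions, "rewards": rewards})
    return trajectories
```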
1604.06778#15
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
16
REINFORCE (Williams, 1992): This algorithm estimates the gradient of expected return ∇θη(πθ) using the likelihood ratio trick: $$\nabla_\theta \eta(\pi_\theta) \approx \frac{1}{NT} \sum_{i=1}^{N} \sum_{t=0}^{T} \nabla_\theta \log \pi(a_t^i \mid s_t^i; \theta)\,(R_t^i - b_t^i),$$ where $R_t^i = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}^i$, and $b_t^i$ is a baseline that only depends on the state $s_t^i$ to reduce variance. Hereafter, an ascent step is taken in the direction of the estimated gradient. This process continues until θk converges.
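The estimator reconstructed above combines per-step score functions with baselined returns; the NumPy sketch below shows only that combination, assuming the log-probability gradients, returns-to-go, and baselines have already been computed elsewhere.

```python
# NumPy sketch of the likelihood-ratio (REINFORCE) gradient estimate.
import numpy as np

def reinforce_gradient(grad_log_pi, returns, baselines):
    """
    grad_log_pi: (N, T, dim_theta) array of grad_theta log pi(a_t^i | s_t^i; theta)
    returns:     (N, T) array of discounted returns-to-go R_t^i
    baselines:   (N, T) array of state-dependent baselines b_t^i
    """
    advantages = returns - baselines                  # (N, T)
    weighted = grad_log_pi * advantages[..., None]    # broadcast over the parameter dimension
    n, t = returns.shape
    return weighted.sum(axis=(0, 1)) / (n * t)        # (1 / NT) * double sum over i and t
```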
1604.06778#16
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
17
cent step is taken in the direction of the estimated gradient. This process continues until θk converges. Truncated Natural Policy Gradient (TNPG) (Kakade, 2002; Peters et al., 2003; Bagnell & Schneider, 2003; Schulman et al., 2015a): Natural Policy Gradient improves upon REINFORCE by computing an ascent direction that approximately ensures a small change in the policy distribution. This direction is derived to be $I(\theta)^{-1}\nabla_\theta \eta(\pi_\theta)$, where $I(\theta)$ is the Fisher information matrix (FIM). We use the step size suggested by Peters & Schaal (2008): $\sqrt{\delta_{KL}\,\big(\nabla_\theta \eta(\pi_\theta)^T I(\theta)^{-1} \nabla_\theta \eta(\pi_\theta)\big)^{-1}}$. Finally, we re- Here $\delta_{KL} > 0$ controls the step size of the policy, and $\delta_i(\nu) = r_i + \nu^T\big(\phi(s_i') - \phi(s_i)\big)$ is the sample Bellman error. We then solve for the new policy parameters: $$\theta_{k+1} = \arg\max_\theta \frac{1}{M} \sum_{i=1}^{M} e^{\delta_i(\nu^*)/\eta^*} \log \pi(a_i \mid s_i; \theta).$$
1604.06778#17
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
18
$$\theta_{k+1} = \arg\max_\theta \frac{1}{M} \sum_{i=1}^{M} e^{\delta_i(\nu^*)/\eta^*} \log \pi(a_i \mid s_i; \theta).$$ Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a): This algorithm allows more precise control on the expected policy improvement than TNPG through the introduction of a surrogate loss. At each iteration, we solve the following constrained optimization problem (replacing expectations with samples): For neural network policies with tens of thousands of parameters or more, generic Natural Policy Gradient incurs prohibitive computation cost by forming and inverting the empirical FIM. Instead, we study Truncated Natural Policy Gradient (TNPG) in this paper, which computes the natural gradient direction without explicitly forming the matrix inverse, using a conjugate gradient algorithm that only requires computing I(θ)v for an arbitrary vector v. TNPG makes it practical to apply natural gradient in policy search settings with high-dimensional parameters, and we refer the reader to Schulman et al. (2015a) for more details.
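The conjugate-gradient idea mentioned above, approximating I(θ)^{-1}g using only Fisher-vector products I(θ)v, can be sketched generically as follows; this is textbook conjugate gradient, not the rllab implementation, and the fisher_vector_product callable is an assumed input.

```python
# Generic conjugate gradient for solving I(theta) x = g using only matrix-vector products.
import numpy as np

def conjugate_gradient(fisher_vector_product, g, iters=10, tol=1e-10):
    x = np.zeros_like(g)            # running estimate of I(theta)^{-1} g
    r = g.copy()                    # residual g - I(theta) x
    p = g.copy()                    # search direction
    r_dot = r.dot(r)
    for _ in range(iters):
        Ap = fisher_vector_product(p)         # I(theta) p, never the full matrix
        alpha = r_dot / (p.dot(Ap) + 1e-8)
        x += alpha * p
        r -= alpha * Ap
        new_r_dot = r.dot(r)
        if new_r_dot < tol:
            break
        p = r + (new_r_dot / r_dot) * p
        r_dot = new_r_dot
    return x                        # approximate natural gradient direction
```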
1604.06778#18
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
19
Reward-Weighted Regression (RWR) (Peters & Schaal, 2007; Kober & Peters, 2009): This algorithm formulates the policy optimization as an Expectation-Maximization problem to avoid the need to manually choose a learning rate, and the method is guaranteed to converge to a locally optimal solution. At each iteration, this algorithm optimizes a lower bound of the log-expected return: $\theta = \arg\max_{\theta'} \mathcal{L}(\theta')$, where $$\mathcal{L}(\theta) = \frac{1}{NT} \sum_{i=1}^{N} \sum_{t=0}^{T} \log \pi(a_t^i \mid s_t^i; \theta)\,\rho(R_t^i - b_t^i).$$ $$\text{maximize}_\theta \; \mathbb{E}_{s \sim \rho_{\theta_k},\, a \sim \pi_{\theta_k}}\!\left[\frac{\pi_\theta(a \mid s)}{\pi_{\theta_k}(a \mid s)}\, A_{\theta_k}(s, a)\right] \quad \text{s.t.} \quad \mathbb{E}_{s \sim \rho_{\theta_k}}\!\left[D_{KL}\big(\pi_{\theta_k}(\cdot \mid s)\,\|\,\pi_\theta(\cdot \mid s)\big)\right] \le \delta_{KL},$$ where ρθ = ρπθ denotes the discounted state-visitation frequencies induced by πθ, Aθk(s, a), known as the advantage function, is estimated by the empirical return minus the baseline, and δKL is a step size parameter which controls how much the policy is allowed to change per iteration. We follow the procedure described in the original paper for solving the optimization, which results in the same descent direction as TNPG with an extra line search in the objective and KL constraint.
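As a small illustration of the RWR lower bound reconstructed above, the sketch below evaluates the weighted log-likelihood objective with one simple nonnegative transform, ρ(x) = x − min(x), consistent with the ρ(R) = R − Rmin choice mentioned below; the array inputs are assumed to be produced elsewhere by the policy and return computation.

```python
# Illustrative evaluation of the RWR weighted log-likelihood objective.
import numpy as np

def rwr_objective(log_pi, returns, baselines):
    """
    log_pi:    (N, T) log pi(a_t^i | s_t^i; theta)
    returns:   (N, T) discounted returns R_t^i
    baselines: (N, T) baselines b_t^i
    """
    adv = returns - baselines
    weights = adv - adv.min()            # rho(.) shifted to be nonnegative over the batch
    n, t = returns.shape
    return (weights * log_pi).sum() / (n * t)
```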
1604.06778#19
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
20
Cross Entropy Method (CEM) (Rubinstein, 1999; Szita & Lőrincz, 2006): Unlike the previously mentioned methods, which perform exploration through stochastic actions, CEM performs exploration directly in the policy parameter space. At each iteration, we produce N perturbations of the policy parameter: θi ∼ N(µk, Σk), and perform a rollout for each sampled parameter. Then, we compute the new mean and diagonal covariance using the parameters that correspond to the top q-quantile returns. Here, ρ : R → R≥0 is a function that transforms raw returns to nonnegative values. Following Deisenroth et al. (2013), we choose ρ to be ρ(R) = R − Rmin, where Rmin is the minimum return among all trajectories collected in the current iteration.
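A compact sketch of the CEM loop just described, assuming a diagonal Gaussian over policy parameters and a placeholder `evaluate` function standing in for the average return of a rollout; the population size, elite fraction, and noise floor are illustrative choices, not values taken from the benchmark.

```python
import numpy as np

def cem(evaluate, dim, iters=20, pop_size=64, elite_frac=0.2, seed=0):
    """Cross Entropy Method over policy parameters: sample perturbations from
    a diagonal Gaussian, keep the top quantile by return, and refit the mean
    and diagonal covariance to the elites."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    std = np.ones(dim)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(iters):
        thetas = mean + std * rng.normal(size=(pop_size, dim))
        returns = np.array([evaluate(theta) for theta in thetas])
        elites = thetas[np.argsort(returns)[-n_elite:]]
        mean = elites.mean(axis=0)
        std = elites.std(axis=0) + 1e-3   # small noise floor to avoid collapse
    return mean

if __name__ == "__main__":
    target = np.linspace(-1.0, 1.0, 10)
    # Stand-in for "average return of a rollout with parameters theta".
    best = cem(lambda th: -np.sum((th - target) ** 2), dim=10)
    print(np.round(best - target, 2))
```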
1604.06778#20
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
21
Relative Entropy Policy Search (REPS) (Peters et al., 2010): This algorithm limits the loss of information per iteration and aims to ensure a smooth learning progress (Deisenroth et al., 2013). At each iteration, we collect all trajectories into a dataset D = {(s_i, a_i, r_i, s'_i)}_{i=1}^M, where M is the total number of samples. Then, we first solve for the dual parameters [η*, ν*] = arg min_{η', ν'} g(η', ν') s.t. η' > 0, where g(η, ν) = η δ_KL + η log( (1/M) Σ_{i=1}^M e^{δ_i(ν)/η} ). Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen & Ostermeier, 2001): Similar to CEM, CMA-ES is a gradient-free evolutionary approach for optimizing nonconvex objective functions. In our case, this objective function equals the average sampled return. In contrast to CEM, CMA-ES estimates the covariance matrix of a multivariate normal distribution through incremental adaptation along evolution paths, which contain information about the correlation between consecutive updates.
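A minimal sketch of solving the REPS dual above with an off-the-shelf optimizer; it assumes the standard REPS sample Bellman error δ_i(ν) = r_i + ν⊤(φ(s'_i) − φ(s_i)) (not spelled out in this excerpt), a log-parameterization to keep η > 0, and synthetic features for the usage check.

```python
import numpy as np
from scipy.optimize import minimize

def reps_dual_solve(rewards, feats, next_feats, delta_kl=0.1):
    """Minimize g(eta, nu) = eta*delta_KL + eta*log((1/M) sum_i exp(delta_i(nu)/eta))
    over eta > 0 and nu, with delta_i(nu) = r_i + nu^T (phi(s'_i) - phi(s_i))."""
    M, d = feats.shape
    diff = next_feats - feats

    def dual(x):
        log_eta, nu = x[0], x[1:]
        eta = np.exp(log_eta)              # enforce eta > 0 via log-parameterization
        delta = rewards + diff @ nu
        z = delta / eta
        # log-sum-exp of z with the 1/M average, computed stably
        lse = np.max(z) + np.log(np.mean(np.exp(z - np.max(z))))
        return eta * delta_kl + eta * lse

    res = minimize(dual, np.zeros(d + 1), method="L-BFGS-B")
    return np.exp(res.x[0]), res.x[1:]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    feats = rng.normal(size=(500, 3))
    next_feats = feats + 0.1 * rng.normal(size=(500, 3))
    rewards = rng.normal(size=500)
    eta, nu = reps_dual_solve(rewards, feats, next_feats)
    print(round(eta, 3), np.round(nu, 3))
```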
1604.06778#21
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
22
# 4.2. Online Algorithms Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015): Compared to batch algorithms, the DDPG algorithm continuously improves the policy as it explores the environment. It applies gradient descent to the policy with minibatch data sampled from a replay pool, where the gradient is computed via ∇_θ J ≈ (1/B) Σ_{i=1}^B ∇_a Q_φ(s_i, a)|_{a=μ_θ(s_i)} ∇_θ μ_θ(s_i), where B is the batch size. The critic Q is trained via gradient descent on the squared (ℓ²) loss of the Bellman error, L = (1/B) Σ_{i=1}^B (y_i − Q_φ(s_i, a_i))², where y_i = r_i + γ Q'_{φ'}(s'_i, μ'_{θ'}(s'_i)). To improve stability of the algorithm, we use target networks for both the critic and the policy when forming the regression target y_i. We refer the reader to Lillicrap et al. (2015) for a more detailed description of the algorithm. # 4.3. Recurrent Variants
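The following NumPy sketch illustrates the two DDPG ingredients described above, namely the Bellman regression target built from target networks and the squared critic loss, using tiny linear stand-ins for Q_φ and μ_θ; the soft target-update rule and the value of τ follow common DDPG practice rather than this excerpt, and all shapes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny linear stand-ins for the critic Q_phi(s, a) and deterministic policy mu_theta(s).
def q_value(phi, s, a):
    return np.concatenate([s, a], axis=-1) @ phi

def policy(theta, s):
    return np.tanh(s @ theta)

def ddpg_critic_targets(phi_targ, theta_targ, batch, gamma=0.99):
    """Bellman targets y_i = r_i + gamma * Q'_{phi'}(s'_i, mu'_{theta'}(s'_i)),
    computed with the *target* networks for stability, as described above."""
    s, a, r, s_next = batch
    a_next = policy(theta_targ, s_next)
    return r + gamma * q_value(phi_targ, s_next, a_next)

def soft_update(target, source, tau=0.001):
    """Slowly track the learned parameters: target <- tau*source + (1-tau)*target."""
    return tau * source + (1.0 - tau) * target

if __name__ == "__main__":
    s_dim, a_dim, B = 4, 2, 32
    phi, theta = rng.normal(size=(s_dim + a_dim,)), rng.normal(size=(s_dim, a_dim))
    phi_targ, theta_targ = phi.copy(), theta.copy()
    batch = (rng.normal(size=(B, s_dim)), rng.normal(size=(B, a_dim)),
             rng.normal(size=B), rng.normal(size=(B, s_dim)))
    y = ddpg_critic_targets(phi_targ, theta_targ, batch)
    # Critic regression loss L = (1/B) * sum_i (y_i - Q_phi(s_i, a_i))^2
    loss = np.mean((y - q_value(phi, batch[0], batch[1])) ** 2)
    phi_targ = soft_update(phi_targ, phi)
    print(y.shape, round(float(loss), 3))
```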
1604.06778#22
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
23
# 4.3. Recurrent Variants Policy Representation: For basic, locomotion, and hierarchical tasks and for batch algorithms, we use a feed-forward neural network policy with 3 hidden layers, consisting of 100, 50, and 25 hidden units with tanh nonlinearity at the first two hidden layers, which map each state to the mean of a Gaussian distribution. The log-standard deviation is parameterized by a global vector independent of the state, as done in Schulman et al. (2015a). For all partially observable tasks, we use a recurrent neural network with a single hidden layer consisting of 32 LSTM hidden units (Hochreiter & Schmidhuber, 1997). For the DDPG algorithm, which trains a deterministic policy, we follow Lillicrap et al. (2015). For both the policy and the Q function, we use the same architecture of a feed-forward neural network with 2 hidden layers, consisting of 400 and 300 hidden units with relu activations.
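As a rough sketch of the feed-forward Gaussian policy described above (100-50-25 hidden units, tanh at the first two hidden layers, and a state-independent log-standard-deviation vector); the initialization scale and the sampling code are assumptions, and the benchmark trains these parameters rather than drawing them at random.

```python
import numpy as np

rng = np.random.default_rng(4)

def init_mlp(sizes):
    """Weights for an MLP, e.g. sizes = [obs_dim, 100, 50, 25, act_dim]."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def gaussian_mlp_policy(params, log_std, obs):
    """Map a state to the mean of a Gaussian action distribution; the log-std
    is a global vector, independent of the state."""
    h = obs
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < 2:                   # tanh at the first two hidden layers, per the text
            h = np.tanh(h)
    mean = h
    action = mean + np.exp(log_std) * rng.normal(size=mean.shape)
    return action, mean

if __name__ == "__main__":
    obs_dim, act_dim = 17, 6
    params = init_mlp([obs_dim, 100, 50, 25, act_dim])
    log_std = np.zeros(act_dim)      # learned jointly with the weights in practice
    a, mu = gaussian_mlp_policy(params, log_std, rng.normal(size=obs_dim))
    print(a.shape, mu.shape)
```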
1604.06778#23
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
24
We implement direct applications of the aforementioned batch-based algorithms to recurrent policies. The only modification required is to replace π(a_t^i | s_t^i) by π(a_t^i | o_{1:t}^i, a_{1:t−1}^i), where o_{1:t} and a_{1:t−1} are the histories of past and current observations and past actions. Recurrent versions of reinforcement learning algorithms have been studied in many existing works, such as Bakker (2001), Schäfer & Udluft (2005), Wierstra et al. (2007), and Heess et al. (2015a). # 5. Experiment Setup In this section, we elaborate on the experimental setup used to generate the results. Baseline: For all gradient-based algorithms except REPS, we can subtract a baseline from the empirical return to reduce the variance of the optimization. We use a linear function as the baseline with a time-varying feature vector. # 6. Results and Discussion The main evaluation results are presented in Table 1. The tasks on which the grid search is performed are marked with (*). In each entry, the pair of numbers shows the mean and standard deviation of the normalized cumulative return using the best possible hyperparameters.
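A small sketch of fitting the linear baseline mentioned above by least squares; the particular time-varying feature vector (observations, their squares, and polynomial terms in t) and the regularization constant are assumptions, since the excerpt does not specify them.

```python
import numpy as np

def baseline_features(observations, T):
    """Time-varying features for a linear baseline; this particular choice is an
    assumption, not a detail given in the text."""
    t = np.arange(T)[:, None] / 100.0
    return np.concatenate([observations, observations ** 2, t, t ** 2, t ** 3,
                           np.ones((T, 1))], axis=1)

def fit_linear_baseline(paths, reg=1e-5):
    """Least-squares fit of a linear function of the features to empirical returns."""
    X = np.concatenate([baseline_features(p["obs"], len(p["returns"])) for p in paths])
    y = np.concatenate([p["returns"] for p in paths])
    coeffs = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ y)
    return coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    paths = [{"obs": rng.normal(size=(50, 4)),
              "returns": rng.normal(size=50).cumsum()[::-1].copy()} for _ in range(8)]
    w = fit_linear_baseline(paths)
    print(w.shape)   # (4 + 4 + 3 + 1,) = (12,)
```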
1604.06778#24
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
25
In this section, we elaborate on the experimental setup used to generate the results. Performance Metrics: For each report unit (a particular algorithm running on a particular task), we define its performance as (Σ_{i=1}^I Σ_{n=1}^{N_i} R_{in}) / (Σ_{i=1}^I N_i), where I is the number of training iterations, N_i is the number of trajectories collected in the ith iteration, and R_{in} is the undiscounted return for the nth trajectory of the ith iteration. Hyperparameter Tuning: For the DDPG algorithm, we used the hyperparameters reported in Lillicrap et al. (2015). For the other algorithms, we follow the approach in (Mnih et al., 2015), and we select two tasks in each category, on which a grid search of hyperparameters is performed. Each choice of hyperparameters is executed under five random seeds. The criterion for the best hyperparameters is defined as mean(returns) − std(returns). This metric selects against large fluctuations of performance due to overly large step sizes.
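The performance metric and the hyperparameter-selection criterion just defined are straightforward to compute; the sketch below assumes per-iteration lists of undiscounted trajectory returns and made-up per-seed numbers for the usage check.

```python
import numpy as np

def performance(returns_per_iteration):
    """Average undiscounted return over all trajectories of all training
    iterations: (sum_i sum_n R_in) / (sum_i N_i)."""
    all_returns = np.concatenate(returns_per_iteration)
    return all_returns.mean()

def selection_score(per_seed_performances):
    """Hyperparameter-selection criterion from the text: mean(returns) - std(returns)
    across random seeds, penalizing configurations with large fluctuations."""
    per_seed = np.asarray(per_seed_performances)
    return per_seed.mean() - per_seed.std()

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # returns_per_iteration[i] holds the N_i undiscounted returns of iteration i
    returns_per_iteration = [rng.normal(loc=it, size=rng.integers(5, 15))
                             for it in range(100)]
    print(round(performance(returns_per_iteration), 2))
    print(round(selection_score([120.0, 95.0, 110.0, 80.0, 130.0]), 2))
```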
1604.06778#25
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
26
REINFORCE: Despite its simplicity, REINFORCE is an effective algorithm for optimizing deep neural network policies in most basic and locomotion tasks. Even for high-DOF tasks like Ant, REINFORCE can achieve competitive results. However, we observe that REINFORCE sometimes suffers from premature convergence to local optima, as noted by Peters & Schaal (2008), which explains the performance gaps between REINFORCE and TNPG on tasks such as Walker (Figure 3(a)). By visualizing the final policies, we can see that REINFORCE results in policies that tend to jump forward and fall over to maximize short-term return instead of acquiring a stable walking gait to maximize long-term return. In Figure 3(b), we can observe that even with a small learning rate, steps taken by REINFORCE can sometimes result in large changes to the policy distribution, which may explain the fast convergence to local optima. For the other tasks, we try both of the best hyperparameters found in the same category, and report the better performance of the two. This gives us insights into both the maximum possible performance when extensive hyperparameter tuning is performed, and the robustness of the best hyperparameters across different tasks.
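For reference, a minimal sketch of the likelihood-ratio (REINFORCE) gradient estimator being discussed, written for a linear-Gaussian policy with a simple mean-return baseline; the policy class, baseline choice, and undiscounted return-to-go are illustrative simplifications rather than the benchmark's exact estimator.

```python
import numpy as np

def reinforce_gradient(W, log_std, paths):
    """Likelihood-ratio gradient estimate for a linear-Gaussian policy
    a ~ N(W s, diag(exp(log_std))^2): average over trajectories of
    grad log pi(a_t|s_t) weighted by the return-to-go minus a baseline."""
    grad = np.zeros_like(W)
    for p in paths:
        s, a, r = p["obs"], p["act"], p["rew"]
        returns = np.cumsum(r[::-1])[::-1]          # undiscounted return-to-go
        adv = returns - returns.mean()              # simple constant baseline
        mean = s @ W.T
        # d/dW log N(a; W s, sigma^2) = sigma^{-2} (a - W s) s^T
        grad += ((a - mean) / np.exp(2 * log_std) * adv[:, None]).T @ s
    return grad / len(paths)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    W, log_std = rng.normal(size=(2, 4)), np.zeros(2)
    paths = [{"obs": rng.normal(size=(30, 4)), "act": rng.normal(size=(30, 2)),
              "rew": rng.normal(size=30)} for _ in range(16)]
    g = reinforce_gradient(W, log_std, paths)
    print(g.shape)   # (2, 4)
```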
1604.06778#26
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
28
[Table 1 residue: rotated-text PDF extraction of the caption. Recoverable fragments: implemented algorithms in terms of average return over all training iterations for five different random seeds (same across all algorithms); each task, as well as all algorithms that have performances that are not statistically significantly different (Welch's t-test with p < 0.05); LS stands for limited sensors, NO for noisy observations. No numeric entries appear in this chunk.]
1604.06778#28
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
29
[Table 1 residue: rotated-text PDF extraction. Recoverable caption fragments: in the tasks column, the partially observable variants of the tasks are annotated as follows: LS stands for limited sensors, NO for noisy observations ... system identifications; the notation N/A denotes that an algorithm has failed on the task at hand, e.g., CMA-ES leading to out-of-memory errors. The remainder of the chunk is the start of the DDPG result column; the per-task mean ± std returns are not reliably recoverable from this fragment.]
1604.06778#29
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
30
[Table 1 residue: rotated-text PDF extraction containing the continuation of the DDPG result column and the start of the CMA-ES column (including an N/A entry); the per-task mean ± std returns are not reliably recoverable from this fragment.]
1604.06778#30
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
31
[Table 1 residue: rotated-text PDF extraction containing numeric entries followed by the labeled CEM column and the start of the TRPO column; the per-task mean ± std returns are not reliably recoverable from this fragment.]
1604.06778#31
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
32
[Table 1 residue: rotated-text PDF extraction containing part of the TRPO result column and the labels and initial entries of the REPS and RWR columns; the per-task mean ± std returns are not reliably recoverable from this fragment.]
1604.06778#32
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
33
[Table 1 residue: rotated-text PDF extraction consisting only of numeric mean ± std entries with no column labels; not reliably recoverable from this fragment.]
1604.06778#33
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
34
[Table 1 residue: rotated-text PDF extraction containing part of the TNPG result column and adjacent numeric entries; the per-task mean ± std returns are not reliably recoverable from this fragment.]
1604.06778#34
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
35
[Table 1 residue: rotated-text PDF extraction containing parts of the REINFORCE and Random result columns; the per-task mean ± std returns are not reliably recoverable from this fragment.]
1604.06778#35
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
36
[Table 1 residue: rotated-text PDF extraction containing the continuation of the Random result column; the per-task mean ± std returns are not reliably recoverable from this fragment.]
1604.06778#36
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
37
[Table 1 residue: rotated-text PDF extraction consisting of numeric mean ± std entries, N/A markers, and a run of 0.0 ± 0.0 values; not reliably recoverable from this fragment.]
1604.06778#37
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
39
# e h t # f o e c n a m r o f r e P . 1 e l b a T n o m h t i r o g l a g n i m r o f r e p - t s e b # e h t # f o a . e c a f d l o b # n i d e t h g i l h g i h # r o f # I S d n a , s n o i t c a d e y a l e d . k s a t d i o n a m u H # l l u F k s a T g n i c n a l a B e l o P - t r a C m u l u d n e P d e t r e v n I r a C n i a t n u o M t o b o r c A m u l u d n e P d e t r e v n I # e l b u o D r e m m w S # i # r e p p o H r e k l a # W D 2 h a t e e h C f l a H t n A d i o n a m u H e l p m i S d i o n a m u H # l l u F ) S L ( g n i c n a l a B e l o P - t r a C ) S L (
1604.06778#39
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
40
[Table 1 residue: rotated-text PDF extraction of the remaining task rows: Full Humanoid, Cart-Pole Balancing (LS), Inverted Pendulum (LS), Mountain Car (LS), Acrobot (LS), Cart-Pole Balancing (NO), Inverted Pendulum (NO), Mountain Car (NO), Acrobot (NO), Cart-Pole Balancing (SI), Inverted Pendulum (SI), Mountain Car (SI), Acrobot (SI), Swimmer + Gathering, Ant + Gathering, Swimmer + Maze, Ant + Maze; plus the fragment "hierarchical tasks".]
1604.06778#40
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
41
[Table 1 footnote residue (rotated-text extraction): fragments "a Except for the hierarchical tasks", "Swimmer + Maze", "Ant + Maze".] Figure 3. Performance as a function of the number of iterations; the shaded area depicts the mean ± the standard deviation over five different random seeds: (a) Performance comparison of all algorithms in terms of the average reward on the Walker task; (b) Comparison between REINFORCE, TNPG, and TRPO in terms of the mean KL-divergence on the Walker task; (c) Performance comparison of TNPG and TRPO on the Swimmer task; (d) Performance comparison of all algorithms in terms of the average reward on the Half-Cheetah task.
1604.06778#41
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
42
policy update by performing a line search in the natural gradient direction to ensure an improvement in the surrogate loss function. We observe that hyperparameter grid search tends to select conservative step sizes (δKL) for TNPG, which alleviates the issue of performance collapse caused by a large update to the policy. By contrast, TRPO can robustly enforce constraints with a larger δKL value and hence speeds up learning in some cases. For instance, grid search on the Swimmer task reveals that the best step size for TNPG is δKL = 0.05, whereas TRPO's best step size is larger: δKL = 0.1. As shown in Figure 3(c), this larger step size enables slightly faster learning.
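A minimal sketch of the backtracking line search idea described above: shrink the natural-gradient step until the surrogate improves and the KL constraint is satisfied; the shrink factor, the acceptance test, and the toy quadratic objectives are generic TRPO-style assumptions, not details from this excerpt.

```python
import numpy as np

def backtracking_line_search(surrogate, kl, theta, full_step, delta_kl,
                             max_backtracks=10, shrink=0.5):
    """Shrink the natural-gradient step until the surrogate loss improves and
    the KL constraint (<= delta_kl) is satisfied, as in the TRPO update."""
    initial = surrogate(theta)
    for k in range(max_backtracks):
        candidate = theta + (shrink ** k) * full_step
        if surrogate(candidate) > initial and kl(candidate) <= delta_kl:
            return candidate
    return theta  # no acceptable step found; keep the old parameters

if __name__ == "__main__":
    # Toy check with a quadratic "surrogate" and a quadratic "KL".
    theta0 = np.zeros(3)
    surrogate = lambda th: -np.sum((th - 1.0) ** 2)
    kl = lambda th: 0.5 * np.sum(th ** 2)
    theta1 = backtracking_line_search(surrogate, kl, theta0,
                                      full_step=np.ones(3), delta_kl=0.1)
    print(np.round(theta1, 3))
```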
1604.06778#42
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
43
certain basic tasks such as Cart-Pole Balancing and Mountain Car, suggesting that the dimension of the searched parameter space is not always the limiting factor of the method. However, the performance degrades quickly as the system dynamics become more complicated. We also observe that CEM outperforms CMA-ES, which is remarkable as CMA-ES estimates the full covariance matrix. For higher-dimensional policy parameterizations, the computational complexity and memory requirement for CMA-ES become noticeable. On tasks with high-dimensional observations, such as the Full Humanoid, the CMA-ES algorithm runs out of memory and fails to yield any results, denoted as N/A in Table 1. RWR: RWR is the only gradient-based algorithm we implemented that does not require any hyperparameter tuning. It can solve some basic tasks to a satisfactory degree, but fails to solve more challenging tasks such as locomotion. We observe empirically that RWR shows fast initial improvement followed by a significant slow-down, as shown in Figure 3(d).
1604.06778#43
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
44
REPS: Our main observation is that REPS is especially prone to early convergence to local optima in the case of continuous states and actions. Its final outcome is greatly affected by the performance of the initial policy, an observation that is consistent with the original work of Peters et al. (2010). This leads to bad performance on average, although under particular initial settings the algorithm can perform on par with others. Moreover, the tasks presented here do not assume the existence of a stationary distribution, which is assumed in Peters et al. (2010). In particular, for many of our tasks, transient behavior is of much greater interest than steady-state behavior, which agrees with previous observations by van Hoof et al. (2015).
1604.06778#44
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
45
Gradient-free methods: Surprisingly, even when training deep neural network policies with thousands of parameters, CEM achieves very good performance on certain basic tasks. DDPG: Compared to batch algorithms, we found that DDPG was able to converge significantly faster on certain tasks like Half-Cheetah due to its greater sample efficiency. However, it was less stable than batch algorithms, and the performance of the policy can degrade significantly during training. We also found it to be more susceptible to scaling of the reward. In our experiment for DDPG, we rescaled the reward of all tasks by a factor of 0.1, which seems to improve the stability. Partially Observable Tasks: We experimentally verify that recurrent policies can find better solutions than feed-forward policies in partially observable tasks, but recurrent policies are also more difficult to train. As shown in Table 1, derivative-free algorithms like CEM and CMA-ES work considerably worse with recurrent policies. We also note that the performance gap between REINFORCE and TNPG widens when they are applied to optimize recurrent policies, which can be explained by the fact that a small change in parameter space can result in a bigger change in policy distribution with recurrent policies than with feed-forward policies.
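The reward rescaling used for DDPG above can be applied with a tiny environment wrapper; the sketch below assumes the usual reset()/step() interface returning (observation, reward, done, info) and is not code from the benchmark.

```python
class ScaledRewardEnv:
    """Wrap an environment and rescale its rewards by a constant factor
    (0.1 in the DDPG experiments described above)."""
    def __init__(self, env, scale=0.1):
        self.env = env
        self.scale = scale

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, self.scale * reward, done, info


class _DummyEnv:
    """Stand-in environment used only to exercise the wrapper."""
    def reset(self):
        return 0.0

    def step(self, action):
        return 0.0, 10.0, False, {}


if __name__ == "__main__":
    env = ScaledRewardEnv(_DummyEnv())
    env.reset()
    print(env.step(0.0)[1])   # prints 1.0 instead of 10.0
```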
1604.06778#45
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
46
Hierarchical Tasks: We observe that all of our implemented algorithms achieve poor performance on the hierarchical tasks, even with extensive hyperparameter search and 500 iterations of training. It is an interesting direction to develop algorithms that can automatically discover and exploit the hierarchical structure in these tasks. # 7. Related Work In this section, we review existing benchmarks of continuous control tasks. The earliest efforts of evaluating reinforcement learning algorithms started in the form of individual control problems described in symbolic form. Some widely adopted tasks include the inverted pendulum (Stephenson, 1908; Donaldson, 1960; Widrow, 1964), mountain car (Moore, 1990), and Acrobot (DeJong & Spong, 1994). These problems are frequently incorporated into more comprehensive benchmarks.
1604.06778#46
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
47
Some reinforcement learning benchmarks contain low-dimensional continuous control tasks, such as the ones introduced above, including RLLib (Abeyruwan, 2013), MMLF (Metzen & Edgington, 2011), RL-Toolbox (Neumann, 2006), JRLF (Kochenderfer, 2006), Beliefbox (Dimitrakakis et al., 2007), Policy Gradient Toolbox (Peters, 2002), and ApproxRL (Busoniu, 2010). A series of RL competitions has also been held in recent years (Dutech et al., 2005; Dimitrakakis et al., 2014), again with relatively low-dimensional actions. In contrast, our benchmark contains a wider range of tasks with high-dimensional continuous state and action spaces. variety of challenging tasks. We implemented several reinforcement learning algorithms, and presented them in the context of general policy parameterizations. Results show that among the implemented algorithms, TNPG, TRPO, and DDPG are effective methods for training deep neural network policies. Still, the poor performance on the proposed hierarchical tasks calls for new algorithms to be developed. Implementing and evaluating existing and newly proposed algorithms will be our continued effort. By providing an open-source release of the benchmark, we encourage other researchers to evaluate their algorithms on the proposed tasks. # Acknowledgements
1604.06778#47
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
48
# Acknowledgements We thank Emo Todorov and Yuval Tassa for providing the MuJoCo simulator, and Sergey Levine, Aviv Tamar, Chelsea Finn, and the anonymous ICML reviewers for insightful comments. We also thank Shixiang Gu and Timothy Lillicrap for helping us diagnose the DDPG implementation. This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley Artificial Intelligence Research (BAIR) laboratory, and Berkeley Deep Drive (BDD). Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). # References
1604.06778#48
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
49
# References Previously, other benchmarks have been proposed for high-dimensional control tasks. Tdlearn (Dann et al., 2014) includes a 20-link pole balancing task, DotRL (Papis & Wawrzyński, 2013) includes a variable-DOF octopus arm and a 6-DOF planar cheetah model, PyBrain (Schaul et al., 2010) includes a 16-DOF humanoid robot with standing and jumping tasks, RoboCup Keepaway (Stone et al., 2005) is a multi-agent game which can have a flexible dimension of actions by varying the number of agents, and SkyAI (Yamaguchi & Ogasawara, 2010) includes a 17-DOF humanoid robot with crawling and turning tasks. Other libraries such as CL-Square (Riedmiller et al., 2012) and RLPark (Degris et al., 2013) provide interfaces to actual hardware, e.g., Bioloid and iRobot Create. In contrast to these aforementioned testbeds, our benchmark makes use of simulated environments to reduce computation time and to encourage experimental reproducibility. Furthermore, it provides a much larger collection of tasks of varying difficulty.
1604.06778#49
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
50
Abeyruwan, S. RLLib: Lightweight standard and on/off policy reinforcement learning library (C++). http://web.cs.miami.edu/home/saminda/rilib.html, 2013. Bagnell, J. A. and Schneider, J. Covariant policy search. In IJCAI, pp. 1019–1024, 2003. Bakker, B. Reinforcement learning with long short-term memory. In NIPS, pp. 1475–1482, 2001. Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The Arcade Learning Environment: An evaluation platform for general agents. J. Artif. Intell. Res., 47:253–279, 2013. Bellman, R. Dynamic Programming. Princeton University Press, 1957. Bertsekas, D. P. and Tsitsiklis, J. N. Neuro-dynamic programming: an overview. In CDC, pp. 560–564, 1995. Busoniu, L. ApproxRL: A Matlab toolbox for approximate RL and DP. http://busoniu.net/files/repository/readme-approxrl.html, 2010. Catto, E. Box2D: A 2D physics engine for games, 2011.
1604.06778#50
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
51
Catto, E. Box2D: A 2D physics engine for games, 2011. Coulom, R. Reinforcement learning using neural networks, with applications to motor control. PhD thesis, Institut National Polytechnique de Grenoble-INPG, 2002. Dann, C., Neumann, G., and Peters, J. Policy evaluation with temporal differences: A survey and comparison. J. Mach. Learn. Res., 15(1):809–883, 2014. Degris, T., Béchu, J., White, A., Modayil, J., Pilarski, P. M., and Denk, C. RLPark. http://rlpark.github.io, 2013. Deisenroth, M. P., Neumann, G., and Peters, J. A survey on policy search for robotics, foundations and trends in robotics. Found. Trends Robotics, 2(1-2):1–142, 2013. # 8. Conclusion In this work, a benchmark of continuous control problems for reinforcement learning is presented, covering a wide variety of challenging tasks.
1604.06778#51
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
52
Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T., and Tassa, Y. Learning continuous control policies by stochastic value gradients. In NIPS, pp. 2926–2934, 2015b. DeJong, G. and Spong, M. W. Swinging up the Acrobot: An example of intelligent control. In ACC, pp. 2158–2162, 1994. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248–255, 2009. Dietterich, T. G. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res., 13:227–303, 2000. Dimitrakakis, C., Tziortziotis, N., and Tossou, A. Beliefbox: A framework for statistical methods in sequential decision making. http://code.google.com/p/beliefbox/, 2007.
1604.06778#52
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
53
Hester, T. and Stone, P. The open-source TEXPLORE code release for reinforcement learning on robots. In RoboCup 2013: Robot World Cup XVII, pp. 536–543, 2013. Hinton, G., Deng, L., Yu, D., Mohamed, A.-R., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Dahl, T. S. G., and Kingsbury, B. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Process. Mag., 29(6):82–97, 2012. Hirsch, H.-G. and Pearce, D. The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions. In ASR2000 - Automatic Speech Recognition: Challenges for the new Millenium, ISCA Tutorial and Research Workshop (ITRW), 2000. Dimitrakakis, C., Li, G., and Tziortziotis, N. The reinforcement learning competition 2014. AI Magazine, 35(3):61–65, 2014. Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Comput., 9(8):1735–1780, 1997.
1604.06778#53
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
54
Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Comput., 9(8):1735–1780, 1997. Donaldson, P. E. K. Error decorrelation: a technique for matching a class of functions. In Proc. 3rd Intl. Conf. Medical Electronics, pp. 173–178, 1960. Doya, K. Reinforcement learning in continuous time and space. Neural Comput., 12(1):219–245, 2000. Kakade, S. M. A natural policy gradient. In NIPS, pp. 1531–1538, 2002. Kimura, H. and Kobayashi, S. Stochastic real-valued reinforcement learning to solve a nonlinear control problem. In IEEE SMC, pp. 510–515, 1999. Dutech, A., Edmunds, T., Kok, J., Lagoudakis, M., Littman, M., Riedmiller, M., Russell, B., Scherrer, B., Sutton, R., Timmer, S., et al. Reinforcement learning benchmarks and bake-offs II. Advances in Neural Information Processing Systems (NIPS), 17, 2005. Infinite horizon model predictive control for nonlinear periodic tasks. Manuscript under review, 4, 2011.
1604.06778#54
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
55
Infinite horizon model predictive control for nonlinear periodic tasks. Manuscript under review, 4, 2011. Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. The PASCAL visual object classes (VOC) challenge. Int. J. Comput. Vision, 88(2):303–338, 2010. Kober, J. and Peters, J. Policy search for motor primitives in robotics. In NIPS, pp. 849–856, 2009. Kochenderfer, M. JRLF: Java reinforcement learning framework. http://mykel.kochenderfer.com/jrlf, 2006. Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Technical report, 2009. Krizhevsky, A., Sutskever, I., and Hinton, G. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012. LeCun, Y., Cortes, C., and Burges, C. The MNIST database of handwritten digits, 1998.
1604.06778#55
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
56
LeCun, Y., Cortes, C., and Burges, C. The MNIST database of handwritten digits, 1998. Fei-Fei, L., Fergus, R., and Perona, P. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell., 28(4):594–611, 2006. Levine, S. and Koltun, V. Guided policy search. In ICML, pp. 1–9, 2013. Furuta, K., Okutani, T., and Sone, H. Computer control of a double inverted pendulum. Comput. Electr. Eng., 5(1):67–84, 1978. Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., and Pallett, D. S. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon Technical Report N, 93, 1993. Godfrey, J. J., Holliman, E. C., and McDaniel, J. SWITCHBOARD: Telephone speech corpus for research and development. In ICASSP, pp. 517–520, 1992.
1604.06778#56
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
57
Gomez, F. and Miikkulainen, R. 2-D pole balancing with recurrent evolutionary networks. In ICANN, pp. 425–430, 1998. Guo, X., Singh, S., Lee, H., Lewis, R. L., and Wang, X. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In NIPS, pp. 3338–3346, 2014. Hansen, N. and Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput., 9(2):159–195, 2001. Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to-end training of deep visuomotor policies. arXiv:1504.00702, 2015. Lillicrap, T., Hunt, J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
1604.06778#57
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
58
Martin, D., Fowlkes, C., Tal, D., and Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, pp. 416–423, 2001. Metzen, J. M. and Edgington, M. Maja machine learning framework. http://mloss.org/software/view/220/, 2011. Michie, D. and Chambers, R. A. BOXES: An experiment in adaptive control. Machine Intelligence, 2:137–152, 1968. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
1604.06778#58
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
59
Heess, N., Hunt, J., Lillicrap, T., and Silver, D. Memory-based control with recurrent neural networks. arXiv:1512.04455, 2015a. Moore, A. Efficient memory-based learning for robot control. Technical report, University of Cambridge, Computer Laboratory, 1990. Murray, R. M. and Hauser, J. A case study in approximate linearization: The Acrobot example. Technical report, UC Berkeley, EECS Department, 1991. Sutton, R. S., Precup, D., and Singh, S. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181–211, 1999. Murthy, S. S. and Raibert, M. H. 3D balance in legged locomotion: modeling and simulation for the one-legged case. ACM SIGGRAPH Computer Graphics, 18(1):27–27, 1984. Neumann, G. A reinforcement learning toolbox and RL benchmarks for the control of dynamical systems. Dynamical principles for neuroscience and intelligent biomimetic devices, pp. 113, 2006.
1604.06778#59
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
60
Papis, B. and Wawrzyński, P. dotRL: A platform for rapid reinforcement learning methods development and validation. In FedCSIS, pp. 129–136, 2013. Parr, R. and Russell, S. Reinforcement learning with hierarchies of machines. In NIPS, pp. 1043–1049, 1998. Szita, I. and Lőrincz, A. Learning Tetris using the noisy cross-entropy method. Neural Comput., 18(12):2936–2941, 2006. Szita, I., Takács, B., and Lőrincz, A. ε-MDPs: Learning in varying environments. J. Mach. Learn. Res., 3:145–174, 2003. Tassa, Y., Erez, T., and Todorov, E. Synthesis and stabilization of complex behaviors through online trajectory optimization. In IROS, pp. 4906–4913, 2012. Tesauro, G. Temporal difference learning and TD-Gammon. Commun. ACM, 38(3):58–68, 1995.
1604.06778#60
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
61
Tesauro, G. Temporal difference learning and TD-Gammon. Commun. ACM, 38(3):58–68, 1995. Todorov, E., Erez, T., and Tassa, Y. MuJoCo: A physics engine for model-based control. In IROS, pp. 5026–5033, 2012. Peters, J. Policy Gradient Toolbox. http://www.ausy.tu-darmstadt.de/Research/PolicyGradientToolbox, 2002. Peters, J. and Schaal, S. Reinforcement learning by reward-weighted regression for operational space control. In ICML, pp. 745–750, 2007. Peters, J. and Schaal, S. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008. Peters, J., Vijaykumar, S., and Schaal, S. Policy gradient methods for robot control. Technical report, 2003. Peters, J., Mülling, K., and Altün, Y. Relative entropy policy search. In AAAI, pp. 1607–1612, 2010. Purcell, E. M. Life at low Reynolds number. Am. J. Phys., 45(1):3–11, 1977.
1604.06778#61
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
62
Purcell, E. M. Life at low Reynolds number. Am. J. Phys., 45(1):3–11, 1977. van Hoof, H., Peters, J., and Neumann, G. Learning of non-parametric control policies with high-dimensional state features. In AISTATS, pp. 995–1003, 2015. Watter, M., Springenberg, J., Boedecker, J., and Riedmiller, M. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, pp. 2728–2736, 2015. Wawrzyński, P. Learning to control a 6-degree-of-freedom walking robot. In IEEE EUROCON, pp. 698–705, 2007. Widrow, B. Pattern recognition and adaptive control. IEEE Trans. Ind. Appl., 83(74):269–277, 1964. Wierstra, D., Foerster, A., Peters, J., and Schmidhuber, J. Solving deep memory POMDPs with recurrent policy gradients. In ICANN, pp. 697–706, 2007.
1604.06778#62
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
63
Raibert, M. H. and Hodgins, J. K. Animation of dynamic legged locomotion. In ACM SIGGRAPH Computer Graphics, volume 25, pp. 349–358, 1991. Williams, R. J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8:229–256, 1992. Riedmiller, M., Blum, M., and Lampe, T. CLS2: Closed loop simulation system. http://ml.informatik.uni-freiburg.de/research/clsquare, 2012. Yamaguchi, A. and Ogasawara, T. SkyAI: Highly modularized reinforcement learning library. In IEEE-RAS Humanoids, pp. 118–123, 2010. Rubinstein, R. The cross-entropy method for combinatorial and continuous optimization. Methodol. Comput. Appl. Probab., 1(2):127–190, 1999. Yu, D., Ju, Y.-C., Wang, Y.-Y., Zweig, G., and Acero, A. Automated directory assistance system - from theory to practice. In Interspeech, pp. 2709–2712, 2007.
1604.06778#63
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
64
Schäfer, A. M. and Udluft, S. Solving partially observable reinforcement learning problems with recurrent neural networks. In ECML Workshops, pp. 71–81, 2005. Schaul, T., Bayer, J., Wierstra, D., Sun, Y., Felder, M., Sehnke, F., Rückstieß, T., and Schmidhuber, J. PyBrain. J. Mach. Learn. Res., 11:743–746, 2010. Schulman, J., Levine, S., Abbeel, P., Jordan, M. I., and Moritz, P. Trust region policy optimization. In ICML, pp. 1889–1897, 2015a. Schulman, J., Moritz, P., Levine, S., Jordan, M. I., and Abbeel, P. High-dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015b. Stephenson, A. On induced stability. Philos. Mag., 15(86):233–236, 1908. Stone, P., Kuhlmann, G., Taylor, M. E., and Liu, Y. Keepaway soccer: From machine learning testbed to benchmark. In RoboCup 2005: Robot Soccer World Cup IX, pp. 93–105. Springer, 2005.
1604.06778#64
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
65
Sutton, R. S., Precup, D., and Singh, S. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181–211, 1999. # Supplementary Material # 1. Task Specifications Below we provide some specifications for the task observations, actions, and rewards. Please refer to the benchmark source code (https://github.com/rllab/rllab) for the complete specification of physics parameters. # 1.1. Basic Tasks Cart-Pole Balancing: In this task, an inverted pendulum is mounted on a pivot point on a cart. The cart itself is restricted to linear movement, achieved by applying horizontal forces. Due to the system's inherent instability, continuous cart movement is needed to keep the pendulum upright. The observation consists of the cart position x, the pole angle θ, the cart velocity ẋ, and the pole velocity θ̇. The 1D action consists of the horizontal force applied to the cart body. The reward function is given by r(s, a) := 10 − (1 − cos(θ)) − 10⁻⁵‖a‖₂². The episode terminates when |x| > 2.4 or |θ| > 0.2.
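The balancing reward and termination rule translate directly into code. The following is a minimal sketch under the stated specification, not the rllab implementation; the function names and the assumption that the action is a NumPy array are ours.

```python
import numpy as np

def cartpole_balancing_reward(theta, action):
    # r(s, a) := 10 - (1 - cos(theta)) - 1e-5 * ||a||_2^2
    return 10.0 - (1.0 - np.cos(theta)) - 1e-5 * np.sum(np.square(action))

def cartpole_balancing_done(x, theta):
    # Terminate when the cart leaves the track or the pole tilts too far.
    return abs(x) > 2.4 or abs(theta) > 0.2
```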
1604.06778#65
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
66
Cart-Pole Swing Up: This is a more complicated version of the previous task, in which the system should not only be able to balance the pole, but first succeed in swinging it up into an upright position. This task extends the working range of the inverted pendulum to 360°. This is a nonlinear extension of the previous task. It has the same observation and action as in balancing. The reward function is given by r(s, a) := cos(θ). The episode terminates when |x| > 3, with a penalty of −100.
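A minimal sketch of the swing-up reward and its terminal penalty, under the same assumptions as the balancing example (illustrative names, scalar state variables):

```python
import numpy as np

def cartpole_swingup_reward(theta):
    # r(s, a) := cos(theta); maximal when the pole is upright.
    return np.cos(theta)

def cartpole_swingup_terminal(x):
    # The episode ends with an extra -100 penalty when |x| > 3.
    done = abs(x) > 3.0
    penalty = -100.0 if done else 0.0
    return done, penalty
```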
1604.06778#66
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
67
Mountain Car: In this task, a car has to escape a valley by repetitive application of tangential forces. Because the maximal tangential force is limited, the car has to alternately drive up along the two slopes of the valley in order to build up enough inertia to overcome gravity. This poses an exploration challenge: until the goal is first reached, a locally optimal solution exists, namely driving to the point closest to the target and staying there for the rest of the episode. The observation is given by the horizontal position x and the horizontal velocity ẋ of the car. The reward is given by r(s, a) := −1 + height, with height the car's vertical offset. The episode terminates when the car reaches a target height of 0.6. Hence the goal is to reach the target as soon as possible.
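A minimal sketch of the reward and termination rule just described (names are ours, not from the benchmark code):

```python
def mountain_car_reward(height):
    # r(s, a) := -1 + height; the per-step -1 rewards finishing quickly.
    return -1.0 + height

def mountain_car_done(height, target_height=0.6):
    # The episode ends once the car reaches the target height.
    return height >= target_height
```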
1604.06778#67
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
68
Acrobot Swing Up: In this task, an under-actuated, two-link robot has to swing itself into an upright position. It consists of two joints, of which the first one has a fixed position and only the second one can exert torque. The goal is to swing the robot into an upright position and stabilize around that position. The controller not only has to swing the pendulum in order to build up inertia, similar to the Mountain Car task, but also has to decelerate it in order to prevent it from tipping over. The observation includes the two joint angles, θ1 and θ2, and their velocities, θ̇1 and θ̇2. The action is the torque applied at the second joint. The reward is defined as r(s, a) := −‖tip(s) − tip_target‖₂, where tip(s) computes the Cartesian position of the tip of the robot given the joint angles. No termination condition is applied.
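A sketch of the tip-distance reward. The link lengths and the convention that angles are measured from the downward vertical are assumptions for illustration, not the benchmark's exact parameters; under this convention, the upright target sits at (0, l1 + l2).

```python
import numpy as np

def acrobot_tip(theta1, theta2, l1=1.0, l2=1.0):
    # Planar forward kinematics for the two-link robot (assumed convention).
    x = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    y = -l1 * np.cos(theta1) - l2 * np.cos(theta1 + theta2)
    return np.array([x, y])

def acrobot_reward(theta1, theta2, tip_target):
    # r(s, a) := -||tip(s) - tip_target||_2
    return -np.linalg.norm(acrobot_tip(theta1, theta2) - tip_target)
```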
1604.06778#68
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
69
Double Inverted Pendulum Balancing: This task extends the Cart-Pole Balancing task by replacing the single-link pole with a two-link rigid structure. As in the former task, the goal is to stabilize the two-link pole near the upright position. This task is more difficult than single-pole balancing, since the system is even more unstable and requires the controller to actively maintain balance. The observation includes the cart position x, the joint angles (θ1 and θ2), and the joint velocities (θ̇1 and θ̇2). We encode each joint angle as its sine and cosine values. The action is the same as in the cart-pole tasks. The reward is given by r(s, a) := 10 − 0.01x_tip² − (y_tip − 2)², where x_tip, y_tip are the coordinates of the tip of the pole. The episode is terminated when y_tip ≤ 1. # 1.2. Locomotion Tasks
1604.06778#69
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
70
# 1.2. Locomotion Tasks Swimmer: The swimmer is a planar robot with 3 links and 2 actuated joints. Fluid is simulated through viscosity forces, which apply drag on each link, allowing the swimmer to move forward. This task is the simplest of all locomotion tasks, since there are no irrecoverable states in which the swimmer can get stuck, unlike other robots which may fall down or flip over. This places less burden on exploration. The 13-dim observation includes the joint angles, the joint velocities, as well as the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 0.005‖a‖₂², where v_x is the forward velocity. No termination condition is applied.
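The locomotion rewards below all share this "forward velocity minus a control penalty" form; only the coefficient changes (0.005 for the swimmer, hopper, and walker, 0.05 for the half-cheetah). A minimal sketch with illustrative names:

```python
import numpy as np

def locomotion_reward(forward_velocity, action, ctrl_coeff=0.005):
    # r(s, a) := v_x - ctrl_coeff * ||a||_2^2
    return forward_velocity - ctrl_coeff * np.sum(np.square(action))
```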
1604.06778#70
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
71
Hopper: The hopper is a planar monopod robot with 4 rigid links, corresponding to the torso, upper leg, lower leg, and foot, along with 3 actuated joints. More exploration is needed than in the swimmer task, since a stable hopping gait has to be learned without falling; otherwise, the hopper may get stuck in a local optimum of diving forward. The 20-dim observation includes the joint angles, the joint velocities, the coordinates of the center of mass, and the constraint forces. The reward is given by r(s, a) := v_x − 0.005‖a‖₂² + 1, where the last term is a bonus for being "alive." The episode is terminated when z_body < 0.7, where z_body is the z-coordinate of the body, or when |θ_y| ≥ 0.2, where θ_y is the forward pitch of the body.
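The hopper adds an alive bonus and a fall-detection termination to the generic locomotion reward. A sketch under the conventions above (the helper names are ours):

```python
import numpy as np

def hopper_reward(forward_velocity, action, alive_bonus=1.0):
    # r(s, a) := v_x - 0.005 * ||a||_2^2 + 1 (alive bonus)
    return forward_velocity - 0.005 * np.sum(np.square(action)) + alive_bonus

def hopper_done(z_body, pitch):
    # The episode ends when the body drops too low or pitches too far forward.
    return z_body < 0.7 or abs(pitch) >= 0.2
```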
1604.06778#71
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
72
Walker: The walker is a planar biped robot consisting of 7 links, corresponding to two legs and a torso, along with 6 actuated joints. This task is more challenging than the hopper, since it has more degrees of freedom and is also prone to falling. The 21-dim observation includes the joint angles, the joint velocities, and the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 0.005‖a‖₂². The episode is terminated when z_body < 0.8, when z_body > 2.0, or when |θ_y| > 1.0. Half-Cheetah: The half-cheetah is a planar biped robot with 9 rigid links, including two legs and a torso, along with 6 actuated joints. The 20-dim observation includes the joint angles, the joint velocities, and the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 0.05‖a‖₂². No termination condition is applied.
1604.06778#72
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
73
Ant: The ant is a quadruped with 13 rigid links, including four legs and a torso, along with 8 actuated joints. This task is more challenging than the previous tasks due to the higher degrees of freedom. The 125-dim observation includes joint angles, joint velocities, the coordinates of the center of mass, a (usually sparse) vector of contact forces, as well as the rotation matrix for the body. The reward is given by r(s, a) := v_x − 0.005 · ‖a‖₂² − C_contact + 0.05, where C_contact penalizes contact with the ground and is given by 5 × 10⁻⁴ · ‖F_contact‖₂², where F_contact is the contact force vector clipped to values between −1 and 1. The episode is terminated when z_body < 0.2 or when z_body > 1.0.
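The clipped contact-force penalty can be computed as in the following sketch; the accessors are hypothetical, and the coefficients simply follow the description above.

```python
import numpy as np

def ant_reward_and_done(vx, action, contact_forces, z_body):
    """Sketch of r(s, a) = v_x - 0.005 * ||a||^2 - C_contact + 0.05."""
    clipped = np.clip(contact_forces, -1.0, 1.0)           # clip forces to [-1, 1]
    contact_cost = 5e-4 * float(np.square(clipped).sum())  # C_contact
    reward = vx - 0.005 * float(np.square(action).sum()) - contact_cost + 0.05
    done = (z_body < 0.2) or (z_body > 1.0)
    return reward, done
```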
1604.06778#73
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
74
Simple Humanoid: This is a simplified humanoid model with 13 rigid links, including the head, body, arms, and legs, along with 10 actuated joints. The increased difficulty comes from the larger number of degrees of freedom as well as the need to maintain balance. The 102-dim observation includes the joint angles, joint velocities, vector of contact forces, and the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 5 × 10⁻⁴ · ‖a‖₂² − C_contact − C_deviation + 0.2, where C_contact = 5 × 10⁻⁶ · ‖F_contact‖₂² and C_deviation = 5 × 10⁻³ · (v_y² + v_z²) penalizes deviation from the forward direction. The episode is terminated when z_body < 0.8 or when z_body > 2.0. Full Humanoid: This is a humanoid model with 19 rigid links and 28 actuated joints. It has more degrees of freedom below the knees and elbows, which makes the system higher-dimensional and harder to learn. The 142-dim observation includes the joint angles, joint velocities, vector of contact forces, and the coordinates of the center of mass. The reward and termination condition are the same as in the Simple Humanoid model. # 1.3. Partially Observable Tasks Limited Sensors: The full description is included in the main text.
1604.06778#74
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
75
# 1.3. Partially Observable Tasks Limited Sensors: The full description is included in the main text. Noisy Observations and Delayed Actions: For all tasks, we use Gaussian noise with σ = 0.1. The time delay is as follows: Cart-Pole Balancing 0.15 sec, Cart-Pole Swing Up 0.15 sec, Mountain Car 0.15 sec, Acrobot Swing Up 0.06 sec, and Double Inverted Pendulum Balancing 0.06 sec. This corresponds to 3 discretization frames for each task. System Identification: For Cart-Pole Balancing and Cart-Pole Swing Up, the pole length varies uniformly between 50% and 150%. For Mountain Car, the width of the valley varies uniformly between 75% and 125%. For Acrobot Swing Up, each of the pole lengths varies uniformly between 50% and 150%. For Double Inverted Pendulum Balancing, each of the pole lengths varies uniformly between 83% and 167%. Please refer to the benchmark source code for reference values. # 1.4. Hierarchical Tasks Locomotion + Food Collection: During each episode, 8 food units and 8 bombs are placed in the environment. Collecting a food unit gives +1 reward, and collecting a bomb gives −1 reward. Hence the best cumulative reward for a given episode is 8.
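A minimal wrapper along these lines could add the Gaussian observation noise and the fixed action delay. This is only a sketch assuming a gym-style environment with reset()/step(), not the rllab implementation.

```python
import collections
import numpy as np

class NoisyDelayedEnv:
    """Sketch: Gaussian observation noise (sigma=0.1) plus an action delay
    of `delay_steps` discretization frames (3 in the setup above)."""

    def __init__(self, env, sigma=0.1, delay_steps=3):
        self.env, self.sigma, self.delay_steps = env, sigma, delay_steps
        self._pending = collections.deque()

    def _noisy(self, obs):
        return obs + np.random.normal(0.0, self.sigma, size=np.shape(obs))

    def reset(self):
        self._pending.clear()
        return self._noisy(self.env.reset())

    def step(self, action):
        self._pending.append(np.asarray(action))
        if len(self._pending) > self.delay_steps:
            applied = self._pending.popleft()                # action issued delay_steps ago
        else:
            applied = np.zeros_like(np.asarray(action))      # nothing queued yet
        obs, reward, done, info = self.env.step(applied)
        return self._noisy(obs), reward, done, info
```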
1604.06778#75
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
76
Locomotion + Maze: During each episode, a +1 reward is given when the robot reaches the goal. Otherwise, the robot receives a zero reward throughout the episode. # 2. Experiment Parameters For all batch gradient-based algorithms, we use the same time-varying feature encoding for the linear baseline: φ(s, t) = concat(s, s ⊙ s, 0.01t, (0.01t)², (0.01t)³, 1), where s is the state vector and ⊙ represents the element-wise product. Table 2 shows the experiment parameters for all four categories. We then detail the hyperparameter search ranges for the selected tasks and report the best hyperparameters, shown in Tables 3, 4, 5, 6, 7, and 8.

Table 2. Experiment Setup
                  Basic & Locomotion   Partially Observable   Hierarchical
Batch size        50,000               50,000                 50,000
Discount          0.99                 0.99                   0.99
Horizon           500                  100                    500
Num. iterations   500                  300                    500

Table 3. Learning Rate α for REINFORCE
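The time-varying baseline features can be written as a small helper; a sketch (the least-squares fit of the baseline weights is assumed, not shown):

```python
import numpy as np

def baseline_features(s, t):
    """phi(s, t) = concat(s, s * s, 0.01t, (0.01t)^2, (0.01t)^3, 1)."""
    s = np.asarray(s, dtype=float)
    tt = 0.01 * t
    return np.concatenate([s, s * s, [tt, tt ** 2, tt ** 3, 1.0]])

# The linear baseline predicts the return as baseline_features(s, t) @ w,
# with w fit by least squares on the empirical returns of sampled paths.
```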
1604.06778#76
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
77
Table 3. Learning Rate α for REINFORCE
                          Search Range            Best
Cart-Pole Swing Up        [1 × 10−4, 1 × 10−1]    5 × 10−3
Double Inverted Pendulum  [1 × 10−4, 1 × 10−1]    5 × 10−3
Swimmer                   [1 × 10−4, 1 × 10−1]    1 × 10−2
Ant                       [1 × 10−4, 1 × 10−1]    5 × 10−3

Table 4. Step Size δKL for TNPG
                          Search Range            Best
Cart-Pole Swing Up        [1 × 10−3, 5 × 100]     5 × 10−2
Double Inverted Pendulum  [1 × 10−3, 5 × 100]     3 × 10−2
Swimmer                   [1 × 10−3, 5 × 100]     1 × 10−1
Ant                       [1 × 10−3, 5 × 100]     3 × 10−1

Table 5. Step Size δKL for TRPO
                          Search Range            Best
Cart-Pole Swing Up        [1 × 10−3, 5 × 100]     5 × 10−2
Double Inverted Pendulum  [1 × 10−3, 5 × 100]     1 × 10−3
Swimmer                   [1 × 10−3, 5 × 100]     5 × 10−2
Ant                       [1 × 10−3, 5 × 100]     8 × 10−2

# Table 6. Step Size δKL for REPS
1604.06778#77
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06778
78
# Table 6. Step Size δKL for REPS
                          Search Range            Best
Cart-Pole Swing Up        [1 × 10−3, 5 × 100]     1 × 10−2
Double Inverted Pendulum  [1 × 10−3, 5 × 100]     8 × 10−1
Swimmer                   [1 × 10−3, 5 × 100]     3 × 10−1
Ant                       [1 × 10−3, 5 × 100]     8 × 10−1

Table 7. Initial Extra Noise for CEM
                          Search Range     Best
Cart-Pole Swing Up        [1 × 10−3, 1]    1 × 10−2
Double Inverted Pendulum  [1 × 10−3, 1]    1 × 10−1
Swimmer                   [1 × 10−3, 1]    1 × 10−1
Ant                       [1 × 10−3, 1]    1 × 10−1

Table 8. Initial Standard Deviation for CMA-ES
                          Search Range            Best
Cart-Pole Swing Up        [1 × 10−3, 1 × 103]     1 × 103
Double Inverted Pendulum  [1 × 10−3, 1 × 103]     3 × 10−1
Swimmer                   [1 × 10−3, 1 × 103]     1 × 10−1
Ant                       [1 × 10−3, 1 × 103]     1 × 10−1
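Since the search ranges above span several orders of magnitude, a natural way to draw candidates is log-uniformly; the sketch below shows that sampling (it is not necessarily the exact search procedure used to produce these tables).

```python
import numpy as np

def sample_log_uniform(low, high, size, rng=None):
    """Draw hyperparameter candidates log-uniformly from [low, high]."""
    rng = rng or np.random.default_rng()
    return np.exp(rng.uniform(np.log(low), np.log(high), size=size))

# e.g. candidate TRPO step sizes from the reported range [1e-3, 5]:
step_sizes = sample_log_uniform(1e-3, 5.0, size=10)
```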
1604.06778#78
Benchmarking Deep Reinforcement Learning for Continuous Control
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
http://arxiv.org/pdf/1604.06778
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
cs.LG, cs.AI, cs.RO
14 pages, ICML 2016
null
cs.LG
20160422
20160527
[ { "id": "1506.02438" }, { "id": "1512.04455" }, { "id": "1504.00702" }, { "id": "1509.02971" } ]
1604.06174
1
1 University of Washington 2 Dato Inc. 3 Massachusetts Institute of Technology # Abstract We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(√n) memory to train an n-layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory, giving a more memory-efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences. # 1 Introduction
1604.06174#1
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
2
# 1 Introduction In this paper, we propose a systematic approach to reduce the memory consumption of deep neural network training. We mainly focus on reducing the memory cost of storing intermediate results (feature maps) and gradients, as the size of the parameters is relatively small compared to the size of the intermediate feature maps in many common deep architectures. We use computation graph analysis to do automatic in-place operation and memory sharing optimizations. More importantly, we propose a novel method to trade computation for memory. As a result, we give a practical algorithm that costs O(√n) memory for feature maps to train an n-layer network with only double the forward pass computational cost. Interestingly, we also show that in the extreme case, it is possible to use as little as O(log n) memory for the feature maps to train an n-layer network.
1604.06174#2
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
3
We have recently witnessed the success of deep neural networks in many domains [8], such as computer vision, speech recognition, natural language processing and reinforcement learning. Much of this success is brought by innovations in new architectures of deep neural networks. Convolutional neural networks [15, 14, 13, 10] model the spatial patterns and give state-of-the-art results in computer vision tasks. Recurrent neural networks, such as long short-term memory [12], show inspiring results in sequence modeling and structure prediction. One common trend in these new models is to use deeper architectures [18, 14, 13, 10] to capture the complex patterns in a large amount of training data. Since the cost of storing feature maps and their gradients scales linearly with the depth of the network, our capability of exploring deeper models is limited by the device (usually a GPU) memory. For example, we already run out of memory in one of the current state-of-the-art models, as described in [11]. In the long run, an ideal machine learning system should be able to continuously learn from an increasing amount of training data. Since the optimal model size and complexity often grow with more training data, it is very important to have memory-efficient training algorithms.
1604.06174#3
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
4
Reducing memory consumption not only allows us to train bigger models. It also enables larger batch sizes for better device utilization and stability of batchwise operators such as batch normalization [13]. For memory-limited devices, it helps improve memory locality and potentially leads to better memory access patterns. It also enables us to switch from model parallelism to data parallelism for training deep convolutional neural networks, which can be beneficial in certain circumstances. Our solution enables us to train deeper convolutional neural networks, as well as recurrent neural networks with longer unrolling steps. We provide guidelines for deep learning frameworks to incorporate the memory optimization techniques proposed in this paper. We will also make our implementation of the memory optimization algorithm publicly available. # 2 Related Works
1604.06174#4
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
5
# 2 Related Works We can trace the idea of computational graphs and liveness analysis back to the literature on compiler optimizations [3]. An analogy can be drawn between optimizing a computer program and optimizing a deep neural network computational graph. For example, memory allocation in deep networks is similar to register allocation in a compiler. The formal analysis of computational graphs allows us to save memory in a principled way. Theano [5, 4] is a pioneering framework that brought the computation graph to deep learning, and it has been joined by recently introduced frameworks such as CNTK [2], Tensorflow [1] and MXNet [6]. Theano and Tensorflow use reference-count-based recycling and runtime garbage collection to manage memory during training, while MXNet uses a static memory allocation strategy prior to the actual computation. However, most existing frameworks focus on graph analysis to optimize computation after the gradient graph is constructed, but do not discuss the computation and memory trade-off.
1604.06174#5
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
6
The trade-off between memory and computation has been a long-standing topic in systems research. Although not widely known, the idea of dropping intermediate results is also known as the gradient checkpointing technique in the automatic differentiation literature [9]. We bring this idea to neural network gradient graph construction for general deep neural networks. Through discussions with our colleagues [19], we know that the idea of dropping computation has been applied in some limited, specific use-cases. In this paper, we propose a general methodology that works for general deep neural networks, including both convolutional and recurrent neural networks. Our results show that it is possible to train a general deep neural network with sublinear memory cost. More importantly, we propose an automatic planning algorithm to provide a good memory plan for real use-cases. The proposed gradient graph optimization algorithm can be readily combined with all the existing memory optimizations in the computational graph to further reduce the memory consumption of deep learning frameworks. There are other ways to train big models, such as swapping of CPU/GPU memory and the use of model parallel training [7, 16]. These are orthogonal approaches and can be used together with our algorithm to train even bigger models with fewer resources. Moreover, our algorithm does not need additional communication over PCI-E and can save the bandwidth for model/data parallel training. # 3 Memory Optimization with Computation Graph
1604.06174#6
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
7
# 3 Memory Optimization with Computation Graph We start by reviewing the concept of the computation graph and the memory optimization techniques. Some of these techniques are already used by existing frameworks such as Theano [5, 4], Tensorflow [1] and MXNet [6]. A computation graph consists of operational nodes and edges that represent the dependencies between the operations. Fig. 1 gives an example of the computation graph of a two-layer fully connected neural network. Here we use coarse-grained forward and backward operations to make the graph simpler. We further simplify the graph by hiding the weight nodes and the gradients of the weights. A computation graph used in practice can be more complicated and contains a mixture of fine- and coarse-grained operations. The analysis presented in this paper can be directly used in those more general cases. Once the network configuration (forward graph) is given, we can construct the corresponding backward pathway for gradient calculation. A backward pathway can be constructed by traversing
1604.06174#7
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
8
Once the network configuration (forward graph) is given, we can construct the corresponding backward pathway for gradient calculation. A backward pathway can be constructed by traversing
[Figure 1 appears here: three panels showing the network configuration, the gradient calculation graph, and a possible memory allocation plan for the two-layer fully connected network (fullc, sigmoid, softmax, log-loss nodes and their backward counterparts), with inplace and sharing annotations. Colored boxes mark the memory allocated for each operator output; the same color indicates shared memory. See the Figure 1 caption below.]
1604.06174#8
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
9
Figure 1: Computation graph and a possible memory allocation plan of a two-layer fully connected neural network training procedure. Each node represents an operation and each edge represents a dependency between the operations. Nodes with the same color share the memory used to store the output or back-propagated gradient of each operator. To make the graph clearer, we omit the weights and their output gradient nodes from the graph and assume that the gradients of the weights are also calculated during the backward operations. We also annotate two places where the inplace and sharing strategies are used. the configuration in reverse topological order, and applying the backward operators as in the normal back-propagation algorithm. The backward pathway in Fig. 1 represents the gradient calculation steps explicitly, so that the gradient calculation step in training is simplified to just a forward pass on the entire computation graph (including the gradient calculation pathway). The explicit gradient path also offers some other benefits (e.g., being able to calculate higher-order gradients), which is beyond our scope and will not be covered in this paper.
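A reverse topological traversal of the forward graph gives the order in which the backward operators are applied; here is a minimal, framework-agnostic sketch (the graph encoding and node names are illustrative, not the paper's implementation).

```python
def backward_order(forward_graph):
    """Return nodes in reverse topological order of `forward_graph`,
    given as {node: [input_nodes]}; assumes the graph is acyclic."""
    order, visited = [], set()

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        for parent in forward_graph.get(node, []):
            visit(parent)
        order.append(node)            # post-order == topological order

    for node in forward_graph:
        visit(node)
    return list(reversed(order))      # apply backward ops in this order

# The two-layer network of Fig. 1 (names are illustrative):
graph = {"fc1": ["input"], "sigmoid": ["fc1"],
         "fc2": ["sigmoid"], "softmax": ["fc2"], "loss": ["softmax", "label"]}
print(backward_order(graph))  # e.g. ['loss', 'label', 'softmax', 'fc2', ...]
```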
1604.06174#9
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
10
When training a deep convolutional/recurrent network, a great proportion of the memory is usually used to store the intermediate outputs and gradients. Each of these intermediate results corresponds to a node in the graph. A smart allocation algorithm is able to assign the least amount of memory to these nodes by sharing memory when possible. Fig. 1 shows a possible allocation plan for the example two-layer neural network. Two types of memory optimizations can be used:
• Inplace operation: Directly store the output values in the memory of an input value.
• Memory sharing: Memory used by intermediate results that are no longer needed can be recycled and used by another node.
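As a minimal numpy illustration of the two optimizations (not the framework allocator itself):

```python
import numpy as np

x = np.random.randn(4, 4)

# Inplace operation: write the result into the input's own buffer.
# Safe only because no later operation still needs the original x.
np.tanh(x, out=x)

# Memory sharing: once an intermediate result is dead, its buffer can be
# recycled for a later, unrelated intermediate result.
buf = np.empty_like(x)
np.dot(x, x, out=buf)               # intermediate A lives in buf
a_norm = float(np.abs(buf).sum())   # last use of A; buf is now free
np.add(x, 1.0, out=buf)             # intermediate B recycles the same buffer
```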
1604.06174#10
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
11
The allocation plan in Fig. 1 contains examples of both cases. The first sigmoid transformation is carried out using an inplace operation to save memory, which is then reused by its backward operation. The storage of the softmax gradient is shared with the gradient of the first fully connected layer. Ad hoc application of these optimizations can lead to errors. For example, if the input of an operation is still needed by another operation, applying an inplace operation on the input will lead to a wrong result. We can only share memory between nodes whose lifetimes do not overlap. There are multiple ways to solve this problem. One option is to construct the conflict graph, with each variable as a node and edges between variables with overlapping lifespans, and then run a graph-coloring algorithm. This will cost O(n²) computation time. We adopt a simpler heuristic with only O(n) time. The algorithm is demonstrated in Fig. 2. It traverses the graph in topological order, and uses a counter to indicate the liveness of each record. An inplace operation can happen when there is no other pending operation that depends on its input. Memory sharing happens when a recycled tag is used by another node. This can also serve
1604.06174#11
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
12
happen when there is no other pending operation that depends on its input. Memory sharing happens when a recycled tag is used by another node. This can also serve as a dynamic runtime algorithm that traverses the graph and uses a garbage collector to recycle outdated memory. We use it as a static memory allocation algorithm, to allocate the memory to each node before the execution starts, in order to avoid the overhead of garbage collection at runtime. Guidelines for Deep Learning Frameworks As we can see from the algorithm demonstration in Fig. 2, the data dependencies cause longer lifespans of each output and increase the memory
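A compact sketch of the O(n) counter-based heuristic is shown below; it ignores the inplace special case and assumes `topo_order` lists every node of `graph`.

```python
def static_allocate(graph, topo_order):
    """graph: {node: [input_nodes]}. Returns a memory tag per node and the
    number of distinct buffers, recycling a tag once its node's liveness
    counter (count of not-yet-executed consumers) reaches zero."""
    consumers = {n: 0 for n in topo_order}
    for node in topo_order:
        for inp in graph.get(node, []):
            consumers[inp] += 1

    tag_of, free_tags, next_tag = {}, [], 0
    for node in topo_order:
        if free_tags:                      # reuse a recycled tag if any
            tag_of[node] = free_tags.pop()
        else:                              # otherwise open a new buffer
            tag_of[node] = next_tag
            next_tag += 1
        for inp in graph.get(node, []):    # this op consumes its inputs
            consumers[inp] -= 1
            if consumers[inp] == 0:        # no pending consumer left
                free_tags.append(tag_of[inp])
    return tag_of, next_tag
```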
1604.06174#12
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
13
[Figure 2 appears here: a worked example of the memory allocation algorithm on a small computation graph with sigmoid and pooling operations. The panels show the initial state, five allocation steps (allocating tags for intermediate results, skipping inplace when an input is still alive, releasing tags whose data is dead, and reusing tags from a box of free tags), and the final memory plan. Legend: colored boxes are internal arrays, with the same color indicating shared memory; tags indicate memory sharing; each node carries a counter of dependent operations yet to be fulfilled; arrows mark data dependencies of completed and not-yet-completed operations. See the Figure 2 caption below.]
1604.06174#13
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
14
Figure 2: Memory allocation algorithm on a computation graph. Each node is associated with a liveness counter that counts the operations yet to be fulfilled. A temporal tag is used to indicate memory sharing. An inplace operation can be carried out when the current operation is the only one left (the counter of the input equals 1). The tag of a node can be recycled when the node’s counter goes to zero. consumption of a big network. It is important for deep learning frameworks to
• Declare the dependency requirements of gradient operators in a minimal manner.
• Apply liveness analysis on the dependency information and enable memory sharing.
It is important to declare minimal dependencies. For example, the allocation plan in Fig. 1 would not be possible if sigmoid-backward also depended on the output of the first fullc-forward. The dependency analysis can usually reduce the memory footprint of prediction for an n-layer network from O(n) to nearly O(1), because sharing can be done between the intermediate results. The technique also helps to reduce the memory footprint of training, although only up to a constant factor. # 4 Trade Computation for Memory # 4.1 General Methodology
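To make the point about minimal dependency declarations concrete: the sigmoid gradient can be written purely in terms of the forward output, so its backward operator need not declare a dependency on the input. The following is a small sketch, not any particular framework's operator interface.

```python
import numpy as np

def sigmoid_forward(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_backward(grad_output, output):
    # d sigmoid(x)/dx = y * (1 - y): only the forward *output* is needed,
    # so the input buffer can be freed or overwritten in place.
    return grad_output * output * (1.0 - output)
```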
1604.06174#14
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
15
# 4 Trade Computation for Memory # 4.1 General Methodology The techniques introduced in Sec. 3 can reduce the memory footprint for both training and prediction of deep neural networks. However, due to the fact that most gradient operators depend on the intermediate results of the forward pass, we still need O(n) memory for intermediate results to train an n-layer convolutional network or a recurrent neural network with a sequence of length n. In order to further reduce the memory, we propose to drop some of the intermediate results and recover them from an extra forward computation when needed. More specifically, during the backpropagation phase, we can re-compute the dropped intermediate results by running forward from the closest recorded results. To present the idea more clearly, we show a simplified algorithm for a linear chain feed-forward neural network in Alg. 1. Specifically, the neural network is divided into several segments. The algorithm only remembers the output of each segment and drops all the intermediate results within each segment. The dropped results are recomputed at the segment level during back-propagation. As a result, we only need to pay the memory cost to store the outputs of each segment plus the maximum memory cost to do backpropagation on each segment.
1604.06174#15
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
16
Alg. 1 can also be generalized to common computation graphs as long as we can divide the graph into segments. However, there are two drawbacks to directly applying Alg. 1: 1) users have to manually divide the graph and write a customized training loop; 2) we cannot benefit from the other memory optimizations presented in Sec. 3. We solve this problem by introducing a general gradient graph construction algorithm that uses essentially the same idea. The algorithm is given in Alg. 2. In this algorithm, the user specifies a function m : V → N on the nodes of a computation graph # Algorithm 1: Backpropagation with Data Dropping in a Linear Chain Network
1604.06174#16
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
17
# Algorithm 1: Backpropagation with Data Dropping in a Linear Chain Network
v ← input
for k = 1 to length(segments) do
    temp[k] ← v
    for i = segments[k].begin to segments[k].end − 1 do
        v ← layer[i].forward(v)
    end
end
g ← gradient(v, label)
for k = length(segments) to 1 do
    v ← temp[k]
    localtemp ← empty hashtable
    for i = segments[k].begin to segments[k].end − 1 do
        localtemp[i] ← v
        v ← layer[i].forward(v)
    end
    for i = segments[k].end − 1 to segments[k].begin do
        g ← layer[i].backward(g, localtemp[i])
    end
end
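A direct Python transcription of Alg. 1 might look as follows; layers are assumed to expose forward(v) and backward(g, v_input), and segments is a list of (begin, end) index ranges covering the chain (a sketch, not the authors' released code).

```python
def backprop_with_data_dropping(layers, segments, x, label, loss_gradient):
    """Segment-level checkpointing for a linear chain (Alg. 1)."""
    # Forward: keep only each segment's input, drop all other intermediates.
    v, checkpoints = x, []
    for begin, end in segments:
        checkpoints.append(v)
        for i in range(begin, end):
            v = layers[i].forward(v)

    # Backward: recompute each segment's intermediates, then backpropagate.
    g = loss_gradient(v, label)
    for (begin, end), v0 in zip(reversed(segments), reversed(checkpoints)):
        v, local = v0, {}
        for i in range(begin, end):
            local[i] = v                      # input to layer i
            v = layers[i].forward(v)
        for i in reversed(range(begin, end)):
            g = layers[i].backward(g, local[i])
    return g
```

With k segments of roughly n/k layers each, the peak number of stored intermediates is about k + n/k, which is minimized at k ≈ √n, matching the O(√n) memory cost stated above.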
1604.06174#17
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
18
to indicate how many times a result can be recomputed. We call m the mirror count function, as the re-computation is essentially duplicating (mirroring) the nodes. When all the mirror counts are set to 0, the algorithm degenerates to the normal gradient graph. To specify the re-computation pattern in Alg. 2, the user only needs to set m(v) = 1 for nodes within each segment and m(v) = 0 for the output node of each segment. The mirror count can also be larger than 1, which leads to a recursive generalization to be discussed in Sec 4.4. Fig. 3 shows an example of a memory optimized gradient graph. Importantly, Alg. 2 also outputs a traversal order for the computation, so the memory usage can be optimized. Moreover, this traversal order can help introduce control flow dependencies for frameworks that depend on runtime allocation. # 4.2 Drop the Results of Low Cost Operations
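Tying back to the mirror-count rule above (m(v) = 1 inside each segment, m(v) = 0 for segment outputs), the hypothetical helper in this sketch assigns mirror counts for a linear chain split into equal segments; the function name and the representation of nodes as integer indices are assumptions made for the example.

```python
def chain_mirror_counts(n_layers, segment_size):
    """Mirror counts for a linear chain split into equal segments:
    m(v) = 0 keeps the output (checkpoint), m(v) = 1 allows one recomputation."""
    m = {}
    for v in range(n_layers):
        is_segment_output = ((v + 1) % segment_size == 0) or (v == n_layers - 1)
        m[v] = 0 if is_segment_output else 1
    return m

# A 9-layer chain with segments of 3 layers: nodes 2, 5 and 8 are kept as
# checkpoints, all other intermediate outputs are dropped and recomputed.
print(chain_mirror_counts(9, 3))
```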
1604.06174#18
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
19
# 4.2 Drop the Results of Low Cost Operations
One quick application of the general methodology is to drop the results of low cost operations and keep the results that are time consuming to compute. This is usually useful in a Conv-BatchNorm-Activation pipeline in convolutional neural networks. We can always keep the result of the convolution, but drop the results of the batch normalization, activation function and pooling. In practice this translates to a memory saving with little computation overhead, as the computations for both batch normalization and activation functions are cheap.
# 4.3 An O(√n) Memory Cost Algorithm
Alg. 2 provides a general way to trade computation for memory. It remains to ask which intermediate results we should keep and which ones we should re-compute. Assume we divide the n-layer network into k segments; the memory cost to train this network is given as follows.

cost-total = max_i cost-of-segment(i) + O(k) = O(n/k) + O(k)    (1)
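The Sec 4.2 policy (keep expensive results, drop cheap ones) can also be phrased as a mirror-count assignment keyed on the operator type. The sketch below is a hedged illustration; the op-type strings, the graph representation and the set of "cheap" operators are assumptions for the example, not the API of any particular framework.

```python
# Operators whose results are cheap to recompute and can therefore be dropped.
CHEAP_OPS = {"batch_norm", "relu", "pool"}

def mirror_counts_by_op_cost(node_ops):
    """node_ops: dict mapping node id -> operator type.
    Cheap ops get m = 1 (recompute once); expensive ops such as convolution
    keep their outputs (m = 0)."""
    return {v: (1 if op in CHEAP_OPS else 0) for v, op in node_ops.items()}

# A Conv-BatchNorm-ReLU pipeline: only the convolution outputs are checkpointed.
pipeline = {0: "conv", 1: "batch_norm", 2: "relu",
            3: "conv", 4: "batch_norm", 5: "relu"}
print(mirror_counts_by_op_cost(pipeline))
```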
1604.06174#19
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
20
cost-total = max_i cost-of-segment(i) + O(k) = O(n/k) + O(k)    (1)

The first part of the equation is the memory cost to run back-propagation on each of the segments. Given that the segments are equally divided, this translates into an O(n/k) cost. The second part of the equation is the cost to store the intermediate outputs between segments. Setting k = √n, we get a cost of O(2√n). This algorithm only requires an additional forward pass during training, but reduces the memory cost to be sub-linear.
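The trade-off in Eq. (1) is easy to check numerically; the snippet below is only an illustrative calculation that ignores constant factors and assumes every layer's feature map has the same size.

```python
import math

def activation_memory(n, k):
    """Rough activation memory for n layers split into k equal segments:
    one segment's activations (n / k) plus the k checkpoints between segments."""
    return n / k + k

n = 1000
best_k = min(range(1, n + 1), key=lambda k: activation_memory(n, k))
print(best_k, round(math.sqrt(n)))          # the best k is close to sqrt(n) ~ 32
print(round(activation_memory(n, best_k)))  # about 63 activations instead of 1000
```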
1604.06174#20
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
21
[Figure 3: Memory optimized gradient graph generation example. The figure compares the normal gradient graph and the memory optimized gradient graph for a network configuration of repeated conv-forward, bn-forward and relu-forward operations; the forward path is mirrored to represent the re-computation that happens during gradient calculation, and the user specifies the mirror factor to control whether a result should be dropped or kept. Solid arrows indicate data dependencies, dashed arrows indicate control dependencies, and memory allocations of the same color are shared across op outputs.]

# Algorithm 2: Memory Optimized Gradient Graph Construction
Input: G = (V, pred), input computation graph, the pred[v] gives the predecessors array of
1604.06174#21
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]
1604.06174
22
# Algorithm 2: Memory Optimized Gradient Graph Construction
Input: G = (V, pred), input computation graph, the pred[v] gives the predecessors array of node v.
Input: gradient(succ_grads, output, inputs), symbolic gradient function that creates a gradient node given successor gradients, the output and the inputs
Input: m : V → N, m(v) gives how many times node v should be duplicated, m(v) = 0 means do not drop the output of node v.
a[v] ← v for v ∈ V
for k = 1 to max_{v∈V} m(v) do
    for v in topological-order(V) do
        if k ≤ m(v) then
            a[v] ← new node, same operator as v
            pred[a[v]] ← ∪_{c∈pred[v]} {a[c]}
        end
    end
end
V' ← topological-order(V)
for v in reverse-topological-order(V) do
    g[v] ← gradient([g[c] for c in successor(v)], a[v], [a[c] for c in pred[v]])
    V' ← append(V', topological-order(ancestors(g[v])) − V')
end
Output: G' = (V', pred) the new graph, the order in V' gives the logical execution order.
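For concreteness, the following Python sketch follows the shape of Alg. 2 on a toy symbolic-graph representation. It is a simplified reading under stated assumptions, not the paper's implementation: the Node record, the `gradient` callback and the naming scheme are invented for the example, and mirrored forward nodes are appended to the execution order eagerly rather than interleaved lazily as Alg. 2's traversal bookkeeping prescribes.

```python
from collections import namedtuple

# Toy symbolic node: a name, an operator tag and a list of predecessor nodes.
Node = namedtuple("Node", ["name", "op", "preds"])

def memory_optimized_gradient_graph(nodes, m, gradient):
    """nodes: list of Node in topological order (each node's preds appear earlier).
    m: dict node name -> mirror count (0 = keep output, >= 1 = allow recompute).
    gradient(succ_grads, output, inputs): returns a new symbolic gradient node."""
    a = {v.name: v for v in nodes}   # a[v]: node whose output feeds the backward pass
    order = list(nodes)              # logical execution order of the new graph

    # Step 1: mirror (duplicate) the nodes that are allowed to be recomputed.
    for k in range(1, max(m.values(), default=0) + 1):
        for v in nodes:
            if k <= m[v.name]:
                mirrored = Node("%s@mirror%d" % (v.name, k), v.op,
                                [a[p.name] for p in v.preds])
                a[v.name] = mirrored
                order.append(mirrored)

    # Step 2: build gradient nodes in reverse topological order, wiring them to
    # the mirrored copies so the backward pass triggers the recomputation.
    successors = {v.name: [] for v in nodes}
    for v in nodes:
        for p in v.preds:
            successors[p.name].append(v)
    g = {}
    for v in reversed(nodes):
        succ_grads = [g[s.name] for s in successors[v.name]]
        g[v.name] = gradient(succ_grads, a[v.name], [a[p.name] for p in v.preds])
        order.append(g[v.name])
    return g, order
```

Setting all mirror counts to 0 leaves `a[v]` pointing at the original nodes, reproducing the normal gradient graph as described for the degenerate case above.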
1604.06174#22
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(sqrt(n)) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory - giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
http://arxiv.org/pdf/1604.06174
Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
cs.LG
null
null
cs.LG
20160421
20160422
[ { "id": "1512.03385" }, { "id": "1507.06228" }, { "id": "1603.05027" }, { "id": "1510.08983" }, { "id": "1602.08124" } ]