In summary, our observations on both explicit and implicit regularizers consistently suggest that regularizers, when properly tuned, can help to improve the generalization performance. However, it is unlikely that the regularizers are the fundamental reason for generalization, as the networks continue to perform well after all the regularizers are removed.
# 4 FINITE-SAMPLE EXPRESSIVITY
Much effort has gone into characterizing the expressivity of neural networks, e.g., Cybenko (1989); Mhaskar (1993); Delalleau & Bengio (2011); Mhaskar & Poggio (2016); Eldan & Shamir (2016); Telgarsky (2016); Cohen & Shashua (2016). Almost all of these results are at the "population level", showing what functions of the entire domain can and cannot be represented by certain classes of neural networks with the same number of parameters. For example, it is known that at the population level depth k is generically more powerful than depth k - 1.
We argue that what is more relevant in practice is the expressive power of neural networks on a finite sample of size n. It is possible to transfer population level results to finite sample results using uniform convergence theorems. However, such uniform convergence bounds would require the sample size to be polynomially large in the dimension of the input and exponential in the depth of the network, posing a clearly unrealistic requirement in practice.
We instead directly analyze the finite-sample expressivity of neural networks, noting that this dramatically simplifies the picture. Specifically, as soon as the number of parameters p of a network is greater than n, even simple two-layer neural networks can represent any function of the input sample. We say that a neural network C can represent any function of a sample of size n in d dimensions if for every sample S ⊆ R^d with |S| = n and every function f : S → R, there exists a setting of the weights of C such that C(x) = f(x) for every x ∈ S.
Theorem 1. There exists a two-layer neural network with ReLU activations and 2n + d weights that can represent any function on a sample of size n in d dimensions.
The proof is given in Section C in the appendix, where we also discuss how to achieve width O(n/k) with depth k. We remark that it is a simple exercise to give bounds on the weights of the coefficient vectors in our construction. Lemma 1 gives a bound on the smallest eigenvalue of the matrix A. This can be used to give reasonable bounds on the weight of the solution w.
# 5 IMPLICIT REGULARIZATION: AN APPEAL TO LINEAR MODELS
Although deep neural nets remain mysterious for many reasons, we note in this section that it is not necessarily easy to understand the source of generalization for linear models either. Indeed, it is useful to appeal to the simple case of linear models to see if there are parallel insights that can help us better understand neural networks.
Suppose we collect n distinct data points {(x_i, y_i)} where x_i are d-dimensional feature vectors and y_i are labels. Letting loss denote a nonnegative loss function with loss(y, y) = 0, consider the empirical risk minimization (ERM) problem
$$\min_{w \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^{n} \mathrm{loss}(w^\top x_i, y_i) \qquad (2)$$
If d > n, then we can fit any labeling. But is it then possible to generalize with such a rich model class and no explicit regularization?
Let X denote the n × d data matrix whose i-th row is x_i^T. If X has rank n, then the system of equations Xw = y has an infinite number of solutions regardless of the right-hand side. We can find a global minimum in the ERM problem (2) by simply solving this linear system.
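To make this concrete, here is a minimal numpy sketch (not from the paper; the synthetic Gaussian data and the label distribution are assumptions made for the example) showing that once d ≥ n an unregularized linear model fits even completely random labels exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                       # more parameters than data points (d >= n)
X = rng.standard_normal((n, d))      # rows are the feature vectors x_i
y = rng.choice([-1.0, 1.0], size=n)  # completely random labels

# X has rank n almost surely, so Xw = y has infinitely many solutions;
# lstsq returns one of them (the minimum l2-norm one).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(X @ w, y))         # True: zero training error on random labels
```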
But do all global minima generalize equally well? Is there a way to determine when one global minimum will generalize whereas another will not? One popular way to understand the quality of minima is the curvature of the loss function at the solution. But in the linear case, the curvature of all optimal solutions is the same (Choromanska et al., 2015). To see this, note that in the case when y_i is a scalar,
$$\nabla_w^2 \, \frac{1}{n}\sum_{i=1}^{n} \mathrm{loss}(w^\top x_i, y_i) \;=\; \frac{1}{n} X^\top \mathrm{diag}(\beta)\, X, \qquad \beta_i = \left.\frac{\partial^2 \mathrm{loss}(z, y_i)}{\partial z^2}\right|_{z = w^\top x_i}$$
A similar formula can be found when y is vector valued. In particular, the Hessian is not a function of the choice of w. Moreover, the Hessian is degenerate at all global optimal solutions.
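As a small illustration (the squared loss and the synthetic data are assumptions made for this example, not choices from the paper), the Hessian formula above reduces to (2/n) XᵀX for the squared loss, so the curvature computed at two different weight vectors is identical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 30, 10
X = rng.standard_normal((n, d))

def empirical_risk_hessian(w):
    # For loss(z, y) = (z - y)^2, d^2 loss / dz^2 = 2 everywhere, so
    # (1/n) X^T diag(beta) X reduces to (2/n) X^T X; w never enters.
    beta = 2.0 * np.ones(n)
    return (X.T * beta) @ X / n

H1 = empirical_risk_hessian(rng.standard_normal(d))
H2 = empirical_risk_hessian(rng.standard_normal(d))
print(np.allclose(H1, H2))  # True: the curvature is the same at every w
```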
If curvature doesn't distinguish global minima, what does? A promising direction is to consider the workhorse algorithm, stochastic gradient descent (SGD), and inspect which solution SGD converges to. The SGD update takes the form w_{t+1} = w_t - η_t e_t x_{i_t}, where η_t is the step size and e_t is the prediction error. If w_0 = 0, the solution must have the form w = Σ_{i=1}^n α_i x_i for some coefficients α. Hence, if we run SGD we have that w = X^T α lies in the span of the data points. If we also perfectly interpolate the labels we have Xw = y. Enforcing both of these identities, this reduces to the single equation
$$X X^\top \alpha = y \qquad (3)$$
which has a unique solution. Note that this equation only depends on the dot products between the data points x_i. We have thus derived the "kernel trick" (Schölkopf et al., 2001), albeit in a roundabout fashion.
We can therefore perfectly fit any set of labels by forming the Gram matrix (a.k.a. the kernel matrix) on the data K = XX^T and solving the linear system Kα = y for α. This is an n × n linear system that can be solved on standard workstations whenever n is less than a hundred thousand, as is the case for small benchmarks like CIFAR10 and MNIST.
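A minimal sketch of this procedure on synthetic data (the data, the one-hot encoding of the labels, and the tiny problem size are assumptions for the example; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_classes = 500, 3072, 10            # e.g. CIFAR10-sized pixel vectors
X = rng.standard_normal((n, d))
Y = np.eye(n_classes)[rng.integers(0, n_classes, n)]   # one-hot targets

K = X @ X.T                                # n x n Gram (kernel) matrix
alpha = np.linalg.solve(K, Y)              # solve K alpha = Y exactly

# Predicting on new points only needs dot products with the training data.
X_new = rng.standard_normal((5, d))
pred = (X_new @ X.T @ alpha).argmax(axis=1)

print(np.allclose(K @ alpha, Y))           # True: the training labels are fit exactly
```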
Quite surprisingly, fitting the training labels exactly yields excellent performance for convex models. On MNIST with no preprocessing, we are able to achieve a test error of 1.2% by simply solving (3). Note that this is not exactly simple, as the kernel matrix requires 30GB to store in memory. Nonetheless, this system can be solved in under 3 minutes on a commodity workstation with 24 cores and 256 GB of RAM with a conventional LAPACK call. By first applying a Gabor wavelet transform to the data and then solving (3), the error on MNIST drops to 0.6%. Surprisingly, adding regularization does not improve either model's performance!
Similar results follow for CIFAR10. Simply applying a Gaussian kernel on pixels and using no regularization achieves 46% test error. By preprocessing with a random convolutional neural net with 32,000 random filters (this conv-net is the Coates & Ng (2012) net, but with the filters selected at random instead of with k-means), this test error drops to 17%. Adding ℓ2 regularization further reduces this number to 15%. Note that this is without any data augmentation.
Note that this kernel solution has an appealing interpretation in terms of implicit regularization. Simple algebra reveals that it is equivalent to the minimum ℓ2-norm solution of Xw = y. That is, out of all models that exactly fit the data, SGD will often converge to the solution with minimum norm. It is very easy to construct solutions of Xw = y that don't generalize: for example, one could fit a Gaussian kernel to the data and place the centers at random points. Another simple example would be to force the data to fit random labels on the test data. In both cases, the norm of the solution is significantly larger than the minimum norm solution.
Unfortunately, this notion of minimum norm is not predictive of generalization performance. For example, returning to the MNIST example, the ℓ2-norm of the minimum norm solution with no preprocessing is approximately 220. With wavelet preprocessing, the norm jumps to 390. Yet the test error drops by a factor of 2. So while this minimum-norm intuition may provide some guidance to new algorithm design, it is only a very small piece of the generalization story.
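The following toy check (synthetic data; the step size, iteration count, and squared loss are assumptions of the sketch) illustrates both points: SGD started from zero lands, up to numerical error, on the minimum ℓ2-norm interpolant X⁺y, while other interpolating solutions have visibly larger norm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w_min = np.linalg.pinv(X) @ y              # minimum l2-norm solution of Xw = y

w = np.zeros(d)                            # SGD from w0 = 0 on the squared loss
for _ in range(200_000):
    i = rng.integers(n)
    w -= 0.01 * (X[i] @ w - y[i]) * X[i]

# Any null-space component also interpolates, but increases the norm.
null_proj = np.eye(d) - np.linalg.pinv(X) @ X
w_other = w_min + 5.0 * null_proj @ rng.standard_normal(d)

print(np.linalg.norm(w - w_min))                        # small: SGD ~ min-norm solution
print(np.linalg.norm(w_min), np.linalg.norm(w_other))   # the alternative has larger norm
print(np.allclose(X @ w_other, y))                      # yet it fits the data exactly
```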
# 6 CONCLUSION
In this work we presented a simple experimental framework for defining and understanding a notion of effective capacity of machine learning models. The experiments we conducted emphasize that the effective capacity of several successful neural network architectures is large enough to shatter the
training data. Consequently, these models are in principle rich enough to memorize the training data. This situation poses a conceptual challenge to statistical learning theory, as traditional measures of model complexity struggle to explain the generalization ability of large artificial neural networks. We argue that we have yet to discover a precise formal measure under which these enormous models are simple. Another insight resulting from our experiments is that optimization continues to be empirically easy even if the resulting model does not generalize. This shows that the reasons why optimization is empirically easy must be different from the true cause of generalization.
# REFERENCES
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Peter L Bartlett. The Sample Complexity of Pattern Classification with Neural Networks - The Size of the Weights is More Important than the Size of the Network. IEEE Trans. Information Theory, 1998.
Peter L Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, March 2003.
Olivier Bousquet and André Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499-526, March 2002.
Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015.
Adam Coates and Andrew Y. Ng. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade, Reloaded. Springer, 2012.
Nadav Cohen and Amnon Shashua. Convolutional Rectifier Networks as Generalized Tensor Decompositions. In ICML, 2016.
G Cybenko. Approximation by superposition of sigmoidal functions. Mathematics of Control, Signals and Systems, 2(4):303-314, 1989.
Olivier Delalleau and Yoshua Bengio. Shallow vs. Deep Sum-Product Networks. In Advances in Neural Information Processing Systems, 2011.
E. Edgington and P. Onghena. Randomization Tests. Statistics: A Series of Textbooks and Monographs. Taylor & Francis, 2007. ISBN 9781584885894.
Ronen Eldan and Ohad Shamir. The Power of Depth for Feedforward Neural Networks. In COLT, 2016.
Moritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In ICML, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.
Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, 2015.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Department of Computer Science, University of Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, 2012.
Junhong Lin, Raffaello Camoriano, and Lorenzo Rosasco. Generalization Properties and Implicit Regularization for Multiple Passes SGM. In ICML, 2016.
Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, 2014.
Hrushikesh Mhaskar and Tomaso A. Poggio. Deep vs. shallow networks: An approximation theory perspective. CoRR, abs/1608.03287, 2016. URL http://arxiv.org/abs/1608.03287.
Hrushikesh Narhar Mhaskar. Approximation properties of a multilayered feedforward artificial neural network. Advances in Computational Mathematics, 1(1):61-80, 1993.
Sayan Mukherjee, Partha Niyogi, Tomaso Poggio, and Ryan Rifkin. Statistical learning: Stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Technical Report AI Memo 2002-024, Massachusetts Institute of Technology, 2002.
Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. CoRR, abs/1412.6614, 2014.
Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-Based Capacity Control in Neural Networks. In COLT, pp. 1376-1401, 2015.
Tomaso Poggio, Ryan Rifkin, Sayan Mukherjee, and Partha Niyogi. General conditions for predictivity in learning theory. Nature, 428(6981):419-422, 2004.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015. ISSN 1573-1405. doi: 10.1007/s11263-015-0816-y.
Bernhard Schölkopf, Ralf Herbrich, and Alex J Smola. A generalized representer theorem. In COLT, 2001.
Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Learnability, stability and uniform convergence. Journal of Machine Learning Research, 11:2635-2670, October 2010.
Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, pp. 2818-2826, 2016. doi: 10.1109/CVPR.2016.308.
Matus Telgarsky. Benefits of depth in neural networks. In COLT, 2016.
Vladimir N. Vapnik. Statistical Learning Theory. Adaptive and learning systems for signal processing, communications, and control. Wiley, 1998.
Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289-315, 2007.
[Figure 3 diagram: Conv, Inception, and Downsample modules (left) composed into the small Inception network, taking 28x28x3 inputs through a stack of Inception modules, mean pooling, and a fully connected 10-way output (right).]
Figure 3: The small Inception model adapted for the CIFAR10 dataset. On the left we show the Conv module, the Inception module and the Downsample module, which are used to construct the Inception architecture on the right.
# A EXPERIMENTAL SETUP
We focus on two image classification datasets, the CIFAR10 dataset (Krizhevsky & Hinton, 2009) and the ImageNet (Russakovsky et al., 2015) ILSVRC 2012 dataset.
The CIFAR10 dataset contains 50,000 training and 10,000 validation images, split into 10 classes. Each image is of size 32x32, with 3 color channels. We divide the pixel values by 255 to scale them into [0,1], crop from the center to get 28x28 inputs, and then normalize them by subtracting the mean and dividing by the adjusted standard deviation independently for each image with the per_image_whitening function in TensorFlow (Abadi et al., 2015).
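A rough numpy approximation of this preprocessing (the crop offsets and the lower bound on the standard deviation follow the usual per-image standardization recipe; the exact behavior of the TensorFlow op is not reproduced here):

```python
import numpy as np

def preprocess(image):                        # image: 32x32x3 uint8 array
    x = image.astype(np.float32) / 255.0      # scale pixel values into [0, 1]
    h, w, _ = x.shape
    top, left = (h - 28) // 2, (w - 28) // 2
    x = x[top:top + 28, left:left + 28, :]    # 28x28 center crop
    # Per-image standardization with an adjusted (lower-bounded) standard
    # deviation so that constant images do not cause a division by zero.
    adjusted_std = max(float(x.std()), 1.0 / np.sqrt(x.size))
    return (x - x.mean()) / adjusted_std
```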
For the experiments on CIFAR10, we test a simplified Inception (Szegedy et al., 2016) and Alexnet (Krizhevsky et al., 2012) by adapting the architectures to smaller input image sizes. We also test standard multi-layer perceptrons (MLPs) with various numbers of hidden layers.
The small Inception model uses a combination of 1x1 and 3x3 convolution pathways. The detailed architecture is illustrated in Figure 3. The small Alexnet is constructed from two (convolution 5x5 → max-pool 3x3 → local-response-normalization) modules followed by two fully connected layers with 384 and 192 hidden units, respectively. Finally, a 10-way linear layer is used for prediction. The MLPs use fully connected layers; MLP 1x512 means one hidden layer with 512 hidden units. All of the architectures use standard rectified linear activation functions (ReLU).
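For illustration, a hedged tf.keras sketch of the Conv and Inception building blocks of Figure 3 (the filter counts, padding, and the overall stacking are assumptions; this is not the authors' implementation):

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_module(x, filters, kernel_size, strides=1):
    # "Conv module": convolution -> batch norm -> ReLU activation.
    x = layers.Conv2D(filters, kernel_size, strides=strides, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def inception_module(x, ch1, ch3):
    # Parallel 1x1 and 3x3 pathways, concatenated along the channel axis.
    return layers.Concatenate()([conv_module(x, ch1, 1), conv_module(x, ch3, 3)])

inputs = tf.keras.Input(shape=(28, 28, 3))
x = conv_module(inputs, 96, 3)
x = inception_module(x, 32, 32)
x = layers.GlobalAveragePooling2D()(x)      # mean pooling before the classifier
outputs = layers.Dense(10)(x)               # 10-way linear output layer
model = tf.keras.Model(inputs, outputs)
```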
For all experiments on CIFAR10, we train using SGD with a momentum parameter of 0.9. An initial learning rate of 0.1 (for small Inception) or 0.01 (for small Alexnet and MLPs) is used, with a decay factor of 0.95 per training epoch. Unless otherwise specified, for the experiments with randomized labels or pixels, we train the networks without weight decay, dropout, or other forms of explicit regularization. Section 3 discusses the effects of various regularizers on fitting the networks and generalization.
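In update-rule form, this training setup amounts to roughly the following (a plain Python sketch of heavy-ball momentum and the per-epoch decay; the exact semantics of the TensorFlow optimizer may differ slightly):

```python
def learning_rate(epoch, base_lr=0.1, decay=0.95):
    # 0.1 for small Inception (0.01 for Alexnet/MLPs), decayed by 0.95 per epoch.
    return base_lr * decay ** epoch

def sgd_momentum_step(w, v, grad, lr, momentum=0.9):
    # Heavy-ball momentum: v <- momentum * v - lr * grad;  w <- w + v
    v = momentum * v - lr * grad
    return w + v, v
```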
The ImageNet dataset contains 1,281,167 training and 50,000 validation images, split into 1000 classes. Each image is resized to 299x299 with 3 color channels. In the experiment on ImageNet, we use the Inception V3 (Szegedy et al., 2016) architecture and reuse the data preprocessing and experimental setup from the TensorFlow package. The data pipeline is extended to allow disabling of data augmentation and feeding random labels that are consistent across epochs. We run the ImageNet experiment in a distributed asynchronous SGD system with 50 workers.
# B DETAILED RESULTS ON IMAGENET
Table 2: The top-1 and top-5 accuracy (in percentage) of the Inception v3 model on the ImageNet dataset. We compare the training and test accuracy with various regularization turned on and off, for both true labels and random labels. The original reported top-5 accuracy of the Alexnet on ILSVRC 2012 is also listed for reference. The numbers in parentheses are the best test accuracy during training, as a reference for potential performance gain of early stopping.
data aug | dropout | weight decay | top-1 train | top-5 train | top-1 test | top-5 test
---|---|---|---|---|---|---
ImageNet 1000 classes with the original labels | | | | | |
yes | yes | yes | 92.18 | 99.21 | 77.84 | 93.92
yes | no | no | 92.33 | 99.17 | 72.95 | 90.43
no | no | yes | 90.60 | 100.0 | 67.18 (72.57) | 86.44 (91.31)
no | no | no | 99.53 | 100.0 | 59.80 (63.16) | 80.38 (84.49)
Alexnet (Krizhevsky et al., 2012) | | | - | - | - | 83.6
ImageNet 1000 classes with random labels | | | | | |
no | yes | yes | 91.18 | 97.95 | 0.09 | 0.49
no | no | yes | 87.81 | 96.15 | 0.12 | 0.50
no | no | no | 95.20 | 99.14 | 0.11 | 0.56
Table 2 shows the performance on ImageNet with true labels and random labels, respectively.
# C PROOF OF THEOREM 1
Lemma 1. For any two interleaving sequences of n real numbers b_1 < x_1 < b_2 < x_2 < ... < b_n < x_n, the n × n matrix A = [max{x_i - b_j, 0}]_{ij} has full rank. Its smallest eigenvalue is min_i x_i - b_i.
Proof. By its definition, the matrix A is lower triangular, that is, all entries with i < j vanish. A basic linear algebra fact states that a lower-triangular matrix has full rank if and only if all of the entries on the diagonal are nonzero. Since x_i > b_i, we have that max{x_i - b_i, 0} > 0. Hence, A is invertible. The second claim follows directly from the fact that a lower-triangular matrix has all its eigenvalues on the main diagonal. This in turn follows from the first fact, since A - λI can have lower rank only if λ equals one of the diagonal values.
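A quick numerical sanity check of the lemma (the random interleaving sequences are an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
points = np.sort(rng.uniform(0.0, 1.0, 2 * n))
b, x = points[0::2], points[1::2]              # b_1 < x_1 < b_2 < ... < b_n < x_n

A = np.maximum(x[:, None] - b[None, :], 0.0)   # A_ij = max{x_i - b_j, 0}

print(np.linalg.matrix_rank(A) == n)                               # full rank
print(np.isclose(np.linalg.eigvals(A).real.min(), (x - b).min()))  # smallest eigenvalue
```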
Proof of Theorem 1. For weight vectors w, b ∈ R^n and a ∈ R^d, consider the function c: R^d → R,
$$c(x) = \sum_{j=1}^{n} w_j \max\{\langle a, x\rangle - b_j,\, 0\}$$
It is easy to see that c can be expressed by a depth 2 network with ReLU activations.
Now, fix a sample S = {z_1, ..., z_n} of size n and a target vector y ∈ R^n. To prove the theorem, we need to find weights a, b, w so that y_i = c(z_i) for all i ∈ {1, ..., n}.
First, choose a and b such that with x_i = ⟨a, z_i⟩ we have the interleaving property b_1 < x_1 < b_2 < ... < b_n < x_n. This is possible since all z_i's are distinct. Next, consider the set of n equations in the n unknowns w,
$$y_i = c(z_i), \qquad i \in \{1, \ldots, n\}.$$
We have c(z_i) = (Aw)_i, where A = [max{x_i - b_j, 0}]_{ij} is the matrix we encountered in Lemma 1. We chose a and b so that the lemma applies and hence A has full rank. We can now solve the linear system y = Aw to find suitable weights w.
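The construction can be checked numerically; a minimal sketch (the random sample, random targets, and the particular choice of b are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
Z = rng.standard_normal((n, d))               # the sample z_1, ..., z_n
y = rng.standard_normal(n)                    # arbitrary target values

a = rng.standard_normal(d)                    # a random a gives distinct projections a.s.
order = np.argsort(Z @ a)                     # relabel the sample so x_1 < ... < x_n
Z, y = Z[order], y[order]
x = Z @ a
b = np.concatenate(([x[0] - 1.0], (x[:-1] + x[1:]) / 2.0))   # interleaving b_j

A = np.maximum(x[:, None] - b[None, :], 0.0)  # the lower-triangular matrix of Lemma 1
w = np.linalg.solve(A, y)                     # output-layer weights; 2n + d weights total

def c(z):
    # The depth-2 ReLU network of the proof: c(z) = sum_j w_j * max(<a, z> - b_j, 0)
    return float(np.sum(w * np.maximum(z @ a - b, 0.0)))

print(np.allclose([c(z) for z in Z], y))      # True: every target is fit exactly
```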
While the construction in the previous proof has inevitably high width given that the depth is 2, it is possible to trade width for depth. The construction is as follows. With the notation from the proof, and assuming w.l.o.g. that x_1, ..., x_n ∈ [0, 1], partition the interval [0, 1] into b disjoint intervals I_1, ..., I_b so that each interval I_j contains n/b points. At layer j, apply the construction from the proof to all points in I_j. This requires O(n/b) nodes at level j. This construction results in a circuit of width O(n/b) and depth b + 1 which so far has b outputs (one from each layer). It remains to implement a multiplexer which selects one of the b outputs based on which interval a given input x falls into. This boils down to implementing one (approximate) indicator function f_j for each interval I_j and outputting Σ_j f_j(x) o_j, where o_j is the output of layer j. This results in a single-output circuit. Implementing a single indicator function requires constant size and depth with ReLU activations. Hence, the final size of the construction is O(n) and the depth is b + c for some constant c. Setting k = b + c gives the next corollary.
Corollary 1. For every k ≥ 2, there exists a neural network with ReLU activations of depth k, width O(n/k) and O(n + d) weights that can represent any function on a sample of size n in d dimensions.
# D RESULTS OF IMPLICIT REGULARIZATION FOR LINEAR MODELS
Table 3: Generalizing with kernels. The test error associated with solving the kernel equation (3) on small benchmarks. Note that changing the preprocessing can significantly change the resulting test error.
data set | pre-processing | test error
---|---|---
MNIST | none | 1.2%
MNIST | Gabor filters | 0.6%
CIFAR10 | none | 46%
CIFAR10 | random conv-net | 17%
Table 3 lists the experimental results for the linear models described in Section 5.
# E FITTING RANDOM LABELS WITH EXPLICIT REGULARIZATION
In Section 3, we showed that it is difficult to say that commonly used explicit regularizers count as a fundamental phase change in the generalization capability of deep nets. In this appendix, we add some experiments to investigate how explicit regularizers affect the ability to fit random labels.
1611.03530 | 48 | Table 4: Results on fitting random labels on the CIFAR10 dataset with weight decay and data augmentation.
Model | Regularizer | Training Accuracy
Inception | Weight decay | 100%
Alexnet | Weight decay | Failed to converge
MLP 3x512 | Weight decay | 100%
MLP 1x512 | Weight decay | 99.21%
Inception | Random Cropping [1] | 99.93%
Inception | Augmentation [2] | 99.28%
From Table 4, we can see that with weight decay at its default coefficient for each model, all models except Alexnet are still able to fit random labels. We also tested random cropping and data augmentation with the Inception architecture. By changing the default weight decay factor from 0.95 to 0.999 and running for more epochs, we observe overfitting to the random labels in both cases. Convergence is expected to take longer because data augmentation greatly enlarges the training set (though many samples are no longer i.i.d.).
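As a rough illustration of this kind of run, the PyTorch sketch below trains a small MLP with weight decay on randomly labeled random inputs and reports training accuracy. The architecture, optimizer settings, and weight-decay coefficient are assumptions chosen for illustration, not the configurations behind Table 4.

```python
# Sketch: a small MLP with explicit weight decay can still drive training
# accuracy on random labels toward 100% (sizes and hyperparameters assumed).
import torch
from torch import nn

torch.manual_seed(0)
n, d, k = 1024, 256, 10
X = torch.randn(n, d)
y = torch.randint(0, k, (n,))            # random labels

model = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, k))
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9,
                      weight_decay=5e-4)  # explicit regularization
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(1) == y).float().mean().item()
print(f"training accuracy on random labels: {acc:.3f}")
```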
[1] In random cropping and augmentation, a new randomly modified image is used in each epoch, but the (randomly assigned) labels are kept consistent across all epochs. "Training accuracy" means a slightly different thing here because the training set differs in each epoch: we report the global average of the online accuracy computed on each augmented mini-batch.
[2] Data augmentation includes random left-right flipping and random rotation up to 25 degrees. | 1611.03530#48 | Understanding deep learning requires rethinking generalization | Despite their massive size, successful deep artificial neural networks can
exhibit a remarkably small difference between training and test performance.
Conventional wisdom attributes small generalization error either to properties
of the model family, or to the regularization techniques used during training.
Through extensive systematic experiments, we show how these traditional
approaches fail to explain why large neural networks generalize well in
practice. Specifically, our experiments establish that state-of-the-art
convolutional networks for image classification trained with stochastic
gradient methods easily fit a random labeling of the training data. This
phenomenon is qualitatively unaffected by explicit regularization, and occurs
even if we replace the true images by completely unstructured random noise. We
corroborate these experimental findings with a theoretical construction showing
that simple depth two neural networks already have perfect finite sample
expressivity as soon as the number of parameters exceeds the number of data
points as it usually does in practice.
We interpret our experimental findings by comparison with traditional models. | http://arxiv.org/pdf/1611.03530 | Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals | cs.LG | Published in ICLR 2017 | null | cs.LG | 20161110 | 20170226 | [] |
1611.02779 | 0 | arXiv:1611.02779v2 [cs.AI] 10 Nov 2016
Under review as a conference paper at ICLR 2017
# RL²: FAST REINFORCEMENT LEARNING VIA SLOW REINFORCEMENT LEARNING
Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel
UC Berkeley, Department of Electrical Engineering and Computer Science
OpenAI
{rocky, joschu, peter}@openai.com, [email protected], {ilyasu, pieter}@openai.com
# ABSTRACT | 1611.02779#0 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 1 | Deep reinforcement learning (deep RL) has been successful in learning sophis- ticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, bene- fiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a âfastâ reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL?, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (âslowâ) RL algorithm. The RNN receives all information a typical RL algorithm would receive, including ob- servations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the âfastâ RL algorithm on the current (previously unseen) MDP. We evaluate RL? experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-armed ban- dit problems and finite | 1611.02779#1 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 2 | on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-armed ban- dit problems and finite MDPs. After RL? is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large- scale side, we test RL? on a vision-based navigation task and show that it scales up to high-dimensional problems. | 1611.02779#2 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 3 | # 1 INTRODUCTION
In recent years, deep reinforcement learning has achieved many impressive results, including playing Atari games from raw pixels (Guo et al., 2014; Mnih et al., 2015; Schulman et al., 2015), and acquiring advanced manipulation and locomotion skills (Levine et al., 2016; Lillicrap et al., 2015; Watter et al., 2015; Heess et al., 2015; Schulman et al., 2015; 2016). However, many of the successes come at the expense of high sample complexity. For example, the state-of-the-art Atari results require tens of thousands of episodes of experience (Mnih et al., 2015) per game. To master a game, one would need to spend nearly 40 days playing it with no rest. In contrast, humans and animals are capable of learning a new task in a very small number of trials. Continuing the previous example, the human player in Mnih et al. (2015) only needed 2 hours of experience before mastering a game. We argue that the reason for this sharp contrast is largely due to the lack of a good prior, which results in these deep RL agents needing to rebuild their knowledge about the world from scratch. | 1611.02779#3 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 4 | Although Bayesian reinforcement learning provides a solid framework for incorporating prior knowledge into the learning process (Strens, 2000; Ghavamzadeh et al., 2015; Kolter & Ng, 2009), exact computation of the Bayesian update is intractable in all but the simplest cases. Thus, practical reinforcement learning algorithms often incorporate a mixture of Bayesian and domain-specific ideas to bring down sample complexity and computational burden. Notable examples include guided policy search with unknown dynamics (Levine & Abbeel, 2014) and PILCO (Deisenroth & Rasmussen, 2011). These methods can learn a task using a few minutes to a few hours of real experience, compared to days or even weeks required by previous methods (Schulman et al., 2015; 2016; Lillicrap et al., 2015). However, these methods tend to make assumptions about the environment (e.g., instrumentation for access to the state at learning time), or become computationally intractable in high-dimensional settings (Wahlström et al., 2015). | 1611.02779#4 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 5 | Under review as a conference paper at ICLR 2017 Rather than hand-designing domain-specific reinforcement learning algorithms, we take a different approach in this paper: we view the learning process of the agent itself as an objective, which can be optimized using standard reinforcement learning algorithms. The objective is averaged across all possible MDPs according to a specific distribution, which reflects the prior that we would like to distill into the agent. We structure the agent as a recurrent neural network, which receives past rewards, actions, and termination flags as inputs in addition to the normally received observations. Furthermore, its internal state is preserved across episodes, so that it has the capacity to perform learning in its own hidden activations. The learned agent thus also acts as the learning algorithm, and can adapt to the task at hand when deployed. We evaluate this approach on two sets of classical problems, multi-armed bandits and tabular MDPs. These problems have been extensively studied, and there exist algorithms that achieve asymptoti- cally optimal performance. We demonstrate that our method, named RL?, can achieve performance comparable with these theoretically justified algorithms. Next, we evaluate RL? on a vision-based navigation | 1611.02779#5 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 6 | We demonstrate that our method, named RL², can achieve performance comparable with these theoretically justified algorithms. Next, we evaluate RL² on a vision-based navigation task implemented using the ViZDoom environment (Kempka et al., 2016), showing that RL² can also scale to high-dimensional problems.
2 METHOD
2.1 PRELIMINARIES
We define a discrete-time finite-horizon discounted Markov decision process (MDP) by a tuple M = (S, A, P, r, ρ_0, γ, T), in which S is a state set, A an action set, P : S × A × S → R_+ a transition probability distribution, r : S × A → [−R_max, R_max] a bounded reward function, ρ_0 : S → R_+ an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. In policy search methods, we typically optimize a stochastic policy π_θ : S × A → R_+, parametrized by θ. The objective is to maximize its expected discounted return, η(π_θ) = E_τ[Σ_t γ^t r(s_t, a_t)], where τ = (s_0, a_0, ...) denotes the whole trajectory, s_0 ~ | 1611.02779#6 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
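A minimal sketch of the discounted-return objective η(π_θ) defined in the preliminaries above, estimated by Monte Carlo over sampled trajectories. The dummy rollout is a stand-in for interaction with an actual MDP under the current policy.

```python
# Monte Carlo estimate of eta(pi_theta) = E_tau[sum_t gamma^t r(s_t, a_t)].
import numpy as np

def discounted_return(rewards, gamma):
    # sum_t gamma^t * r_t for one trajectory
    return sum(g * r for g, r in zip(gamma ** np.arange(len(rewards)), rewards))

def estimate_objective(sample_trajectory, gamma=0.99, num_rollouts=100):
    returns = [discounted_return(sample_trajectory(), gamma)
               for _ in range(num_rollouts)]
    return float(np.mean(returns))

# Example with a dummy rollout that returns a list of rewards.
rng = np.random.default_rng(0)
print(estimate_objective(lambda: rng.normal(size=10).tolist()))
```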
1611.02779 | 7 | η(π_θ) = E_τ[Σ_t γ^t r(s_t, a_t)], where τ = (s_0, a_0, ...) denotes the whole trajectory, s_0 ~ ρ_0(s_0), a_t ~ π_θ(a_t|s_t), and s_{t+1} ~ P(s_{t+1}|s_t, a_t).
2.2 FORMULATION
We now describe our formulation, which casts learning an RL algorithm as a reinforcement learning problem, and hence the name RL². We assume knowledge of a set of MDPs, denoted by M, and a distribution over them: ρ_M : M → R_+. We only need to sample from this distribution. We use n to denote the total number of episodes the agent is allowed to spend with a specific MDP. We define a trial to be such a series of episodes of interaction with a fixed MDP.
Figure 1: Procedure of agent-environment interaction
This process of interaction between an agent and the environment is illustrated in Figure 1. Here, each trial happens to consist of two episodes, hence n = 2. For each trial, a separate MDP is drawn from ρ_M, and for each episode, a fresh s_0 is drawn from the initial state distribution specific to the corresponding MDP. Upon receiving an action | 1611.02779#7 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 8 | separate MDP is drawn from ρ_M, and for each episode, a fresh s_0 is drawn from the initial state distribution specific to the corresponding MDP. Upon receiving an action a_t produced by the agent, the environment computes reward r_t, steps forward, and computes the next state s_{t+1}. If the episode has terminated, it sets the termination flag d_t to 1, which otherwise defaults to 0. Together, the next state s_{t+1}, action | 1611.02779#8 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 9 | a_t, reward r_t, and termination flag d_t are concatenated to form the input to the policy¹, which, conditioned on the hidden state h_{t+1}, generates the next hidden state h_{t+2} and action a_{t+1}. At the end of an episode, the hidden state of the policy is preserved to the next episode, but not preserved between trials.
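A hedged sketch of this interaction protocol is given below: the hidden state persists across episode boundaries within a trial and is reset only between trials, and the per-step input is the tuple (s, a, r, d). The helper names (sample_mdp, make_env, policy_step, init_hidden) are stand-ins, not the authors' implementation.

```python
# Sketch of one RL^2 trial: one sampled MDP, n episodes, hidden state carried
# across episodes, reward accumulated over the whole trial.
import numpy as np

def run_trial(sample_mdp, make_env, policy_step, init_hidden, n_episodes):
    env = make_env(sample_mdp())          # one MDP per trial
    h = init_hidden()                     # reset only at the start of a trial
    total_reward = 0.0
    for _ in range(n_episodes):
        s = env.reset()                   # fresh s_0 each episode, same MDP
        a, r, d = 0, 0.0, 0               # placeholder initial inputs
        while True:
            a, h = policy_step((s, a, r, d), h)   # input is (s, a, r, d)
            s, r, d = env.step(a)
            total_reward += r
            if d:                         # termination flag ends the episode
                break
    return total_reward                   # objective: reward over the trial
```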
The objective under this formulation is to maximize the expected total discounted reward accumulated during a single trial rather than a single episode. Maximizing this objective is equivalent to minimizing the cumulative pseudo-regret (Bubeck & Cesa-Bianchi, 2012). Since the underlying MDP changes across trials, as long as different strategies are required for different MDPs, the agent must act differently according to its belief over which MDP it is currently in. Hence, the agent is forced to integrate all the information it has received, including past actions, rewards, and termination flags, and adapt its strategy continually. In this way, we have set up an end-to-end optimization process, where the agent is encouraged to learn a "fast" reinforcement learning algorithm. | 1611.02779#9 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 10 | For clarity of exposition, we have defined the "inner" problem (of which the agent sees n episodes per trial) to be an MDP rather than a POMDP. However, the method can also be applied in the partially observed setting without any conceptual changes. In the partially observed setting, the agent is faced with a sequence of POMDPs, and it receives an observation o_t instead of the state s_t at time t. The visual navigation experiment in Section 3.3 is actually an instance of this POMDP setting.
2.3 POLICY REPRESENTATION
We represent the policy as a general recurrent neural network. At each timestep, it receives the tuple (s, a, r, d) as input, which is embedded using a function φ(s, a, r, d) and provided as input to an RNN. To alleviate the difficulty of training RNNs due to vanishing and exploding gradients (Bengio et al., 1994), we use Gated Recurrent Units (GRUs) (Cho et al., 2014), which have been demonstrated to have good empirical performance (Chung et al., 2014; Jozefowicz et al., 2015). The output of the GRU is fed to a fully connected layer followed by a softmax function, which forms the distribution over actions. | 1611.02779#10 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
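A minimal PyTorch sketch of the policy architecture described in Section 2.3 above: embed the (s, a, r, d) tuple, feed it to a GRU one timestep at a time, and map the GRU output through a fully connected layer and softmax over actions. The sizes and the concatenation-based embedding are assumptions made for illustration; the paper's implementation details may differ.

```python
import torch
from torch import nn

class RL2Policy(nn.Module):
    def __init__(self, obs_dim, num_actions, hidden_dim=64):
        super().__init__()
        # phi(s, a, r, d): concatenate obs, one-hot action, reward, done flag
        in_dim = obs_dim + num_actions + 2
        self.gru = nn.GRU(in_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)
        self.num_actions = num_actions

    def forward(self, obs, prev_action, prev_reward, prev_done, h):
        a_onehot = nn.functional.one_hot(prev_action, self.num_actions).float()
        x = torch.cat([obs, a_onehot,
                       prev_reward.unsqueeze(-1), prev_done.unsqueeze(-1)], -1)
        out, h = self.gru(x.unsqueeze(1), h)        # one timestep
        probs = torch.softmax(self.head(out[:, 0]), dim=-1)
        return probs, h

policy = RL2Policy(obs_dim=10, num_actions=5)
h0 = torch.zeros(1, 1, 64)                          # (layers, batch, hidden)
probs, h1 = policy(torch.zeros(1, 10), torch.zeros(1, dtype=torch.long),
                   torch.zeros(1), torch.zeros(1), h0)
print(probs.shape)                                  # torch.Size([1, 5])
```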
1611.02779 | 11 | We have also experimented with alternative architectures which explicitly reset part of the hidden state each episode of the sampled MDP, but we did not find any improvement over the simple architecture described above.
2.4 POLICY OPTIMIZATION
After formulating the task as a reinforcement learning problem, we can readily use standard off-the-shelf RL algorithms to optimize the policy. We use a first-order implementation of Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), because of its excellent empirical performance, and because it does not require excessive hyperparameter tuning. For more details, we refer the reader to the original paper. To reduce variance in the stochastic gradient estimation, we use a baseline which is also represented as an RNN using GRUs as building blocks. We optionally apply Generalized Advantage Estimation (GAE) (Schulman et al., 2016) to further reduce the variance.
# 3 EVALUATION
We designed experiments to answer the following questions:
• Can RL² learn algorithms that achieve good performance on MDP classes with special structure, relative to existing algorithms tailored to this structure that have been proposed in the literature?
• Can RL² scale to high-dimensional tasks? | 1611.02779#11 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
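A short sketch of Generalized Advantage Estimation, the variance-reduction step mentioned in Section 2.4 above. The gamma and lambda values are illustrative; the paper's hyperparameters may differ.

```python
# GAE: A_t = sum_l (gamma * lam)^l * delta_{t+l}, with delta the TD residual.
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    # values has length len(rewards) + 1 (bootstrap value for the final state)
    deltas = rewards + gamma * values[1:] - values[:-1]
    adv = np.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

r = np.array([1.0, 0.0, 0.5])
v = np.array([0.2, 0.1, 0.3, 0.0])       # baseline predictions, incl. bootstrap
print(gae_advantages(r, v))
```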
1611.02779 | 12 | • Can RL² scale to high-dimensional tasks?
For the first question, we evaluate RL² on two sets of tasks, multi-armed bandits (MAB) and tabular MDPs. These problems have been studied extensively in the reinforcement learning literature, and this body of work includes algorithms with guarantees of asymptotic optimality. We demonstrate that our approach achieves comparable performance to these theoretically justified algorithms.
¹ To make sure that the inputs have a consistent dimension, we use placeholder values for the initial input to the policy.
For the second question, we evaluate RL² on a vision-based navigation task. Our experiments show that the learned policy makes effective use of the learned visual information and also short-term information acquired from previous episodes.
3.1 MULTI-ARMED BANDITS | 1611.02779#12 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 13 | 3.1 MULTI-ARMED BANDITS
Multi-armed bandit problems are a subset of MDPs where the agent's environment is stateless. Specifically, there are k arms (actions), and at every time step, the agent pulls one of the arms, say i, and receives a reward drawn from an unknown distribution: our experiments take each arm to be a Bernoulli distribution with parameter p_i. The goal is to maximize the total reward obtained over a fixed number of time steps. The key challenge is balancing exploration and exploitation: "exploring" each arm enough times to estimate its distribution (p_i), but eventually switching over to "exploitation" of the best arm. Despite the simplicity of multi-armed bandit problems, their study has led to a rich theory and a collection of algorithms with optimality guarantees.
Using RL², we can train an RNN policy to solve bandit problems by training it on a given distribution p_M. If the learning is successful, the resulting policy should be able to perform competitively with the theoretically optimal algorithms. We randomly generated bandit problems by sampling each parameter p_i from the uniform distribution on [0, 1]. After training the RNN policy with RL², we compared it against the following strategies:
• Random: this is a baseline strategy, where the agent pulls a random arm each time. | 1611.02779#13 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 14 | • Random: this is a baseline strategy, where the agent pulls a random arm each time.
• Gittins index (Gittins, 1979): this method gives the Bayes optimal solution in the discounted infinite-horizon case, by computing an index separately for each arm, and taking the arm with the largest index. While this work shows it is sufficient to independently compute an index for each arm (hence avoiding combinatorial explosion with the number of arms), it doesn't show how to tractably compute these individual indices exactly. We follow the practical approximations described in Gittins et al. (2011), Chakravorty & Mahajan (2013), and Whittle (1982), and choose the best-performing approximation for each setup.
• UCB1 (Auer, 2002): this method estimates an upper-confidence bound, and pulls the arm with the largest value of ucb_i(t) = μ̂_i(t-1) + c·sqrt(2 log t / T_i(t-1)), where μ̂_i(t-1) is the estimated mean parameter for the ith arm, T_i(t-1) is the number of times the ith arm has been pulled, and c is a tunable hyperparameter (Audibert & Munos, 2011). We initialize the statistics with exactly one success and one failure, which corresponds to a Beta(1, 1) prior. | 1611.02779#14 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 15 | • Thompson sampling (TS) (Thompson, 1933): this is a simple method which, at each time step, samples a list of arm means from the posterior distribution, and chooses the best arm according to this sample. It has been demonstrated to compare favorably to UCB1 empirically (Chapelle & Li, 2011). We also experiment with an optimistic variant (OTS) (May et al., 2012), which samples N times from the posterior, and takes the one with the highest probability.
• ε-Greedy: in this strategy, the agent chooses the arm with the best empirical mean with probability 1 - ε, and chooses a random arm with probability ε. We use the same initialization as UCB1.
• Greedy: this is a special case of ε-Greedy with ε = 0.
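The sketch below runs three of the classical baselines listed above (ε-greedy, UCB1 with the confidence form given above, and Thompson sampling) on Bernoulli bandits whose arm means are drawn from Uniform[0, 1]. The exploration constants are illustrative choices, not the tuned values from the paper's grid search.

```python
import numpy as np

def run_bandit(select_arm, k=10, horizon=100, rng=None):
    rng = rng or np.random.default_rng(0)
    p = rng.uniform(size=k)                     # true arm means
    succ, fail = np.ones(k), np.ones(k)         # Beta(1, 1) prior counts
    total = 0.0
    for t in range(1, horizon + 1):
        i = select_arm(succ, fail, t, rng)
        r = float(rng.random() < p[i])
        succ[i] += r
        fail[i] += 1.0 - r
        total += r
    return total

def eps_greedy(succ, fail, t, rng, eps=0.1):
    mean = succ / (succ + fail)
    return rng.integers(len(mean)) if rng.random() < eps else int(mean.argmax())

def ucb1(succ, fail, t, rng, c=0.5):
    pulls = succ + fail - 2.0 + 1e-8            # counts beyond the prior
    mean = succ / (succ + fail)
    return int((mean + c * np.sqrt(2.0 * np.log(t) / pulls)).argmax())

def thompson(succ, fail, t, rng):
    return int(rng.beta(succ, fail).argmax())

for name, algo in [("eps-greedy", eps_greedy), ("UCB1", ucb1), ("TS", thompson)]:
    scores = [run_bandit(algo, rng=np.random.default_rng(s)) for s in range(200)]
    print(name, round(float(np.mean(scores)), 1))
```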
The Bayesian methods, Gittins index and Thompson sampling, take advantage of the distribution p_M; and we provide these methods with the true distribution. For each method with hyperparameters, we maximize the score with a separate grid search for each of the experimental settings. The hyperparameters used for TRPO are shown in the appendix. | 1611.02779#15 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 16 | The results are summarized in Table 1. Learning curves for various settings are shown in Figure 2. We observe that our approach achieves performance that is almost as good as the reference methods, which were (human) designed specifically to perform well on multi-armed bandit problems. It is worth noting that the published algorithms are mostly designed to minimize asymptotic regret (rather than finite-horizon regret), hence there tends to be a little bit of room to outperform them in the finite-horizon settings.
Table 1: MAB Results. Each grid cell records the total reward averaged over 1000 different instances of the bandit problem. We consider k ∈ {5, 10, 50} bandits and n ∈ {10, 100, 500} episodes of interaction. We highlight the best-performing algorithms in each setup according to the computed mean, and we also highlight the other algorithms in that row whose performance is not significantly different from the best one (determined by a one-sided t-test with p = 0.05). | 1611.02779#16 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 17 |
Setup | Random | Gittins | TS | OTS | UCB1 | ε-Greedy | Greedy | RL²
n=10, k=5 | 5.0 | 6.6 | 5.7 | 6.5 | 6.7 | 6.6 | 6.6 | 6.7
n=10, k=10 | 5.0 | 6.6 | 5.5 | 6.2 | 6.7 | 6.6 | 6.6 | 6.7
n=10, k=50 | 5.1 | 6.5 | 5.2 | 5.5 | 6.6 | 6.5 | 6.5 | 6.8
n=100, k=5 | 49.9 | 78.3 | 74.7 | 77.9 | 78.0 | 75.4 | 74.8 | 78.7
n=100, k=10 | 49.9 | 82.8 | 76.7 | 81.4 | 82.4 | 77.4 | 77.1 | 83.5
n=100, k=50 | 49.8 | 85.2 | 64.5 | 67.7 | 84.3 | 78.3 | 78.0 | 84.9
n=500, k=5 | 249.8 | 405.8 | 402.0 | 406.7 | 405.8 | 388.2 | 380.6 | 401.6
n=500, k=10 | 249.0 | 437.8 | 429.5 | 438.9 | 437.1 | 408.0 | 395.0 | 432.5
n=500, k=50 | 249.6 | 463.7 | 427.2 | 437.6 | 457.6 | 413.6 | 402.8 | 438.9
| 1611.02779#17 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 19 | Figure 2: RL² learning curves for multi-armed bandits. Performance is normalized such that Gittins index scores 1, and random policy scores 0.
We observe that there is a noticeable gap between Gittins index and RL² in the most challenging scenario, with 50 arms and 500 episodes. This raises the question of whether better architectures or better (slow) RL algorithms should be explored. To determine the bottleneck, we trained the same policy architecture using supervised learning, using the trajectories generated by the Gittins index approach as training data. We found that the learned policy, when executed in test domains, achieved the same level of performance as the Gittins index approach, suggesting that there is room for improvement by using better RL algorithms.
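A sketch of that diagnostic is given below: fit a recurrent policy by supervised learning (cross-entropy) to the actions of a reference strategy. The expert_rollout function is a stand-in for generating (input, action) sequences with the Gittins index policy; it is not the authors' pipeline, and the sizes are assumed.

```python
import torch
from torch import nn

def behavior_clone(expert_rollout, num_actions=5, in_dim=7, steps=500):
    # GRU encoder plus linear head, trained to imitate the expert's actions.
    gru = nn.GRU(in_dim, 64, batch_first=True)
    head = nn.Linear(64, num_actions)
    opt = torch.optim.Adam(list(gru.parameters()) + list(head.parameters()), 1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        inputs, expert_actions = expert_rollout()   # (B, T, in_dim), (B, T)
        out, _ = gru(inputs)
        logits = head(out)                          # (B, T, num_actions)
        loss = loss_fn(logits.reshape(-1, num_actions), expert_actions.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gru, head

# Example call with random stand-in data (replace with Gittins-index rollouts).
dummy = lambda: (torch.randn(8, 20, 7), torch.randint(0, 5, (8, 20)))
behavior_clone(dummy)
```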
3.2 TABULAR MDPs
The bandit problem provides a natural and simple setting to investigate whether the policy learns to trade off between exploration and exploitation. However, the problem itself involves no sequential decision making, and does not fully characterize the challenges in solving MDPs. Hence, we perform further experiments using randomly generated tabular MDPs, where there is a finite number of possible states and actions, small enough that the transition probability distribution can be explicitly given as a table. We compare our approach with the following methods:
• Random: the agent chooses an action uniformly at random for each time step; | 1611.02779#19 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 20 | • Random: the agent chooses an action uniformly at random for each time step;
• PSRL (Strens, 2000; Osband et al., 2013): this is a direct generalization of Thompson sampling to MDPs, where at the beginning of each episode, we sample an MDP from the posterior distribution, and take actions according to the optimal policy for the entire episode. Similarly, we include an optimistic variant (OPSRL), which has also been explored in Osband & Van Roy (2016).
• BEB (Kolter & Ng, 2009): this is a model-based optimistic algorithm that adds an exploration bonus to (thus far) infrequently visited states and actions.
• UCRL2 (Jaksch et al., 2010): this algorithm computes, at each iteration, the optimal policy against an optimistic MDP under the current belief, using an extended value iteration procedure.
• ε-Greedy: this algorithm takes actions optimal against the MAP estimate according to the current posterior, which is updated once per episode.
• Greedy: a special case of ε-Greedy with ε = 0.
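A hedged sketch of PSRL for this tabular setting follows: keep a Dirichlet posterior over transitions and a Gaussian posterior over mean rewards, sample an MDP at the start of each episode, solve it by finite-horizon value iteration, and act greedily. The priors and sizes mirror the construction stated after Table 2, but this is an illustration, not the paper's code.

```python
import numpy as np

S, A, T = 10, 5, 10
rng = np.random.default_rng(0)

def sample_posterior(trans_counts, rew_sum, rew_n):
    # Dirichlet posterior over next-state distributions, shape (S, A, S).
    P = np.stack([[rng.dirichlet(trans_counts[s, a]) for a in range(A)]
                  for s in range(S)])
    # Normal(1, 1) prior on mean rewards with unit observation noise.
    post_mean = (1.0 + rew_sum) / (1.0 + rew_n)
    post_std = np.sqrt(1.0 / (1.0 + rew_n))
    R = rng.normal(post_mean, post_std)              # (S, A)
    return P, R

def solve(P, R):
    # Backward induction for horizon T; returns a greedy policy per timestep.
    V = np.zeros(S)
    policy = np.zeros((T, S), dtype=int)
    for t in reversed(range(T)):
        Q = R + P @ V                                # (S, A)
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

trans_counts = np.ones((S, A, S))      # flat Dirichlet prior
rew_sum, rew_n = np.zeros((S, A)), np.zeros((S, A))
P_sampled, R_sampled = sample_posterior(trans_counts, rew_sum, rew_n)
policy = solve(P_sampled, R_sampled)
print(policy.shape)                    # (10, 10): one action per step and state
```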
# Table 2: Random MDP Results | 1611.02779#20 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
| Setup | Random | PSRL | OPSRL | UCRL2 | BEB | ε-Greedy | Greedy | RL² |
|-------|--------|------|-------|-------|-----|----------|--------|-----|
| n=10  | 100.1  | 138.1 | 144.1 | 146.6 | 150.2 | 132.8 | 134.8 | 156.2 |
| n=25  | 250.2  | 408.8 | 425.2 | 424.1 | 427.8 | 377.3 | 368.8 | 445.7 |
| n=50  | 499.7  | 904.4 | 930.7 | 918.9 | 917.8 | 823.3 | 769.3 | 936.1 |
| n=75  | 749.9  | 1417.1 | 1449.2 | 1427.6 | 1422.6 | 1293.9 | 1172.9 | 1428.8 |
| n=100 | 999.4  | 1939.5 | 1973.9 | 1942.1 | 1935.1 | 1778.2 | 1578.5 | 1913.7 |
The distribution over MDPs is constructed with |S| = 10, |A| = 5. The rewards follow a Gaussian distribution with unit variance, and the mean parameters are sampled independently from Normal(1, 1). The transitions are sampled from a flat Dirichlet distribution. This construction matches the commonly used prior in Bayesian RL methods. We set the horizon for each episode to be T = 10, and an episode always starts on the first state.
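As an illustration of this construction, a minimal sketch of sampling one such MDP and rolling out a single episode is given below; the function names are ours and the snippet is a simplified stand-in for the actual experiment pipeline.

```python
import numpy as np

def sample_tabular_mdp(n_states=10, n_actions=5, rng=None):
    """Draw one MDP from the prior above: reward means ~ Normal(1, 1),
    transition rows ~ flat Dirichlet."""
    rng = np.random.default_rng() if rng is None else rng
    reward_means = rng.normal(loc=1.0, scale=1.0, size=(n_states, n_actions))
    transitions = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    return reward_means, transitions

def run_episode(reward_means, transitions, policy, horizon=10, rng=None):
    """Roll out one episode of length T = 10, always starting from state 0.
    Observed rewards are Gaussian with unit variance around the sampled means."""
    rng = np.random.default_rng() if rng is None else rng
    state, total_reward = 0, 0.0
    for _ in range(horizon):
        action = policy(state)
        total_reward += rng.normal(reward_means[state, action], 1.0)
        state = rng.choice(len(transitions), p=transitions[state, action])
    return total_reward
```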
Figure 3: RL² learning curves for tabular MDPs. Performance is normalized such that OPSRL scores 1, and the random policy scores 0.
The results are summarized in Table 2, and the learning curves are shown in Figure 3. We follow the same evaluation procedure as in the bandit case. We experiment with n ∈ {10, 25, 50, 75, 100}. For fewer episodes, our approach surprisingly outperforms existing methods by a large margin. The advantage is reversed as n increases, suggesting that the reinforcement learning problem in the outer loop becomes more challenging to solve. We think that the advantage for small n comes from the need for more aggressive exploitation: there are 140 degrees of freedom to estimate in order to characterize the MDP, and by the 10th episode we will not yet have enough samples to form a good estimate of the entire dynamics. By directly optimizing the RNN in this setting, our approach can cope with this shortage of samples and learns to exploit sooner than the reference algorithms.
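For reference, the normalization used in Figure 3 maps raw average total rewards onto a scale where the random policy scores 0 and OPSRL scores 1; a small helper with hypothetical argument names is shown below.

```python
def normalize_score(raw, random_score, opsrl_score):
    """Rescale a raw average total reward so that 0 is the random policy and 1 is OPSRL."""
    return (raw - random_score) / (opsrl_score - random_score)

# Example using the n=10 column of Table 2 to place RL^2 on the Figure 3 scale.
print(normalize_score(156.2, random_score=100.1, opsrl_score=144.1))
```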
# 3.3 VISUAL NAVIGATION
The previous two tasks both only involve very low-dimensional state spaces. To evaluate the feasibility of scaling up RL², we further experiment with a challenging vision-based task, where the agent is asked to navigate a randomly generated maze to find a randomly placed target². The agent receives a +1 reward when it reaches the target, -0.001 when it hits the wall, and -0.04 per time step to encourage it to reach targets faster. It can interact with the maze for multiple episodes, during which the maze structure and target position are held fixed. The optimal strategy is to explore the maze efficiently during the first episode, and after locating the target, act optimally against the current maze and target based on the collected information. An illustration of the task is given in Figure 4.

Figure 4: Visual navigation. (a) Sample observation; (b) layout of the 5 x 5 maze in (a); (c) layout of a 9 x 9 maze. The target block is shown in red, and occupies an entire grid cell in the maze layout.

Visual navigation alone is a challenging task for reinforcement learning. The agent only receives very sparse rewards during training, and does not have the primitives for efficient exploration at the beginning of training. It also needs to make efficient use of memory to decide how it should explore the space, without forgetting
about where it has already explored. Previously, Oh et al. (2016) have studied similar vision-based navigation tasks in Minecraft. However, they use higher-level actions for efficient navigation. Similar high-level actions in our task would each require around 5 low-level actions combined in the right way. In contrast, our RL² agent needs to learn these higher-level actions from scratch.

We use a simple training setup, where we use small mazes of size 5 x 5, with 2 episodes of interaction, each with horizon up to 250. Here the size of the maze is measured by the number of grid cells along each wall in a discrete representation of the maze. During each trial, we sample 1 out of 1000 randomly generated configurations of map layout and target positions. During testing, we evaluate on 1000 separately generated configurations. In addition, we also study its extrapolation behavior along two axes, by (1) testing on large mazes
of size 9 x 9 (see Figure 4c) and (2) running the agent for up to 5 episodes in both small and large mazes. For the large maze, we also increase the horizon per episode by 4x due to the increased size of the maze.

Table 3: Results for visual navigation. These metrics are computed using the best run among all runs shown in Figure 5. In 3c, we measure the proportion of mazes where the trajectory length in the second episode does not exceed the trajectory length in the first episode.

(a) Average length of successful trajectories

| Episode | Small | Large |
|---------|-------|-------|
| 1 | 1.3 | 180.1 ± 6.0 |
| 2 | 0.9 | 151.8 ± 5.9 |
| 3 | 1.0 | 169. ± 6.3 |
| 4 | 1.1 | 162. ± 6.4 |
| 5 | 1.1 | 169.3 ± 6.5 |

(b) %Success

| Episode | Small | Large |
|---------|-------|-------|
| 1 | 99.3% | 97.1% |
| 2 | 99.6% | 96.7% |
| 3 | 99.7% | 95.8% |
| 4 | 99.4% | 95.6% |
| 5 | 99.6% | 96.1% |

(c) %Improved

| Small | Large |
|-------|-------|
| 91.7% | 71.4% |

²Videos for the task are available at https://goo.gl/rDDBpb.
Figure 5: RL² learning curves for visual navigation. Each curve shows a different random initialization of the RNN weights. Performance varies greatly across different initializations.

The results are summarized in Table 3, and the learning curves are shown in Figure 5. We observe that there is a significant reduction in trajectory lengths between the first two episodes in both the smaller and larger mazes, suggesting that the agent has learned how to use information from past episodes. It also achieves reasonable extrapolation behavior in further episodes by maintaining its performance, although there is a small drop in the rate of success in the larger mazes. We also observe that on larger mazes, the ratio of improved trajectories is lower, likely because the agent has not learned how to act optimally in the larger mazes. Still, even on the small mazes, the agent does not learn to perfectly reuse prior
information. An illustration of the agent's behavior is shown in Figure 6. The intended behavior, which occurs most frequently, as shown in 6a and 6b, is that the agent should remember the target's location, and utilize it to act optimally in the second episode. However, occasionally the agent forgets about where the target was, and continues to explore in the second episode, as shown in 6c and 6d. We believe that better reinforcement learning techniques used as the outer-loop algorithm will improve these results in the future.

Figure 6: Visualization of the agent's behavior. (a) Good behavior, 1st episode; (b) good behavior, 2nd episode; (c) bad behavior, 1st episode; (d) bad behavior, 2nd episode. In each scenario, the agent starts at the center of the blue block, and the goal is to reach anywhere in the red block.

# 4 RELATED WORK

The concept of using prior experience to speed up reinforcement learning algorithms has been explored in the past in various forms. Earlier studies have investigated automatic tuning of hyperparameters,
such as learning rate and temperature (Ishii et al., 2002; Schweighofer & Doya, 2003), as a form of meta-learning. Wilson et al. (2007) use hierarchical Bayesian methods to maintain a posterior over possible models of dynamics, and apply optimistic Thompson sampling according to the posterior. Many works in hierarchical reinforcement learning propose to extract reusable skills from previous tasks to speed up exploration in new tasks (Singh, 1992; Perkins et al., 1999). We refer the reader to Taylor & Stone (2009) for a more thorough survey on the multi-task and transfer learning aspects.
More recently, Fu et al. (2015) propose a model-based approach on top of iLQG with unknown dynamics (Levine & Abbeel, 2014), which uses samples collected from previous tasks to build a neural network prior for the dynamics, and can perform one-shot learning on new, but related tasks thanks to reduced sample complexity. There has been a growing interest in using deep neural networks for multi-task learning and transfer learning (Parisotto et al., 2015; Rusu et al., 2015; 2016a; Devin et al., 2016; Rusu et al., 2016b).
In the broader context of machine learning, there has been a lot of interest in one-shot learning for object classification (Vilalta & Drissi, 2002; Fei-Fei et al., 2006; Larochelle et al., 2008; Lake et al., 2011; Koch, 2015). Our work draws inspiration from a particular line of work (Younger et al., 2001; Santoro et al., 2016; Vinyals et al., 2016), which formulates meta-learning as an optimization problem, and can thus be optimized end-to-end via gradient descent. While these works apply to the supervised learning setting, our work applies in the more general reinforcement learning setting. Although the reinforcement learning setting is more challenging, the resulting behavior is far richer: our agent must not only learn to exploit existing information, but also learn to explore, a problem that is usually not a factor in supervised learning. Another line of work (Hochreiter et al., 2001; Younger et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2016) studies meta-learning over the optimization process. There, the meta-learner makes explicit updates to a parametrized model. In comparison, we do not use a directly parametrized policy; instead, the recurrent neural network agent acts as the meta-learner and the resulting policy simultaneously.
Our formulation essentially constructs a partially observable MDP (POMDP) which is solved in the outer loop, where the underlying MDP is unobserved by the agent. This reduction of an unknown MDP to a POMDP can be traced back to dual control theory (Feldbaum, 1960), where "dual" refers to the fact that one is controlling both the state and the state estimate. Feldbaum pointed out that the solution can in principle be computed with dynamic programming, but doing so is usually impractical. POMDPs with such structure have also been studied under the name "mixed observability MDPs" (Ong et al., 2010). However, the method proposed there suffers from the usual challenges of solving POMDPs in high dimensions.
# 5 DISCUSSION
This paper suggests a different approach for designing better reinforcement learning algorithms: instead of acting as the designers ourselves, learn the algorithm end-to-end using standard reinforcement learning techniques. That is, the "fast" RL algorithm is a computation whose state is stored in the RNN activations, and the RNN's weights are learned by a general-purpose "slow" reinforcement learning algorithm. Our method, RL², has demonstrated competence comparable with theoretically optimal algorithms in small-scale settings. We have further shown its potential to scale to high-dimensional tasks.
In the experiments, we have identified opportunities to improve upon RL²: the outer-loop reinforcement learning algorithm was shown to be an immediate bottleneck, and we believe that for settings with extremely long horizons, better architecture may also be required for the policy. Although we have used generic methods and architectures for the outer-loop algorithm and the policy, doing this also ignores the underlying episodic structure. We expect algorithms and policy architectures that exploit the problem structure to significantly boost the performance.
# ACKNOWLEDGMENTS
We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers.
# REFERENCES
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.
Jean-Yves Audibert and Rémi Munos. Introduction to bandits: Algorithms and theory. ICML Tutorial on bandits, 2011.

Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397-422, 2002.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.

Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.

Jhelum Chakravorty and Aditya Mahajan. Multi-armed bandits, Gittins index, and its calculation. Methods and Applications of Statistics in Clinical Trials: Planning, Analysis, and Inferential Methods, 2:416-435, 2013.
Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pp. 2249-2257, 2011.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Marc Deisenroth and Carl E Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465-472, 2011.
Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088, 2016.

Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594-611, 2006.
A. A. Feldbaum. Dual control theory. I. Avtomatika i Telemekhanika, 21(9):1240-1249, 1960.

Justin Fu, Sergey Levine, and Pieter Abbeel. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. arXiv preprint arXiv:1509.06841, 2015.
Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, Aviv Tamar, et al. Bayesian reinforcement learning: a survey. World Scientific, 2015.
John Gittins, Kevin Glazebrook, and Richard Weber. Multi-armed bandit allocation indices. John Wiley & Sons, 2011.
John C Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society. Series B (Methodological), pp. 148-177, 1979.
Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L Lewis, and Xiaoshi Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems, pp. 3338-3346, 2014.
Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2944-2952, 2015.
Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87-94. Springer, 2001.
Shin Ishii, Wako Yoshida, and Junichiro Yoshimoto. Control of exploitation-exploration meta-parameter in reinforcement learning. Neural Networks, 15(4):665-687, 2002.

Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.
Rafal Józefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 2342-2350, 2015. URL http://jmlr.org/proceedings/papers/v37/jozefowicz15.html.

Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
Gregory Koch. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.
J Zico Kolter and Andrew Y Ng. Near-bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 513-520. ACM, 2009.
Brenden M Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, volume 172, pp. 2, 2011.
Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. Zero-data learning of new tasks. In AAAI, volume 1, pp. 3, 2008.
Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pp. 1071-1079, 2014.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1-40, 2016.
Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Benedict C May, Nathan Korda, Anthony Lee, and David S Leslie. Optimistic Bayesian sampling in contextual-bandit problems. Journal of Machine Learning Research, 13(Jun):2069-2106, 2012.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. arXiv preprint arXiv:1605.09128, 2016.
Sylvie CW Ong, Shao Wei Png, David Hsu, and Wee Sun Lee. Planning under uncertainty for robotic tasks with mixed observability. The International Journal of Robotics Research, 29(8):1053-1068, 2010.

Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning? arXiv preprint arXiv:1607.00215, 2016.

Ian Osband, Dan Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, pp. 3003-3011, 2013.

Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015.
Theodore J Perkins, Doina Precup, et al. Using options for knowledge transfer in reinforcement learning. University of Massachusetts, Amherst, MA, USA, Tech. Rep, 1999.
Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.

Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016a.
Andrei A Rusu, Matej Vecerik, Thomas Rothérl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-real robot learning from pixels with progressive nets. arXiv preprint arXiv: 1610.04286, 2016b. | 1611.02779#41 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In International Conference on Learning Representations (ICLR 2016), 2016.
Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5-9, 2003.
Satinder Pal Singh. Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning, 8(3-4):323-339, 1992.
Malcolm Strens. A Bayesian framework for reinforcement learning. In ICML, pp. 943-950, 2000.
Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633â1685, 2009. | 1611.02779#42 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 43 | Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633â1685, 2009.
William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77-95, 2002.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. arXiv preprint arXiv:1606.04080, 2016.
Niklas Wahlström, Thomas B Schön, and Marc Peter Deisenroth. From pixels to torques: Policy learning with deep dynamical models. arXiv preprint arXiv:1502.02251, 2015.
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pp. 2746-2754, 2015.
Peter Whittle. Optimization over time. John Wiley & Sons, Inc., 1982. | 1611.02779#43 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 44 | Peter Whittle. Optimization over time. John Wiley & Sons, Inc., 1982.
Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: a hierarchical bayesian approach. In Proceedings of the 24th international conference on Machine learning, pp. 1015-1022. ACM, 2007.
A Steven Younger, Sepp Hochreiter, and Peter R Conwell. Meta-learning with backpropagation. In Neural Networks, 2001. Proceedings. IJCNN'01. International Joint Conference on, volume 3. IEEE, 2001.
# APPENDIX
# A DETAILED EXPERIMENT SETUP
Common to all experiments: as mentioned in Section 2.2, we use placeholder values when necessary. For example, at t = 0 there is no previous action, reward, or termination flag. Since all of our experiments use discrete actions, we use the embedding of the action 0 as a placeholder for actions, and 0 for both the rewards and termination flags. To form the input to the GRU, we use the values for the rewards and termination flags as-is, and embed the states and actions as described separately below for each experiment. These values are then concatenated together to form the joint embedding. (A short illustrative sketch is given below.) | 1611.02779#44 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
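The "Common to all experiments" paragraph above describes how the per-timestep input to the GRU is assembled from placeholders and embeddings. The following is a minimal illustrative sketch in NumPy, not the authors' code; the embedding sizes, table names, and helper functions are assumptions.

```python
import numpy as np

N_ACTIONS = 5                              # assumed number of discrete actions
N_STATES = 10                              # assumed number of discrete states (tabular MDP case)
ACT_EMB = np.random.randn(N_ACTIONS, 8)    # stand-in for a learned action-embedding table

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def gru_input(state, prev_action, prev_reward, prev_done, t):
    # At t = 0 there is no previous action, reward, or termination flag:
    # use the embedding of action 0 and zeros as placeholders.
    if t == 0:
        a_emb = ACT_EMB[0]
        r, d = 0.0, 0.0
    else:
        a_emb = ACT_EMB[prev_action]
        r, d = float(prev_reward), float(prev_done)
    # For the stateless bandit setting the state embedding is a constant 0 vector;
    # for tabular MDPs it is a one-hot vector, as described in A.1 and A.2.
    s_emb = one_hot(state, N_STATES)
    # Rewards and termination flags are used as-is; everything is concatenated.
    return np.concatenate([s_emb, a_emb, [r], [d]])
```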
For the neural network architecture, we use rectified linear units throughout the experiments as the hidden activation, and we apply weight normalization without data-dependent initialization (Salimans & Kingma, 2016) to all weight matrices. The hidden-to-hidden weight matrix uses an orthogonal initialization (Saxe et al., 2013), and all other weight matrices use Xavier initialization (Glorot & Bengio, 2010). We initialize all bias vectors to 0. Unless otherwise mentioned, the policy and the baseline use separate neural networks with the same architecture until the final layer, where the number of outputs differs.
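A rough sketch of this initialization scheme with assumed layer sizes (again an illustration, not the paper's implementation):

```python
import numpy as np

def orthogonal(n):
    # Orthogonal init for the square hidden-to-hidden matrix (Saxe et al., 2013).
    a = np.random.randn(n, n)
    q, _ = np.linalg.qr(a)
    return q

def xavier(shape):
    # Xavier/Glorot uniform init for all other weight matrices.
    limit = np.sqrt(6.0 / (shape[0] + shape[1]))
    return np.random.uniform(-limit, limit, size=shape)

n_in, n_hidden = 32, 256                   # assumed sizes
W_hh = orthogonal(n_hidden)                # hidden-to-hidden: orthogonal
V = xavier((n_in, n_hidden))               # input-to-hidden: Xavier
b = np.zeros(n_hidden)                     # biases start at 0

# Weight normalization (Salimans & Kingma, 2016): reparameterize W = g * V / ||V||.
# Without data-dependent initialization, g starts at ||V||, so W initially equals V.
g = np.linalg.norm(V, axis=0)
W_xh = V * (g / np.linalg.norm(V, axis=0))
```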
All experiments are implemented using TensorFlow (Abadi et al., 2016) and rllab (Duan et al., 2016). We use the implementations of classic algorithms provided by the TabulaRL package (Osband, 2016).
A.1 MULTI-ARMED BANDITS
The parameters for TRPO are shown in Table 1. Since the environment is stateless, we use a constant embedding 0 as a placeholder in place of the states, and a one-hot embedding for the actions.
Table 1: Hyperparameters for TRPO: multi-armed bandits
Discount 0.99 | GAE λ 0.3 | Policy Iters up to 1000 | #GRU Units 256 | Mean KL 0.01 | Batch size 250000
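For readability, the same Table 1 settings can be written as a plain configuration dictionary (a hypothetical layout, shown only to make the flattened table easier to read; the key names are not from the paper):

```python
# Hyperparameters for TRPO on multi-armed bandits (Table 1).
trpo_bandit_config = {
    "discount": 0.99,
    "gae_lambda": 0.3,
    "policy_iters": 1000,     # "up to 1000"
    "n_gru_units": 256,
    "mean_kl": 0.01,          # trust-region constraint on the mean KL divergence
    "batch_size": 250000,
}
```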
A.2. TABULAR MDPs | 1611.02779#45 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 46 | A.2. TABULAR MDPs
The parameters for TRPO are shown in Table 2. We use a one-hot embedding for the states and a one-hot embedding for the actions, which are then concatenated together.
Table 2: Hyperparameters for TRPO: tabular MDPs
Discount 0.99 | GAE λ 0.3 | Policy Iters up to 10000 | #GRU Units 256 | Mean KL 0.01 | Batch size 250000
A.3 VISUAL NAVIGATION
The parameters for TRPO are shown in Table 3. For this task, we use a neural network to form the joint embedding. We rescale the images to have width 40 and height 30 with RGB channels preserved, and we recenter the RGB values to lie within range [-1, 1]. Then, this preprocessed
image is passed through 2 convolution layers, each with 16 filters of size 5 x 5 and stride 2. The action is first embedded into a 256-dimensional vector where the embedding is learned, and then concatenated with the flattened output of the final convolution layer. The joint vector is then fed to a fully connected layer with 256 hidden units. | 1611.02779#46 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 47 | Unlike previous experiments, we let the policy and the baseline share the same neural network. We found this to improve the stability of training baselines and also the end performance of the policy, possibly due to regularization effects and better learned features imposed by weight sharing. Similar weight-sharing techniques have also been explored in (Mnih et al., 2016).
Table 3: Hyperparameters for TRPO: visual navigation
Discount 0.99 | GAE λ 0.99 | Policy Iters up to 5000 | #GRU Units 256 | Mean KL 0.01 | Batch size 50000
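A minimal sketch tying together the visual-navigation embedding described in A.3 (two 5×5, stride-2 convolutions with 16 filters, a learned 256-dimensional action embedding concatenated with the flattened convolution output, and a 256-unit fully connected layer) and the policy/baseline weight sharing noted above. It is written in PyTorch purely for brevity; the paper's experiments used TensorFlow and rllab, and all layer names and shapes here are assumptions.

```python
import torch
import torch.nn as nn

class NavEmbedding(nn.Module):
    """Illustrative stand-in for the visual-navigation network (not the authors' code)."""
    def __init__(self, n_actions=4, rnn_size=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),   # (3, 30, 40) -> (16, 13, 18)
            nn.Conv2d(16, 16, kernel_size=5, stride=2), nn.ReLU(),  # -> (16, 5, 7)
        )
        self.act_emb = nn.Embedding(n_actions, 256)                 # learned 256-d action embedding
        self.fc = nn.Linear(16 * 5 * 7 + 256, rnn_size)             # joint vector -> 256 hidden units
        self.gru = nn.GRUCell(rnn_size + 2, rnn_size)               # +2 for reward and termination flag
        # Shared torso: the policy and the baseline read the same hidden state (weight sharing).
        self.policy_head = nn.Linear(rnn_size, n_actions)
        self.value_head = nn.Linear(rnn_size, 1)

    def forward(self, img, prev_action, prev_reward, prev_done, h):
        # img: (B, 3, 30, 40) with values rescaled to [-1, 1]; prev_reward/prev_done: (B, 1) floats.
        x = self.conv(img).flatten(1)
        joint = torch.relu(self.fc(torch.cat([x, self.act_emb(prev_action)], dim=1)))
        h = self.gru(torch.cat([joint, prev_reward, prev_done], dim=1), h)
        return self.policy_head(h), self.value_head(h), h
```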
# REFERENCES
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv: 1604.06778, 2016.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249-256, 2010. | 1611.02779#47 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.02779 | 48 | Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249-256, 2010.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv: 1602.01783, 2016.
Ian Osband. TabulaRL. https://github.com/iosband/TabulaRL, 2016.
Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
14 | 1611.02779#48 | RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning | Deep reinforcement learning (deep RL) has been successful in learning
sophisticated behaviors automatically; however, the learning process requires a
huge number of trials. In contrast, animals can learn new tasks in just a few
trials, benefiting from their prior knowledge about the world. This paper seeks
to bridge this gap. Rather than designing a "fast" reinforcement learning
algorithm, we propose to represent it as a recurrent neural network (RNN) and
learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in
the weights of the RNN, which are learned slowly through a general-purpose
("slow") RL algorithm. The RNN receives all information a typical RL algorithm
would receive, including observations, actions, rewards, and termination flags;
and it retains its state across episodes in a given Markov Decision Process
(MDP). The activations of the RNN store the state of the "fast" RL algorithm on
the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both
small-scale and large-scale problems. On the small-scale side, we train it to
solve randomly generated multi-arm bandit problems and finite MDPs. After
RL$^2$ is trained, its performance on new MDPs is close to human-designed
algorithms with optimality guarantees. On the large-scale side, we test RL$^2$
on a vision-based navigation task and show that it scales up to
high-dimensional problems. | http://arxiv.org/pdf/1611.02779 | Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | 14 pages. Under review as a conference paper at ICLR 2017 | null | cs.AI | 20161109 | 20161110 | [
{
"id": "1511.06295"
}
] |
1611.01989 | 0 | 7 1 0 2
# DEEPCODER: LEARNING TO WRITE PROGRAMS
Matej Balog†, Department of Engineering, University of Cambridge
Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow Microsoft Research
# ABSTRACT
We develop a first line of attack for solving programming competition-style problems from input-output examples using deep learning. The approach is to train a neural network to predict properties of the program that generated the outputs from the inputs. We use the neural network's predictions to augment search techniques from the programming languages community, including enumerative search and an SMT-based solver. Empirically, we show that our approach leads to an order of magnitude speedup over the strong non-augmented baselines and a Recurrent Neural Network approach, and that we are able to solve problems of difficulty comparable to the simplest problems on programming competition websites.
# INTRODUCTION | 1611.01989#0 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 0 | 7 1 0 2
# UNROLLED GENERATIVE ADVERSARIAL NETWORKS
Luke Metzâ Google Brain [email protected]
Ben Pooleâ Stanford University [email protected]
David Pfau Google DeepMind [email protected]
Jascha Sohl-Dickstein Google Brain [email protected]
# ABSTRACT
We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
# INTRODUCTION | 1611.02163#0 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 0 | 7 1 0 2
Q-PROP: SAMPLE-EFFICIENT POLICY GRADIENT WITH AN OFF-POLICY CRITIC
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine
[email protected], [email protected], [email protected], [email protected], [email protected]
University of Cambridge, UK; Max Planck Institute for Intelligent Systems, Tübingen, Germany; Google Brain, USA; DeepMind, UK; UC Berkeley, USA; Uber AI Labs, USA
# ABSTRACT | 1611.02247#0 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 1 | # INTRODUCTION
A dream of artiï¬cial intelligence is to build systems that can write computer programs. Recently, there has been much interest in program-like neural network models (Graves et al., 2014; Weston et al., 2015; Kurach et al., 2015; Joulin & Mikolov, 2015; Grefenstette et al., 2015; Sukhbaatar et al., 2015; Neelakantan et al., 2016; Kaiser & Sutskever, 2016; Reed & de Freitas, 2016; Zaremba et al., 2016; Graves et al., 2016), but none of these can write programs; that is, they do not generate human-readable source code. Only very recently, Riedel et al. (2016); Bunel et al. (2016); Gaunt et al. (2016) explored the use of gradient descent to induce source code from input-output examples via differentiable interpreters, and Ling et al. (2016) explored the generation of source code from unstructured text descriptions. However, Gaunt et al. (2016) showed that differentiable interpreter- based program induction is inferior to discrete search-based techniques used by the programming languages community. We are then left with the question of how to make progress on program induction using machine learning techniques. | 1611.01989#1 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 1 | # INTRODUCTION
The use of deep neural networks as generative models for complex data has made great advances in recent years. This success has been achieved through a surprising diversity of training losses and model architectures, including denoising autoencoders (Vincent et al., 2010), variational au- toencoders (Kingma & Welling, 2013; Rezende et al., 2014; Gregor et al., 2015; Kulkarni et al., 2015; Burda et al., 2015; Kingma et al., 2016), generative stochastic networks (Alain et al., 2015), diffusion probabilistic models (Sohl-Dickstein et al., 2015), autoregressive models (Theis & Bethge, 2015; van den Oord et al., 2016a;b), real non-volume preserving transformations (Dinh et al., 2014; 2016), Helmholtz machines (Dayan et al., 1995; Bornschein et al., 2015), and Generative Adversar- ial Networks (GANs) (Goodfellow et al., 2014).
1.1 GENERATIVE ADVERSARIAL NETWORKS | 1611.02163#1 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 1 | # ABSTRACT
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment – RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
# INTRODUCTION | 1611.02205#1 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 1 | # ABSTRACT
Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to de- rive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state- of-the-art on-policy and off-policy methods, on OpenAI Gymâs MuJoCo continu- ous control environments. | 1611.02247#1 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 2 | In this work, we propose two main ideas: (1) learn to induce programs; that is, use a corpus of program induction problems to learn strategies that generalize across problems, and (2) integrate neural network architectures with search-based techniques rather than replace them.
In more detail, we can contrast our approach to existing work on differentiable interpreters. In dif- ferentiable interpreters, the idea is to deï¬ne a differentiable mapping from source code and inputs to outputs. After observing inputs and outputs, gradient descent can be used to search for a pro- gram that matches the input-output examples. This approach leverages gradient-based optimization, which has proven powerful for training neural networks, but each synthesis problem is still solved independentlyâsolving many synthesis problems does not help to solve the next problem. | 1611.01989#2 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 2 | 1.1 GENERATIVE ADVERSARIAL NETWORKS
While most deep generative models are trained by maximizing log likelihood or a lower bound on log likelihood, GANs take a radically different approach that does not require inference or explicit calculation of the data likelihood. Instead, two models are used to solve a minimax game: a genera- tor which samples data, and a discriminator which classiï¬es the data as real or generated. In theory these models are capable of modeling an arbitrarily complex probability distribution. When using the optimal discriminator for a given class of generators, the original GAN proposed by Goodfellow et al. minimizes the Jensen-Shannon divergence between the data distribution and the generator, and extensions generalize this to a wider class of divergences (Nowozin et al., 2016; Sonderby et al., 2016; Poole et al., 2016). | 1611.02163#2 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 2 | Controlling artiï¬cial agents using only raw high-dimensional input data such as image or sound is a difï¬cult and important task in the ï¬eld of Reinforcement Learning (RL). Recent breakthroughs in the ï¬eld allow its utilization in real-world applications such as autonomous driving (Shalev-Shwartz et al., 2016), navigation (Bischoff et al., 2013) and more. Agent interaction with the real world is usually either expensive or not feasible, as the real world is far too complex for the agent to perceive. Therefore in practice the interaction is simulated by a virtual environment which receives feedback on a decision made by the algorithm. Traditionally, games were used as a RL environment, dating back to Chess (Campbell et al., 2002), Checkers (Schaeffer et al., 1992), backgammon (Tesauro, 1995) and the more recent Go (Silver et al., 2016). Modern games often present problems and tasks which are highly correlated with real-world problems. For example, an agent that masters a racing game, by observing a simulated driverâs view screen as input, may be usefull for the development of an autonomous driver. For high-dimensional input, the leading benchmark is the Arcade Learning Environment | 1611.02205#2 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.01989 | 3 | We argue that machine learning can provide signiï¬cant value towards solving Inductive Program Synthesis (IPS) by re-casting the problem as a big data problem. We show that training a neural network on a large number of generated IPS problems to predict cues from the problem description can help a search-based technique. In this work, we focus on predicting an order on the program space and show how to use it to guide search-based techniques that are common in the programming languages community. This approach has three desirable properties: ï¬rst, we transform a difï¬cult search problem into a supervised learning problem; second, we soften the effect of failures of the neural network by searching over program space rather than relying on a single prediction; and third, the neural networkâs predictions are used to guide existing program synthesis systems, allowing us to use and improve on the best solvers from the programming languages community. Empirically, we
†Also affiliated with Max Planck Institute for Intelligent Systems, Tübingen, Germany. Work done while the author was an intern at Microsoft Research.
1
Published as a conference paper at ICLR 2017
show orders-of-magnitude improvements over optimized standard search techniques and a Recurrent Neural Network-based approach to the problem. | 1611.01989#3 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 3 | The ability to train extremely ï¬exible generating functions, without explicitly computing likeli- hoods or performing inference, and while targeting more mode-seeking divergences as made GANs extremely successful in image generation (Odena et al., 2016; Salimans et al., 2016; Radford et al., 2015), and image super resolution (Ledig et al., 2016). The ï¬exibility of the GAN framework has also enabled a number of successful extensions of the technique, for instance for structured predic- tion (Reed et al., 2016a;b; Odena et al., 2016), training energy based models (Zhao et al., 2016), and combining the GAN loss with a mutual information loss (Chen et al., 2016).
*Work done as a member of the Google Brain Residency program (g.co/brainresidency). †Work completed as part of a Google Brain internship.
1
Published as a conference paper at ICLR 2017 | 1611.02163#3 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 3 | a simulated driverâs view screen as input, may be usefull for the development of an autonomous driver. For high-dimensional input, the leading benchmark is the Arcade Learning Environment (ALE) (Bellemare et al., 2013) which provides a common interface to dozens of Atari 2600 games, each presents a different challenge. ALE provides an extensive benchmarking plat- form, allowing a controlled experiment setup for algorithm evaluation and comparison. The main challenge posed by ALE is to successfully play as many Atari 2600 games as possible (i.e., achiev- ing a score higher than an expert human player) without providing the algorithm any game-speciï¬c information (i.e., using the same input available to a human - the game screen and score). A key work to tackle this problem is the Deep Q-Networks algorithm (Mnih et al., 2015), which made a breakthrough in the ï¬eld of Deep Reinforcement Learning by achieving human level performance on 29 out of 49 games. In this work we present a new environment â the Retro Learning Environ- ment (RLE). RLE sets new challenges by providing a uniï¬ed interface for Atari 2600 games as well as more advanced gaming consoles. As a start we focused on the Super | 1611.02205#3 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 3 | Model-free reinforcement learning is a promising approach for solving arbitrary goal-directed se- quential decision-making problems with only high-level reward signals and no supervision. It has recently been extended to utilize large neural network policies and value functions, and has been shown to be successful in solving a range of difficult problems (Mnih et al., 2015; Schulman et al., 2015; Lillicrap et al., 2016; Silver et al., 2016; Gu et al., 2016b; Mnih et al., 2016). Deep neural network parametrization minimizes the need for manual feature and policy engineering, and allows learning end-to-end policies mapping from high-dimensional inputs, such as images, directly to ac- tions. However, such expressive parametrization also introduces a number of practical problems. Deep reinforcement learning algorithms tend to be sensitive to hyperparameter settings, often re- quiring extensive hyperparameter sweeps to find good values. Poor hyperparameter settings tend to produce unstable or non-convergent learning. Deep RL algorithms also tend to exhibit high sample complexity, often to the point of being impractical to run on real physical systems. Although a num- ber of recent techniques have sought to alleviate some of these issues (Hasselt, 2010; Mnih et al., 2015; Schulman et al., 2015; 2016), these recent advances still provide only a partial solution to the instability and sample complexity challenges. | 1611.02247#3 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 4 | 1
Published as a conference paper at ICLR 2017
show orders-of-magnitude improvements over optimized standard search techniques and a Recurrent Neural Network-based approach to the problem.
In summary, we deï¬ne and instantiate a framework for using deep learning for program synthesis problems like ones appearing on programming competition websites. Our concrete contributions are:
1. deï¬ning a programming language that is expressive enough to include real-world program- ming problems while being high-level enough to be predictable from input-output exam- ples;
2. models for mapping sets of input-output examples to program properties; and
3. experiments that show an order of magnitude speedup over standard program synthesis techniques, which makes this approach feasible for solving problems of similar difï¬culty as the simplest problems that appear on programming competition websites.
2 BACKGROUND ON INDUCTIVE PROGRAM SYNTHESIS
We begin by providing background on Inductive Program Synthesis, including a brief overview of how it is typically formulated and solved in the programming languages community.
The Inductive Program Synthesis (IPS) problem is the following: given input-output examples, produce a program that has behavior consistent with the examples. | 1611.01989#4 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
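The chunk above (1611.01989#4) defines the IPS problem: given input-output examples, produce a program whose behavior is consistent with them. As a minimal illustration of that consistency requirement only — the candidate programs and the `is_consistent` helper below are hypothetical, not DeepCoder's DSL or implementation — a candidate is accepted exactly when it reproduces every example:

```python
# Minimal sketch of the IPS consistency check: a candidate program is
# acceptable only if it maps every example input to its expected output.
# The candidate programs here are hypothetical illustrations.

def is_consistent(program, examples):
    """Return True iff `program` maps every example input to its expected output."""
    return all(program(inp) == out for inp, out in examples)

# Input-output examples: sort a list in descending order.
examples = [
    ([3, 1, 2], [3, 2, 1]),
    ([5, 4, 9], [9, 5, 4]),
]

candidates = [
    lambda xs: sorted(xs),                # inconsistent (ascending order)
    lambda xs: sorted(xs, reverse=True),  # consistent with both examples
]

for i, prog in enumerate(candidates):
    print(f"candidate {i}: consistent = {is_consistent(prog, examples)}")
```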
1611.02163 | 4 |
In practice, however, GANs suffer from many issues, particularly during training. One common failure mode involves the generator collapsing to produce only a single sample or a small family of very similar samples. Another involves the generator and discriminator oscillating during training, rather than converging to a fixed point. In addition, if one agent becomes much more powerful than the other, the learning signal to the other agent becomes useless, and the system does not learn. To train GANs many tricks must be employed, such as careful selection of architectures (Radford et al., 2015), minibatch discrimination (Salimans et al., 2016), and noise injection (Salimans et al., 2016; Sonderby et al., 2016). Even with these tricks the set of hyperparameters for which training is successful is generally very small in practice. | 1611.02163#4 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 5 | The Inductive Program Synthesis (IPS) problem is the following: given input-output examples, produce a program that has behavior consistent with the examples.
Building an IPS system requires solving two problems. First, the search problem: to find consistent programs we need to search over a suitable set of possible programs. We need to define the set (i.e., the program space) and search procedure. Second, the ranking problem: if there are multiple programs consistent with the input-output examples, which one do we return? Both of these problems are dependent on the specifics of the problem formulation. Thus, the first important decision in formulating an approach to program synthesis is the choice of a Domain Specific Language. | 1611.01989#5 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 5 | Once converged, the generative models produced by the GAN training procedure normally do not cover the whole distribution (Dumoulin et al., 2016; Che et al., 2016), even when targeting a mode-covering divergence such as KL. Additionally, because it is intractable to compute the GAN training loss, and because approximate measures of performance such as Parzen window estimates suffer from major flaws (Theis et al., 2016), evaluation of GAN performance is challenging. Currently, human judgement of sample quality is one of the leading metrics for evaluating GANs. In practice this metric does not take into account mode dropping if the number of modes is greater than the number of samples one is visualizing. In fact, the mode dropping problem generally helps visual sample quality as the model can choose to focus on only the most common modes. These common modes correspond, by definition, to more typical samples. Additionally, the generative model is able to allocate more expressive power to the modes it does cover than it would if it attempted to cover all modes.
1.2 DIFFERENTIATING THROUGH OPTIMIZATION | 1611.02163#5 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 5 |
System (SNES). Out of the five SNES games we tested using state-of-the-art algorithms, only one was able to outperform an expert human player. As an additional feature, RLE supports research of multi-agent reinforcement learning (MARL) tasks (Buşoniu et al., 2010). We utilize this feature by training and evaluating the agents against each other, rather than against a pre-configured in-game AI. We conducted several experiments with this new feature and discovered that agents tend to learn how to overcome their current opponent rather than generalize the game being played. However, if an agent is trained against an ensemble of different opponents, its robustness increases. The main contributions of the paper are as follows:
⢠Introducing a novel RL environment with signiï¬cant challenges and an easy agent evalu- ation technique (enabling agents to compete against each other) which could lead to new and more advanced RL algorithms.
⢠A new method to train an agent by enabling it to train against several opponents, making the ï¬nal policy more robust.
⢠Encapsulating several different challenges to a single RL environment.
2 RELATED WORK
2.1 ARCADE LEARNING ENVIRONMENT | 1611.02205#5 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 5 | directly maximize the cumulative future returns with respect to the policy. While these algorithms can offer unbiased (or nearly unbiased, as discussed in Section 2.1) estimates of the gradient, they rely on Monte Carlo estimation and often suffer from high variance. To cope with high variance gradient estimates and difficult optimization landscapes, a number of techniques have been proposed, including constraining the change in the policy at each gradient step (Kakade, 2001; Peters et al., 2010) and mixing value-based back-ups to trade off bias and variance in Monte Carlo return estimates (Schulman et al., 2015). However, these methods all tend to require very large numbers of samples to deal with the high variance when estimating gradients of high-dimensional neural network policies. The crux of the problem with policy gradient methods is that they can only effectively use on-policy samples, which means that they require collecting large amounts of on-policy experiences after each parameter update to the policy. This makes them very sample intensive. Off-policy methods, such as Q-learning (Watkins & Dayan, 1992; Sutton et al., 1999; Mnih et al., 2015; Gu et al., 2016b) | 1611.02247#5 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
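The chunk above (1611.02247#5) discusses the high variance of Monte Carlo policy gradient estimators. For reference, a hedged sketch of the likelihood-ratio (REINFORCE-style) estimator those remarks refer to, written here in its usual textbook form rather than quoted from the paper:

```latex
% Likelihood-ratio policy gradient with a Monte Carlo advantage estimate
% \hat{A}(s_t, a_t); the expectation is approximated with on-policy samples,
% which is the source of the variance discussed above.
\nabla_\theta J(\theta)
  = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t}
      \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}(s_t, a_t)\right]
```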
1611.01989 | 6 | Domain Specific Languages (DSLs). DSLs are programming languages that are suitable for a specialized domain but are more restrictive than full-featured programming languages. For example, one might disallow loops or other control flow, and only allow string data types and a small number of primitive operations like concatenation. Most of program synthesis research focuses on synthesizing programs in DSLs, because full-featured languages like C++ enlarge the search space and complicate synthesis. Restricted DSLs can also enable more efficient special-purpose search algorithms. For example, if a DSL only allows concatenations of substrings of an input string, a dynamic programming algorithm can efficiently search over all possible programs (Polozov & Gulwani, 2015). The choice of DSL also affects the difficulty of the ranking problem. For example, in a DSL without if statements, the same algorithm is applied to all inputs, reducing the number of programs consistent with any set of input-output examples, and thus the ranking problem becomes easier. Of course, the restrictiveness of the chosen DSL also determines which problems the system can solve at all. | 1611.01989#6 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
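The chunk above (1611.01989#6) claims that a DSL restricted to concatenations of input substrings admits an efficient dynamic-programming search. The following toy sketch of that idea — an illustration only, not FlashMeta's actual algorithm — counts, for each prefix of the output, the ways to build it by appending a substring of the input:

```python
# Toy dynamic program over a DSL of "concatenate substrings of the input".
# dp[i] = number of ways to produce out[:i] as a concatenation of non-empty
# substrings of `inp`. Illustrative only; not FlashMeta.

def count_concat_programs(inp: str, out: str) -> int:
    substrings = {inp[a:b] for a in range(len(inp)) for b in range(a + 1, len(inp) + 1)}
    dp = [0] * (len(out) + 1)
    dp[0] = 1  # empty output: a single (empty) program
    for i in range(1, len(out) + 1):
        for j in range(i):
            if dp[j] and out[j:i] in substrings:
                dp[i] += dp[j]
    return dp[len(out)]

print(count_concat_programs("hello world", "hellohello"))
```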
1611.02163 | 6 | 1.2 DIFFERENTIATING THROUGH OPTIMIZATION
Many optimization schemes, including SGD, RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014), consist of a sequence of differentiable updates to parameters. Gradients can be backpropagated through unrolled optimization updates in a similar fashion to backpropagation through a recurrent neural network. The parameters output by the optimizer can thus be included, in a differentiable way, in another objective (Maclaurin et al., 2015). This idea was first suggested for minimax problems in (Pearlmutter & Siskind, 2008), while (Zhang & Lesser, 2010) provided a theoretical analysis and experimental results on differentiating through a single step of gradient ascent for simple matrix games. Differentiating through unrolled optimization was first scaled to deep networks in (Maclaurin et al., 2015), where it was used for hyperparameter optimization. More recently, (Belanger & McCallum, 2015; Han et al., 2016; Andrychowicz et al., 2016) backpropagate through optimization procedures in contexts unrelated to GANs or minimax games.
In this work we address the challenges of unstable optimization and mode collapse in GANs by unrolling optimization of the discriminator objective during training.
2 METHOD | 1611.02163#6 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
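The chunk above (1611.02163#6) describes backpropagating through unrolled discriminator updates. A hedged sketch of how such unrolling is typically formalized (paraphrased from the standard setup rather than quoted from the paper): starting from the current discriminator parameters, take K gradient-ascent steps on f, treat the result as a differentiable function of both players' parameters, and define the generator's surrogate objective through it.

```latex
% K steps of gradient ascent on the discriminator, with step sizes \eta^k,
% treated as a differentiable function of (\theta_G, \theta_D):
\theta_D^{0} = \theta_D, \qquad
\theta_D^{k+1} = \theta_D^{k} + \eta^{k}\,
  \frac{\partial f(\theta_G, \theta_D^{k})}{\partial \theta_D^{k}}
% Surrogate objective used to update the generator:
f_K(\theta_G, \theta_D) = f\!\left(\theta_G, \theta_D^{K}(\theta_G, \theta_D)\right)
```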
1611.02205 | 6 | • Encapsulating several different challenges in a single RL environment.
2 RELATED WORK
2.1 ARCADE LEARNING ENVIRONMENT
The Arcade Learning Environment is a software framework designed for the development of RL algorithms, by playing Atari 2600 games. The interface provided by ALE allows the algorithms to select an action and receive the Atari screen and a reward in every step. The action is equivalent to a human's joystick button combination and the reward is the difference between the scores at time stamps t and t − 1. The diversity of games for Atari provides a solid benchmark since different games have significantly different goals. Atari 2600 has over 500 games; currently over 70 of them are implemented in ALE and are commonly used for algorithm comparison.
2.2 INFINITE MARIO | 1611.02205#6 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
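The chunk above (1611.02205#6) describes the ALE interface: the agent picks an action, receives the screen, and gets a reward equal to the score difference between consecutive steps. A hedged pseudo-interface sketch of that loop follows; the `Emulator` class and its methods are hypothetical placeholders, not ALE's or RLE's actual API.

```python
# Hypothetical emulator loop illustrating the interface described above:
# act -> observe screen -> reward = score_t - score_{t-1}. Not the real ALE/RLE API.
import random

class Emulator:
    """Toy stand-in for an ALE/RLE-style emulator."""
    def __init__(self):
        self.score = 0

    def act(self, action):
        self.score += random.randint(0, 10)       # pretend the action changed the score
        screen = [[0] * 160 for _ in range(210)]  # dummy frame
        return screen, self.score

emu = Emulator()
prev_score = 0
for t in range(5):
    action = random.randrange(18)                 # joystick/button combination index
    screen, score = emu.act(action)
    reward = score - prev_score                   # reward as defined in the chunk above
    prev_score = score
    print(f"t={t} action={action} reward={reward}")
```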
1611.02247 | 6 | policy methods, such as Q-learning (Watkins & Dayan, 1992; Sutton et al., 1999; Mnih et al., 2015; Gu et al., 2016b) and off-policy actor-critic methods (Lever, 2014; Lillicrap et al., 2016), can instead use all samples, including off-policy samples, by adopting temporal difference learning with experience replay. Such methods are much more sample-efficient. However, convergence of these algorithms is in general not guaranteed with non-linear function approximators, and practical convergence and instability issues typically mean that extensive hyperparameter tuning is required to attain good results. | 1611.02247#6 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
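The chunk above (1611.02247#6) refers to temporal-difference learning with experience replay. A hedged sketch of the standard objective such off-policy critics minimize over replayed transitions (s, a, r, s'), written in its textbook form rather than quoted from the paper:

```latex
% One-step TD objective for an action-value critic Q_w, minimized over
% transitions (s, a, r, s') sampled from a replay buffer \mathcal{B};
% w' denotes slowly-updated target-network parameters.
\min_{w}\;
\mathbb{E}_{(s,a,r,s') \sim \mathcal{B}}
\left[\Big(Q_w(s,a) - \big(r + \gamma \max_{a'} Q_{w'}(s', a')\big)\Big)^{2}\right]
```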
1611.01989 | 7 | Search Techniques. There are many techniques for searching for programs consistent with input-output examples. Perhaps the simplest approach is to define a grammar and then enumerate all derivations of the grammar, checking each one for consistency with the examples. This approach can be combined with pruning based on types and other logical reasoning (Feser et al., 2015). While simple, these approaches can be implemented efficiently, and they can be surprisingly effective.
In restricted domains such as the concatenation example discussed above, special-purpose algorithms can be used. FlashMeta (Polozov & Gulwani, 2015) describes a framework for DSLs which allow decomposition of the search problem, e.g., where the production of an output string from an input string can be reduced to finding a program for producing the first part of the output and concatenating it with a program for producing the latter part of the output string.
Another class of systems is based on Satisfiability Modulo Theories (SMT) solving. SMT combines SAT-style search with theories like arithmetic and inequalities, with the benefit that theory-dependent subproblems can be handled by special-purpose solvers. For example, a special-purpose
| 1611.01989#7 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
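The chunk above (1611.01989#7) describes the simplest search strategy: enumerate candidate programs and check each one for consistency with the examples. A minimal sketch over a toy list-processing DSL follows; it is illustrative only and is not DeepCoder's DSL, its grammar, or its pruning machinery.

```python
# Length-bounded enumerative search over a toy DSL of unary list operations.
# Programs are sequences of primitives; each candidate is checked against the
# input-output examples. Illustrative only -- not DeepCoder's actual DSL.
from itertools import product

PRIMITIVES = {
    "reverse": lambda xs: list(reversed(xs)),
    "sort":    lambda xs: sorted(xs),
    "drop1":   lambda xs: xs[1:],
}

def run(program, xs):
    for name in program:
        xs = PRIMITIVES[name](xs)
    return xs

def enumerate_programs(examples, max_len=3):
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, inp) == out for inp, out in examples):
                return program
    return None

examples = [([3, 1, 2], [3, 2, 1]), ([4, 9, 5], [9, 5, 4])]
print(enumerate_programs(examples))  # finds ('sort', 'reverse')
```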
1611.02163 | 7 | In this work we address the challenges of unstable optimization and mode collapse in GANs by unrolling optimization of the discriminator objective during training.
2 METHOD
2.1 GENERATIVE ADVERSARIAL NETWORKS
The GAN learning problem is to find the optimal parameters $\theta_G^*$ for a generator function $G(z; \theta_G)$ in a minimax objective,
$\theta_G^* = \arg\min_{\theta_G} \max_{\theta_D} f(\theta_G, \theta_D)$  (1)
$\qquad\;\, = \arg\min_{\theta_G} f\big(\theta_G, \theta_D^*(\theta_G)\big)$  (2)
$\theta_D^*(\theta_G) = \arg\max_{\theta_D} f(\theta_G, \theta_D)$  (3)
where f is commonly chosen to be | 1611.02163#7 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
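The chunk above (1611.02163#7) breaks off at "where f is commonly chosen to be". Without filling in the paper's exact continuation, the standard choice in the original GAN formulation (Goodfellow et al., 2014), which the surrounding discussion presupposes, is:

```latex
% Standard GAN value function: the discriminator D scores real data
% x ~ p_data and generated samples G(z) with z ~ p(z).
f(\theta_G, \theta_D) =
\mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x; \theta_D)\right]
+ \mathbb{E}_{z \sim p(z)}\!\left[\log\big(1 - D(G(z; \theta_G); \theta_D)\big)\right]
```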
1611.02205 | 7 | 2.2 INFINITE MARIO
Infinite Mario (Togelius et al., 2009) is a remake of the classic Super Mario game in which levels are randomly generated. On these levels the Mario AI Competition was held. During the competition, several algorithms were trained on Infinite Mario and their performances were measured in terms of the number of stages completed. As opposed to ALE, training is not based on the raw screen data but rather on an indication of Mario's (the player's) location and objects in its surroundings. This environment no longer poses a challenge for state-of-the-art algorithms. Its main shortcoming lies in the fact that it provides only a single game to be learnt. Additionally, the environment provides hand-crafted features, extracted directly from the simulator, to the algorithm. This allowed the use of planning algorithms that highly outperform any learning-based algorithm.
2.3 OPENAI GYM | 1611.02205#7 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 7 | In order to make deep reinforcement learning practical as a tool for tackling real-world tasks, we must develop methods that are both data efficient and stable. In this paper, we propose Q-Prop, a step in this direction that combines the advantages of on-policy policy gradient methods with the efficiency of off-policy learning. Unlike prior approaches for off-policy learning, which either introduce bias (Sutton et al., 1999; Silver et al., 2014) or increase variance (Precup, 2000; Levine & Koltun, 2013; Munos et al., 2016), Q-Prop can reduce the variance of the gradient estimator without adding bias; unlike prior approaches for critic-based variance reduction (Schulman et al., 2016) which fit the value function on-policy, Q-Prop learns the action-value function off-policy. The core idea is to use the first-order Taylor expansion of the critic as a control variate, resulting in an analytical gradient term through the critic and a Monte Carlo policy gradient term consisting of the residuals in advantage approximations. The method helps unify policy gradient and actor-critic methods: it can be seen as using the off-policy critic to reduce variance in | 1611.02247#7 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
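The chunk above (1611.02247#7) describes using the first-order Taylor expansion of the critic as a control variate. A hedged sketch of the resulting estimator, written from the general control-variate construction the text describes rather than quoted from the paper: expand the off-policy critic $Q_w$ around the policy's mean action $\mu_\theta(s)$, subtract its advantage $\bar{A}_w$ from the Monte Carlo advantage estimate $\hat{A}$, and add back the analytic expectation of the subtracted term.

```latex
% Advantage of the first-order Taylor expansion of the critic around the
% mean action \mu_\theta(s), used as a control variate:
\bar{A}_w(s, a) = \nabla_a Q_w(s, a)\big|_{a = \mu_\theta(s)}\,\big(a - \mu_\theta(s)\big)
% Gradient estimate: a Monte Carlo term on the residual advantage plus an
% analytic term through the critic (the added-back expectation).
\nabla_\theta J(\theta) \approx
\mathbb{E}_{\pi_\theta}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)
  \big(\hat{A}(s, a) - \bar{A}_w(s, a)\big)\right]
+ \mathbb{E}_{\rho_\pi}\!\left[\nabla_a Q_w(s, a)\big|_{a=\mu_\theta(s)}\,
  \nabla_\theta \mu_\theta(s)\right]
```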
1611.01989 | 8 |
solver can easily find integers x, y such that x < y and y < −100 hold, whereas an enumeration strategy may need to consider many values before satisfying the constraints. Many program synthesis engines based on SMT solvers exist, e.g., Sketch (Solar-Lezama, 2008) and Brahma (Gulwani et al., 2011). They convert the semantics of a DSL into a set of constraints between variables representing the program and the input-output values, and then call an SMT solver to find a satisfying setting of the program variables. This approach shines when special-purpose reasoning can be leveraged, but complex DSLs can lead to very large constraint problems where constructing and manipulating the constraints can be a lot slower than an enumerative approach.
Finally, stochastic local search can be employed to search over program space, and there is a long history of applying genetic algorithms to this problem. One of the most successful recent examples is the STOKE super-optimization system (Schkufza et al., 2016), which uses stochastic local search to find assembly programs that have the same semantics as an input program but execute faster.
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
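The chunk above (1611.01989#8) illustrates SMT solving with the constraints x < y and y < −100. A small sketch using the Z3 SMT solver's Python bindings (assuming the `z3-solver` package is installed); this is a generic illustration of theory-based constraint solving, not the Sketch or Brahma systems:

```python
# SMT-style search for the integers mentioned in the chunk above:
# find x, y with x < y and y < -100. Requires the `z3-solver` package.
from z3 import Ints, Solver, sat

x, y = Ints("x y")
solver = Solver()
solver.add(x < y, y < -100)

if solver.check() == sat:
    model = solver.model()
    print("x =", model[x], "y =", model[y])  # any satisfying assignment
else:
    print("unsatisfiable")
```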