Dataset columns: id (string, 12–15 chars), title (string, 8–162 chars), content (string, 1–17.6k chars), prechunk_id (string, 0–15 chars), postchunk_id (string, 0–15 chars), arxiv_id (string, 10 chars), references (list of length 1).
1611.05763#51
Learning to reinforcement learn
Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On Bayesian upper confidence bounds for bandit problems. In Proc. of Int'l Conf. on Artificial Intelligence and Statistics, AISTATS, 2012a. Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In Algorithmic Learning Theory - 23rd International Conference, pages 199–213, 2012b. Mehdi Khamassi, Stéphane Lallée, Pierre Enel, Emmanuel Procyk, and Peter F Dominey.
1611.05763#50
1611.05763#52
1611.05763
[ "1611.01578" ]
1611.05763#52
Learning to reinforcement learn
Robot cognitive control with a neurophysiologically inspired reinforcement learning model. Frontiers in Neurorobotics, 5:1, 2011. Mehdi Khamassi, Pierre Enel, Peter Ford Dominey, and Emmanuel Procyk. Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters. Prog Brain Res, 202:441–464, 2013. Kunikazu Kobayashi, Hiroyuki Mizoue, Takashi Kuremoto, and Masanao Obayashi.
1611.05763#51
1611.05763#53
1611.05763
[ "1611.01578" ]
1611.05763#53
Learning to reinforcement learn
A meta-learning method based on temporal difference error. In International Conference on Neural Information Processing, pages 530–537. Springer, 2009. Wouter Kool, Fiery A Cushman, and Samuel J Gershman. When does model-based control pay off? PLoS Comput Biol, 12(8):e1005090, 2016. Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016. Tor Lattimore and Rémi Munos.
1611.05763#52
1611.05763#54
1611.05763
[ "1611.01578" ]
1611.05763#54
Learning to reinforcement learn
Bounded regret for finite-armed structured bandits. In Advances in Neural Information Processing Systems 27, pages 550–558, 2014. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015. Daeyeol Lee and Xiao-Jing Wang. Mechanisms for stochastic decision making in the primate frontal cortex: Single-neuron recording and circuit modeling. Neuroeconomics: Decision making and the brain, pages 481–
1611.05763#53
1611.05763#55
1611.05763
[ "1611.01578" ]
1611.05763#55
Learning to reinforcement learn
501, 2009. Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016. Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016. URL http://arxiv.org/abs/1611.03673. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, et al. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu.
1611.05763#54
1611.05763#56
1611.05763
[ "1611.01578" ]
1611.05763#56
Learning to reinforcement learn
Asynchronous methods for deep reinforcement learning. In Proc. of Int'l Conf. on Machine Learning, ICML, 2016. Danil V Prokhorov, Lee A Feldkamp, and Ivan Yu Tyukin. Adaptive behavior with fixed weights in RNN: an overview. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), pages 2018–2023, 2002. Robert A Rescorla, Allan R Wagner, et al. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. Classical conditioning II: Current research and theory, 2:64–99, 1972.
1611.05763#55
1611.05763#57
1611.05763
[ "1611.01578" ]
1611.05763#57
Learning to reinforcement learn
Dan Russo and Benjamin Van Roy. Learning to optimize via information-directed sampling. In Advances in Neural Information Processing Systems 27, pages 1583–1591, 2014. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In Proceedings of The 33rd International Conference on Machine Learning, pages 1842–1850, 2016. Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering.
1611.05763#56
1611.05763#58
1611.05763
[ "1611.01578" ]
1611.05763#58
Learning to reinforcement learn
Simple principles of metalearning. Technical report, SEE, 1996. Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5–9, 2003. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, et al. Mastering the game of go with deep neural networks and tree search.
1611.05763#57
1611.05763#59
1611.05763
[ "1611.01578" ]
1611.05763#59
Learning to reinforcement learn
Nature, 529(7587):484–489, 2016. David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, and Thomas Degris. The predictron: End-to-end learning and planning. Submitted to Int'l Conference on Learning Representations, ICLR, 2017. Alireza Soltani, Daeyeol Lee, and Xiao-Jing Wang.
1611.05763#58
1611.05763#60
1611.05763
[ "1611.01578" ]
1611.05763#60
Learning to reinforcement learn
Neural mechanism for stochastic behaviour during a competitive game. Neural Networks, 19(8):1075–1090, 2006. Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998. Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. arXiv preprint arXiv:1602.02867v2, 2016. William R Thompson.
1611.05763#59
1611.05763#61
1611.05763
[ "1611.01578" ]
1611.05763#61
Learning to reinforcement learn
On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933. Sebastian Thrun and Lorien Pratt. Learning to learn: Introduction and overview. In Learning to learn, pages 3–17. Springer, 1998. Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Joel Leibo, Hubert Soyer, Dharshan Kumaran, and Matthew Botvinick. Meta-reinforcement learning: a bridge between prefrontal and dopaminergic function. In Cosyne Abstracts, 2017. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014. A Steven Younger, Peter R Conwell, and Neil E Cotter. Fixed-weight on-line learning. IEEE Transactions on Neural Networks, 10(2):272–283, 1999. Barret Zoph and Quoc V Le.
1611.05763#60
1611.05763#62
1611.05763
[ "1611.01578" ]
1611.05763#62
Learning to reinforcement learn
Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
1611.05763#61
1611.05763
[ "1611.01578" ]
1611.05397#0
Reinforcement Learning with Unsupervised Auxiliary Tasks
arXiv:1611.05397v1 [cs.LG] 16 Nov 2016 # REINFORCEMENT LEARNING WITH UNSUPERVISED AUXILIARY TASKS Max Jaderberg*, Volodymyr Mnih*, Wojciech Marian Czarnecki*, Tom Schaul, Joel Z Leibo, David Silver & Koray Kavukcuoglu DeepMind, London, UK {jaderberg,vmnih,lejlot,schaul,jzl,davidsilver,korayk}@google.com # ABSTRACT
1611.05397#1
1611.05397
[ "1605.02097" ]
1611.05397#1
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task.
1611.05397#0
1611.05397#2
1611.05397
[ "1605.02097" ]
1611.05397#2
Reinforcement Learning with Unsupervised Auxiliary Tasks
Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks, leading to a mean speedup in learning of 10×. Natural and artificial agents live in a stream of sensorimotor data. At each time step t, the agent receives observations ot and executes actions at. These actions influence the future course of the sensorimotor stream. In this paper we develop agents that learn to predict and control this stream, by solving a host of reinforcement learning problems, each focusing on a distinct feature of the sensorimotor stream. Our hypothesis is that an agent that can flexibly control its future experiences will also be able to achieve any goal with which it is presented, such as maximising its future rewards. The classic reinforcement learning paradigm focuses on the maximisation of extrinsic reward. However, in many interesting domains, extrinsic rewards are only rarely observed. This raises questions of what and how to learn in their absence. Even if extrinsic rewards are frequent, the sensorimotor stream contains an abundance of other possible learning targets. Traditionally, unsupervised learning attempts to reconstruct these targets, such as the pixels in the current or subsequent frame. It is typically used to accelerate the acquisition of a useful representation. In contrast, our learning objective is to predict and control features of the sensorimotor stream, by treating them as pseudo-rewards for reinforcement learning. Intuitively, this set of tasks is more closely matched with the agent's long-term goals, potentially leading to more useful representations. Consider a baby that learns to maximise the cumulative amount of red that it observes. To correctly predict the optimal value, the baby must understand how to increase "redness" by various means, including manipulation (bringing a red object closer to the eyes); locomotion (moving in front of a red object); and communication (crying until the parents bring a red object). These behaviours are likely to recur for many other goals that the baby may subsequently encounter. No understanding of these behaviours is required to simply reconstruct the redness of current or subsequent images. Our architecture uses reinforcement learning to approximate both the optimal policy and optimal value function for many different pseudo-rewards. It also makes other auxiliary predictions that serve to focus the agent on important aspects of the task.
1611.05397#1
1611.05397#3
1611.05397
[ "1605.02097" ]
1611.05397#3
Reinforcement Learning with Unsupervised Auxiliary Tasks
These include the long-term goal of predicting cumulative extrinsic reward as well as short-term predictions of extrinsic reward. To learn more efficiently, our agents use an experience replay mechanism to provide additional updates [footnote: * Joint first authors. Ordered alphabetically by first name.] [Figure 1 diagram: panels (a) Base A3C Agent, (b) Pixel Control, (c) Reward Prediction, (d) Value Function Replay; see the caption below.] Figure 1:
1611.05397#2
1611.05397#4
1611.05397
[ "1605.02097" ]
1611.05397#4
Reinforcement Learning with Unsupervised Auxiliary Tasks
Overview of the UNREAL agent. (a) The base agent is a CNN-LSTM agent trained on-policy with the A3C loss (Mnih et al., 2016). Observations, rewards, and actions are stored in a small replay buffer which encapsulates a short history of agent experience. This experience is used by auxiliary learning tasks. (b) Pixel Control – auxiliary policies Qaux are trained to maximise change in pixel intensity of different regions of the input. The agent CNN and LSTM are used for this task along with an auxiliary deconvolution network. This auxiliary control task requires the agent to learn how to control the environment. (c) Reward Prediction – given three recent frames, the network must predict the reward that will be obtained in the next unobserved timestep. This task network uses instances of the agent CNN, and is trained on reward-biased sequences to remove the perceptual sparsity of rewards. (d) Value Function Replay – further training of the value function using the agent network is performed to promote faster value iteration. Further visualisation of the agent can be found in https://youtu.be/Uz-zGYrYEjA
1611.05397#3
1611.05397#5
1611.05397
[ "1605.02097" ]
1611.05397#5
Reinforcement Learning with Unsupervised Auxiliary Tasks
to the critics. Just as animals dream about positively or negatively rewarding events more frequently (Schacter et al., 2012), our agents preferentially replay sequences containing rewarding events. Importantly, both the auxiliary control and auxiliary prediction tasks share the convolutional neural network and LSTM that the base agent uses to act. By using this jointly learned representation, the base agent learns to optimise extrinsic reward much faster and, in many cases, achieves better policies at the end of training. This paper brings together the state-of-the-art Asynchronous Advantage Actor-Critic (A3C) framework (Mnih et al., 2016), outlined in Section 2, with auxiliary control tasks and auxiliary reward tasks, defined
1611.05397#4
1611.05397#6
1611.05397
[ "1605.02097" ]
1611.05397#6
Reinforcement Learning with Unsupervised Auxiliary Tasks
in Section 3.1 and Section 3.2 respectively. These auxiliary tasks do not require any extra supervision or signals from the environment beyond those used by the vanilla A3C agent. The result is our UNsupervised REinforcement and Auxiliary Learning (UNREAL) agent (Section 3.4). In Section 4 we apply our UNREAL agent to a challenging set of 3D-vision based domains known as the Labyrinth (Mnih et al., 2016), learning solely from the raw RGB pixels of a first-person view. Our agent significantly outperforms the baseline agent using vanilla A3C, even when the baseline was augmented with an unsupervised reconstruction loss, in terms of speed of learning, robustness to hyperparameters, and final performance. The result is an agent which on average achieves 87% of expert human-normalised score, compared to 54% with A3C, and is on average 10× faster than A3C. Our UNREAL agent also significantly outperforms the previous state-of-the-art in the Atari domain. # 1 RELATED WORK A variety of reinforcement learning architectures have focused on learning temporal abstractions, such as options (Sutton et al., 1999b), with policies that may maximise pseudo-rewards (Konidaris & Barreto, 2009; Silver & Ciosek, 2012). The emphasis here has typically been on the development of temporal abstractions that facilitate high-level learning and planning. In contrast, our agents do not make any direct use of the pseudo-reward maximising policies that they learn (although this is
1611.05397#5
1611.05397#7
1611.05397
[ "1605.02097" ]
1611.05397#7
Reinforcement Learning with Unsupervised Auxiliary Tasks
an interesting direction for future research). Instead, they are used solely as auxiliary objectives for developing a more effective representation. The Horde architecture (Sutton et al., 2011) also applied reinforcement learning to identify value functions for a multitude of distinct pseudo-rewards. However, this architecture was not used for representation learning; instead each value function was trained separately using distinct weights. The UVFA architecture (Schaul et al., 2015a) is a factored representation of a continuous set of optimal value functions, combining features of the state with an embedding of the pseudo-reward function. Initial work on UVFAs focused primarily on architectural choices and learning rules for these continuous embeddings. A pre-trained UVFA representation was successfully transferred to novel pseudo-rewards in a simple task. Similarly, the successor representation (Dayan, 1993; Barreto et al., 2016; Kulkarni et al., 2016) factors a continuous set of expected value functions for a fixed policy, by combining an expectation over features of the state with an embedding of the pseudo-reward function. Successor representations have been used to transfer representations from one pseudo-reward to another (Barreto et al., 2016) or to different scales of reward (Kulkarni et al., 2016). Another, related line of work involves learning models of the environment (Schmidhuber, 2010; Xie et al., 2015; Oh et al., 2015). Although learning environment models as auxiliary tasks could improve RL agents (e.g. Lin & Mitchell (1992); Li et al. (2015)), this has not yet been shown to work in rich visual environments. More recently, auxiliary prediction tasks have been studied in 3D reinforcement learning environments. Lample & Chaplot (2016) showed that predicting internal features of the emulator, such as the presence of an enemy on the screen, is beneficial.
1611.05397#6
1611.05397#8
1611.05397
[ "1605.02097" ]
1611.05397#8
Reinforcement Learning with Unsupervised Auxiliary Tasks
Mirowski et al. (2016) study auxiliary prediction of depth in the context of navigation. # 2 BACKGROUND We assume the standard reinforcement learning setting where an agent interacts with an environment over a number of discrete time steps. At time $t$ the agent receives an observation $o_t$ along with a reward $r_t$ and produces an action $a_t$. The agent's state $s_t$ is a function of its experience up until time $t$, $s_t = f(o_1, r_1, a_1, \ldots, o_t, r_t)$. The $n$-step return $R_{t:t+n}$ at time $t$ is defined as the discounted sum of rewards, $R_{t:t+n} = \sum_{i=1}^{n} \gamma^i r_{t+i}$. The value function is the expected return from state $s$, $V^{\pi}(s) = \mathbb{E}\left[R_{t:\infty} \mid s_t = s, \pi\right]$, when actions are selected according to a policy $\pi(a|s)$. The action-value function $Q^{\pi}(s,a) = \mathbb{E}\left[R_{t:\infty} \mid s_t = s, a_t = a, \pi\right]$ is the expected return following action $a$ from state $s$. Value-based reinforcement learning algorithms, such as Q-learning (Watkins, 1989), or its deep learning instantiations DQN (Mnih et al., 2015) and asynchronous Q-learning (Mnih et al., 2016), approximate the action-value function $Q(s,a;\theta)$ using parameters $\theta$, and then update parameters to minimise the mean-squared error, for example by optimising an $n$-step lookahead loss $\mathcal{L}_Q = \mathbb{E}\left[\left(R_{t:t+n} + \gamma^n \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta)\right)^2\right]$, where $\theta^-$ are previous parameters and the optimisation is with respect to $\theta$. Policy gradient algorithms adjust the policy to maximise the expected reward, $\mathcal{L}_\pi = -\mathbb{E}_{s \sim \pi}\left[R_{1:\infty}\right]$, using the gradient $\partial \mathcal{L}_\pi / \partial \theta = -\mathbb{E}_{s \sim \pi}\left[\nabla_\theta \log \pi(a|s)\,\left(Q^{\pi}(s,a) - V^{\pi}(s)\right)\right]$ (Sutton et al., 1999a); in practice the true value functions $Q^{\pi}$ and $V^{\pi}$ are substituted with approximations. The Asynchronous Advantage Actor-Critic (A3C) algorithm (Mnih et al., 2016) constructs an approximation to both the policy $\pi(a|s,\theta)$ and the value function $V(s, \theta)$ using parameters $\theta$.
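The quantities above translate almost line-for-line into code. The following sketch is our own illustration (not code from the paper): it computes an n-step bootstrapped return, the n-step Q-learning loss with frozen parameters θ⁻, and the advantage-based policy-gradient loss; tensor shapes, variable names, and the discounting convention are assumptions chosen for clarity.

```python
# Minimal sketch of the n-step return, n-step Q-learning loss and
# policy-gradient loss described above. Inputs are torch tensors.
import torch.nn.functional as F

def n_step_return(rewards, bootstrap_value, gamma):
    """Discounted sum of r_{t+1}..r_{t+n} plus a gamma^n-discounted bootstrap."""
    ret = bootstrap_value
    for r in reversed(rewards):          # accumulate backwards through the rollout
        ret = r + gamma * ret
    return ret

def q_learning_loss(q_values, action, rewards, target_q_next, gamma):
    """(R + gamma^n max_a' Q(s', a'; theta^-) - Q(s, a; theta))^2 for one transition."""
    target = n_step_return(rewards, target_q_next.max().detach(), gamma)
    return (target - q_values[action]) ** 2

def policy_gradient_loss(logits, value, action, rewards, bootstrap_value, gamma):
    """-log pi(a|s) * advantage, with the advantage standing in for Q - V."""
    ret = n_step_return(rewards, bootstrap_value.detach(), gamma)
    advantage = (ret - value).detach()   # no critic gradient through the advantage
    log_prob = F.log_softmax(logits, dim=-1)[action]
    return -log_prob * advantage
```

In A3C these per-step terms are summed over a short rollout and combined with the value loss and an entropy bonus, as described next.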
1611.05397#7
1611.05397#9
1611.05397
[ "1605.02097" ]
1611.05397#9
Reinforcement Learning with Unsupervised Auxiliary Tasks
Both policy and value are adjusted towards an $n$-step lookahead value, $R_{t:t+n} + \gamma^n V(s_{t+n+1}, \theta)$, with an entropy regularisation penalty, $\mathcal{L}_{\text{A3C}} \approx \mathcal{L}_{\text{VR}} + \mathcal{L}_{\pi} - \mathbb{E}_{s \sim \pi}\left[\alpha H(\pi(s, \cdot, \theta))\right]$, where $\mathcal{L}_{\text{VR}} = \mathbb{E}_{s \sim \pi}\left[\left(R_{t:t+n} + \gamma^n V(s_{t+n+1}, \theta^-) - V(s_t, \theta)\right)^2\right]$. In A3C many instances of the agent interact in parallel with many instances of the environment, which both accelerates and stabilises learning. The A3C agent architecture we build on uses an LSTM to jointly approximate both policy $\pi$ and value function $V$, given the entire history of experience as inputs (see Figure 1 (a)). # 3 AUXILIARY TASKS FOR REINFORCEMENT LEARNING In this section we incorporate auxiliary tasks into the reinforcement learning framework in order to promote faster training, more robust learning, and ultimately higher performance for our agents. Section 3.1 introduces the use of auxiliary control tasks, Section 3.2 describes the addition of reward focussed auxiliary tasks, and Section 3.4 describes the complete UNREAL agent combining these auxiliary tasks. 3.1 AUXILIARY CONTROL TASKS The auxiliary control tasks we consider are defined as additional pseudo-reward functions in the environment the agent is interacting with. We formally define an auxiliary control task $c$ by a reward function $r^{(c)} : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, where $\mathcal{S}$ is the space of possible states and $\mathcal{A}$ is the space of available actions. The underlying state space $\mathcal{S}$ includes both the history of observations and rewards as well as the state of the agent itself, i.e. the activations of the hidden units of the network. Given a set of auxiliary control tasks $\mathcal{C}$, let $\pi^{(c)}$ be the agent's policy for each auxiliary task $c \in \mathcal{C}$ and let $\pi$ be the agent's policy on the base task. The overall objective is to maximise total performance across all these auxiliary tasks, $\arg\max_\theta \; \mathbb{E}_\pi\left[R_{1:\infty}\right] + \lambda_c \sum_{c \in \mathcal{C}} \mathbb{E}_{\pi^{(c)}}\left[R^{(c)}_{1:\infty}\right]$, (1) where $R^{(c)}_{t:t+n} = \sum_{i=1}^{n} \gamma^i r^{(c)}_{t+i}$ is the discounted return for auxiliary reward $r^{(c)}$, and $\theta$ is the set of parameters of $\pi$ and all the $\pi^{(c)}$'s. By sharing some of the parameters of $\pi$ and all the $\pi^{(c)}$,
1611.05397#8
1611.05397#10
1611.05397
[ "1605.02097" ]
1611.05397#10
Reinforcement Learning with Unsupervised Auxiliary Tasks
the agent must balance improving its performance with respect to the global reward $r_t$ with improving performance on the auxiliary tasks. In principle, any reinforcement learning method could be applied to maximise these objectives. However, to efficiently learn to maximise many different pseudo-rewards simultaneously in parallel from a single stream of experience, it is necessary to use off-policy reinforcement learning. We focus on value-based RL methods that approximate the optimal action-values by Q-learning. Specifically, for each control task $c$ we optimise an $n$-step Q-learning loss $\mathcal{L}^{(c)}_Q = \mathbb{E}\left[\left(R_{t:t+n} + \gamma^n \max_{a'} Q^{(c)}(s', a', \theta^-) - Q^{(c)}(s, a, \theta)\right)^2\right]$,
1611.05397#9
1611.05397#11
1611.05397
[ "1605.02097" ]
1611.05397#11
Reinforcement Learning with Unsupervised Auxiliary Tasks
as described in Mnih et al. (2016). While many types of auxiliary reward functions can be defined from these quantities we focus on two specific types: Pixel changes - Changes in the perceptual stream often correspond to important events in an environment. We train agents that learn a separate policy for maximally changing the pixels in each cell of an n × n non-overlapping grid placed over the input image. We refer to these auxiliary tasks as pixel control. See Section 4 for a complete description. Network features - Since the policy or value networks of an agent learn to extract task-relevant high-level features of the environment (Mnih et al., 2015; Zahavy et al., 2016; Silver et al., 2016) they can be useful quantities for the agent to learn to control. Hence, the activation of any hidden unit of the agent's neural network can itself be an auxiliary reward. We train agents that learn a separate policy for maximally activating each of the units in a specific hidden layer. We refer to these tasks as feature control. Figure 1 (b) shows an A3C agent architecture augmented with a set of auxiliary pixel control tasks. In this case, the base policy $\pi$ shares both the convolutional visual stream and the LSTM with the auxiliary policies. The output of the auxiliary network head is an $N_{\text{act}} \times n \times n$ tensor $Q^{\text{aux}}$, where $Q^{\text{aux}}(a, i, j)$ represents the network's current estimate of the optimal discounted expected change in cell $(i, j)$ of the input after taking action $a$. We exploit the spatial nature of the auxiliary tasks by using a deconvolutional neural network to produce the auxiliary values $Q^{\text{aux}}$. 3.2 AUXILIARY REWARD TASKS In addition to learning generally about the dynamics of the environment, an agent must learn to maximise the global reward stream. To learn a policy to maximise rewards, an agent requires features
1611.05397#10
1611.05397#12
1611.05397
[ "1605.02097" ]
1611.05397#12
Reinforcement Learning with Unsupervised Auxiliary Tasks
[Figure 2 panels: Agent Input; nav_maze_all_random_02 samples.] Figure 2: The raw RGB frame from the environment is the observation that is given as input to the agent, along with the last action and reward. This observation is shown for a sample of a maze from the nav maze all random 02 level in Labyrinth. The agent must navigate this unseen maze and pick up apples giving +1 reward and reach the goal giving +10 reward, after which it will respawn. Top down views of samples from this maze generator show the variety of mazes procedurally created. A video showing the agent playing Labyrinth levels can be viewed at https://youtu.be/Uz-zGYrYEjA
1611.05397#11
1611.05397#13
1611.05397
[ "1605.02097" ]
1611.05397#13
Reinforcement Learning with Unsupervised Auxiliary Tasks
that recognise states that lead to high reward and value. An agent with a good representation of rewarding states will allow the learning of good value functions, and in turn should allow the easy learning of a policy. However, in many interesting environments reward is encountered very sparsely, meaning that it can take a long time to train feature extractors adept at recognising states which signify the onset of reward. We want to remove the perceptual sparsity of rewards and rewarding states to aid the training of an agent, but to do so in a way which does not introduce bias to the agent's
1611.05397#12
1611.05397#14
1611.05397
[ "1605.02097" ]
1611.05397#14
Reinforcement Learning with Unsupervised Auxiliary Tasks
policy. To do this, we introduce the auxiliary task of reward prediction – that of predicting the onset of immediate reward given some historical context. This task consists of processing a sequence of consecutive observations, and requiring the agent to predict the reward picked up in the subsequent unseen frame. This is similar to value learning focused on immediate reward ($\gamma = 0$). Unlike learning a value function, which is used to estimate returns and as a baseline while learning a policy, the reward predictor is not used for anything other than shaping the features of the agent. This keeps us free to bias the data distribution, therefore biasing the reward predictor and feature shaping, without biasing the value function or policy. We train the reward prediction task on sequences $S_\tau = (s_{\tau-k}, s_{\tau-k+1}, \ldots, s_{\tau-1})$
1611.05397#13
1611.05397#15
1611.05397
[ "1605.02097" ]
1611.05397#15
Reinforcement Learning with Unsupervised Auxiliary Tasks
to predict the reward $r_\tau$, and sample $S_\tau$ from the experience of our policy $\pi$ in a skewed manner so as to over-represent rewarding events (presuming rewards are sparse within the environment). Specifically, we sample such that zero rewards and non-zero rewards are equally represented, i.e. the predicted probability of a non-zero reward is $P(r_\tau \neq 0) = 0.5$. The reward prediction is trained to minimise a loss $\mathcal{L}_{\text{RP}}$. In our experiments we use a multiclass cross-entropy classification loss across three classes (zero, positive, or negative reward), although a mean-squared error loss is also feasible. The auxiliary reward predictions may use a different architecture to the agent's
1611.05397#14
1611.05397#16
1611.05397
[ "1605.02097" ]
1611.05397#16
Reinforcement Learning with Unsupervised Auxiliary Tasks
main policy. Rather than simply "hanging" the auxiliary predictions off the LSTM, we use a simpler feedforward network that concatenates a stack of states $S_\tau$ after being encoded by the agent's CNN, see Figure 1 (c). The idea is to simplify the temporal aspects of the prediction task in both the future direction (focusing only on immediate reward prediction rather than long-term returns) and past direction (focusing only on immediate predecessor states rather than the complete history); the features discovered in this manner are shared with the primary LSTM (via shared weights in the convolutional encoder) to enable the policy to be learned more efficiently. 3.3 EXPERIENCE REPLAY Experience replay has proven to be an effective mechanism for improving both the data efficiency and stability of deep reinforcement learning algorithms (Mnih et al., 2015). The main idea is to store transitions in a replay buffer, and then apply learning updates to sampled transitions from this buffer. Experience replay provides a natural mechanism for skewing the distribution of reward prediction samples towards rewarding events: we simply split the replay buffer into rewarding and non-rewarding subsets, and replay equally from both subsets. The skewed sampling of transitions from
1611.05397#15
1611.05397#17
1611.05397
[ "1605.02097" ]
1611.05397#17
Reinforcement Learning with Unsupervised Auxiliary Tasks
a replay buffer means that rare rewarding states will be oversampled, and learnt from far more frequently than if we sampled sequences directly from the behaviour policy. This approach can be viewed as a simple form of prioritised replay (Schaul et al., 2015b). In addition to reward prediction, we also use the replay buffer to perform value function replay. This amounts to resampling recent historical sequences from the behaviour policy distribution and performing extra value function regression in addition to the on-policy value function regression in A3C. By resampling previous experience, and randomly varying the temporal position of the truncation window over which the n-step return is computed, value function replay performs value iteration and exploits newly discovered features shaped by reward prediction.
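As a concrete illustration of the skewed sampling described above, the sketch below (our own, with assumed names, not the paper's implementation) tracks rewarding and non-rewarding target frames inside a small replay buffer, draws reward-prediction contexts from the two subsets with equal probability so that P(r ≠ 0) = 0.5, and draws value-function-replay sequences uniformly, i.e. unskewed.

```python
# Illustrative replay buffer with skewed sampling for reward prediction and
# uniform sampling for value function replay. Names are assumptions.
import random
from collections import deque

class SmallReplayBuffer:
    def __init__(self, capacity=2000, context_len=3):
        self.frames = deque(maxlen=capacity)      # (observation, action, reward)
        self.context_len = context_len

    def add(self, observation, action, reward):
        self.frames.append((observation, action, reward))

    def _targets(self):
        """Indices whose reward can serve as a reward-prediction target."""
        rewarding, nonrewarding = [], []
        for t in range(self.context_len, len(self.frames)):
            (rewarding if self.frames[t][2] != 0 else nonrewarding).append(t)
        return rewarding, nonrewarding

    def sample_reward_prediction(self):
        """Return (context observations, reward to predict), skewed 50/50."""
        rewarding, nonrewarding = self._targets()
        assert rewarding or nonrewarding
        if rewarding and (not nonrewarding or random.random() < 0.5):
            t = random.choice(rewarding)
        else:
            t = random.choice(nonrewarding)
        context = [self.frames[i][0] for i in range(t - self.context_len, t)]
        return context, self.frames[t][2]

    def sample_value_replay(self, unroll=20):
        """Uniformly sampled recent sequence; this distribution is not skewed."""
        assert len(self.frames) > unroll
        start = random.randint(0, len(self.frames) - unroll)
        return list(self.frames)[start:start + unroll]
```

Value function replay deliberately uses the unskewed sampler so that the critic's regression targets are not biased by the reward-focused sampling.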
1611.05397#16
1611.05397#18
1611.05397
[ "1605.02097" ]
1611.05397#18
Reinforcement Learning with Unsupervised Auxiliary Tasks
We do not skew the distribution for this case. Experience replay is also used to increase the efficiency and stability of the auxiliary control tasks. Q-learning updates are applied to sampled experiences that are drawn from the replay buffer, allowing features to be developed extremely efficiently. 3.4 UNREAL AGENT The UNREAL algorithm combines the benefits of two separate, state-of-the-art approaches to deep reinforcement learning. The primary policy is trained with A3C (Mnih et al., 2016): it learns from parallel streams of experience to gain efficiency and stability; it is updated online using policy gradient methods; and it uses a recurrent neural network to encode the complete history of experience. This allows the agent to learn effectively in partially observed environments. The auxiliary tasks are trained on very recent sequences of experience that are stored and randomly sampled; these sequences may be prioritised (in our case according to immediate rewards) (Schaul et al., 2015b); these targets are trained off-policy by Q-learning; and they may use simpler feedforward architectures. This allows the representation to be trained with maximum efficiency. The UNREAL algorithm optimises a single combined loss function with respect to the joint parameters of the agent, $\theta$, that combines the A3C loss $\mathcal{L}_{\text{A3C}}$ together with the auxiliary control loss $\mathcal{L}_{\text{PC}}$, auxiliary reward prediction loss $\mathcal{L}_{\text{RP}}$ and replayed value loss $\mathcal{L}_{\text{VR}}$: $\mathcal{L}_{\text{UNREAL}}(\theta) = \mathcal{L}_{\text{A3C}} + \lambda_{\text{VR}} \mathcal{L}_{\text{VR}} + \lambda_{\text{PC}} \sum_{c} \mathcal{L}^{(c)}_{Q} + \lambda_{\text{RP}} \mathcal{L}_{\text{RP}}$, (2) where $\lambda_{\text{VR}}, \lambda_{\text{PC}}, \lambda_{\text{RP}}$ are weighting terms on the individual loss components. In practice, the loss is broken down into separate components that are applied either on-policy, directly from experience; or off-policy, on replayed transitions.
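Equation (2) is a plain weighted sum, so combining the terms in code is straightforward. The sketch below is a hedged illustration: the four loss callables and the batches are placeholders for the quantities defined above, and the λ defaults are only indicative (the implementation details later set λRP = λVR = 1 and sample λPC from a log-uniform range).

```python
# Hedged sketch of the combined UNREAL objective from Equation (2).
# The individual loss functions are assumed to be defined elsewhere.
def unreal_loss(onpolicy_batch, replay_batch,
                a3c_loss, value_replay_loss, pixel_control_loss, reward_prediction_loss,
                lambda_vr=1.0, lambda_pc=0.05, lambda_rp=1.0):
    # On-policy A3C term, computed directly from fresh experience.
    total = a3c_loss(onpolicy_batch)
    # Off-policy terms, computed on sequences drawn from the replay buffer.
    total = total + lambda_vr * value_replay_loss(replay_batch)
    total = total + lambda_pc * pixel_control_loss(replay_batch)   # summed over cells c
    total = total + lambda_rp * reward_prediction_loss(replay_batch)
    return total
```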
1611.05397#17
1611.05397#19
1611.05397
[ "1605.02097" ]
1611.05397#19
Reinforcement Learning with Unsupervised Auxiliary Tasks
Specifically, the A3C loss $\mathcal{L}_{\text{A3C}}$ is minimised on-policy; while the value function loss $\mathcal{L}_{\text{VR}}$ is optimised from replayed data, in addition to the A3C loss (of which it is one component, see Section 2). The auxiliary control loss $\mathcal{L}_{\text{PC}}$ is optimised off-policy from replayed data, by n-step Q-learning. Finally, the reward loss $\mathcal{L}_{\text{RP}}$ is optimised from rebalanced replay data. # 4 EXPERIMENTS In this section we give the results of experiments performed on the 3D environment Labyrinth in Section 4.1 and Atari in Section 4.2. In all our experiments we used an A3C CNN-LSTM agent as our baseline, and the UNREAL agent along with its ablated variants added auxiliary outputs and losses to this base agent. The agent is trained on-policy with 20-step returns and the auxiliary tasks are performed every 20 environment steps, corresponding to every update of the base A3C agent. The replay buffer stores the most recent 2k observations, actions, and rewards taken by the base agent. In Labyrinth we use the same set of 17 discrete actions for all games and on Atari the action set is game dependent (between 3 and 18 discrete actions). The full implementation details can be found in Section B. 4.1 LABYRINTH RESULTS Labyrinth is a
1611.05397#18
1611.05397#20
1611.05397
[ "1605.02097" ]
1611.05397#20
Reinforcement Learning with Unsupervised Auxiliary Tasks
first-person 3D game platform extended from OpenArena (contributors, 2005), which is itself based on Quake3 (id software, 1999). Labyrinth is comparable to other first-person 3D game [Figure 3 panels: Labyrinth Performance, Labyrinth Robustness, Atari Performance, Atari Robustness.] Figure 3: An overview of performance averaged across all levels on Labyrinth (Top) and Atari (Bottom). In the ablated versions RP is reward prediction, VR is value function replay, and PC is pixel control, with the UNREAL agent being the combination of all. Left: The mean human-normalised performance over the last 100 episodes of the top-3 jobs at every point in training. We achieve an average of 87% human-normalised score, with every element of the agent improving upon the 54% human-normalised score of vanilla A3C.
1611.05397#19
1611.05397#21
1611.05397
[ "1605.02097" ]
1611.05397#21
Reinforcement Learning with Unsupervised Auxiliary Tasks
Right: The final human-normalised score of every job in our hyperparameter sweep, sorted by score. On both Labyrinth and Atari, the UNREAL agent increases the robustness to the hyperparameters (namely learning rate and entropy cost). platforms for AI research like VizDoom (Kempka et al., 2016) or Minecraft (Tessler et al., 2016). However, in comparison, Labyrinth has considerably richer visuals and more realistic physics. Textures in Labyrinth are often dynamic (animated) so as to convey a game world where walls and floors shimmer and pulse, adding significant complexity to the perceptual task. The action space allows for fine-grained pointing in a fully 3D world. Unlike in VizDoom, agents can look up to the sky or down to the ground. Labyrinth also supports continuous motion unlike the Minecraft platform of (Oh et al., 2016), which is a 3D grid world. We evaluated agent performance on 13 Labyrinth levels that tested a range of different agent abilities. A top-down visualization showing the layout of each level can be found in Figure 7 of the Appendix. A gallery of example images from the first-person perspective of the agent are in Figure 8 of the Appendix. The levels can be divided into four categories:
1611.05397#20
1611.05397#22
1611.05397
[ "1605.02097" ]
1611.05397#22
Reinforcement Learning with Unsupervised Auxiliary Tasks
and a stairway to melon 01). The goal of these levels is to collect apples (small positive reward) and melons (large positive reward) while avoiding lemons (small negative reward). 2. Navigation levels with a static map layout (nav maze static 0{1, 2, 3} and nav maze random goal 0{1, 2, 3}). These levels test the agent's ability to find their way to a goal in a fixed maze that remains the same across episodes. The starting location is random. In this case, agents could encode the structure of the maze in network weights. In the random goal variant, the location of the goal changes in every episode.
1611.05397#21
1611.05397#23
1611.05397
[ "1605.02097" ]
1611.05397#23
Reinforcement Learning with Unsupervised Auxiliary Tasks
The optimal policy is to find the goal's location at the start of each episode and then use long-term knowledge of the maze layout to return to it as quickly as possible from any location. The static variant is simpler in that the goal location is always fixed for all episodes and only the agent's starting location changes, so the optimal policy does not require the first step of exploring to find the current goal location. 3. Procedurally-generated navigation levels requiring effective exploration of a new maze generated on-the-fly at the start of each episode (nav maze all random 0{1, 2, 3}). These levels test the agent's ability to effectively explore a totally new environment. The optimal policy would begin by exploring the maze to rapidly learn its layout and then exploit that knowledge to repeatedly return to the goal as many times as possible before the end of the episode (between 60 and 300 seconds). 4. Laser-tag levels requiring agents to wield laser-like science fiction gadgets to tag bots controlled by the game's in-built AI (lt horse shoe color and lt hallway slope). A reward of 1 is delivered whenever the agent tags a bot by reducing its shield to 0. These levels approximate the default OpenArena/Quake3 gameplay mode. In lt hallway slope there is a sloped arena, requiring the agent to look up and down. In lt horse shoe color, the colors and textures of the bots are randomly generated at the start of each episode. This prevents agents from relying on color for bot detection.
1611.05397#22
1611.05397#24
1611.05397
[ "1605.02097" ]
1611.05397#24
Reinforcement Learning with Unsupervised Auxiliary Tasks
These levels test aspects of fine-control (for aiming), planning (to anticipate where bots are likely to move), strategy (to control key areas of the map such as gadget spawn points), and robustness to the substantial visual complexity arising from the large numbers of independently moving objects (gadget projectiles and bots). 4.1.1 RESULTS We compared the full UNREAL agent to a basic A3C LSTM agent along with several ablated versions of UNREAL with different components turned off.
1611.05397#23
1611.05397#25
1611.05397
[ "1605.02097" ]
1611.05397#25
Reinforcement Learning with Unsupervised Auxiliary Tasks
A video of the final agent performance, as well as visualisations of the activations and auxiliary task outputs, can be viewed at https://youtu.be/Uz-zGYrYEjA. Figure 3 (right) shows curves of mean human-normalised scores over the 13 Labyrinth levels. Adding each of our proposed auxiliary tasks to an A3C agent substantially improves the performance. Combining different auxiliary tasks leads to further improvements over the individual auxiliary tasks. The UNREAL agent, which combines all three auxiliary tasks, achieves more than twice the final human-normalised mean performance of A3C, increasing from 54% to 87% (45% to 92% for median performance). This includes a human-normalised score of 116% on lt hallway slope and 100% on nav maze random goal 02.
1611.05397#24
1611.05397#26
1611.05397
[ "1605.02097" ]
1611.05397#26
Reinforcement Learning with Unsupervised Auxiliary Tasks
Perhaps of equal importance, aside from final performance on the games, UNREAL is significantly faster at learning and therefore more data efficient, achieving a mean speedup in the number of steps to reach A3C best performance of 10× on nav maze random goal 02. This translates into a drastic improvement in the data efficiency of UNREAL over A3C, requiring less than 10% of the data to reach the final performance of A3C. We can also measure the robustness of our learning algorithms to hyperparameters by measuring the performance over all hyperparameters (namely learning rate and entropy cost). This is shown in Figure 3 Top: every auxiliary task in our agent improves robustness. A breakdown of the performance of A3C, UNREAL and UNREAL without pixel control on the individual Labyrinth levels is shown in Figure 4. Unsupervised Reinforcement Learning In order to better understand the benefits of auxiliary control tasks we compared them to two simple baselines on three Labyrinth levels.
1611.05397#25
1611.05397#27
1611.05397
[ "1605.02097" ]
1611.05397#27
Reinforcement Learning with Unsupervised Auxiliary Tasks
We can also measure the robustness of our learning algorithms to hyperparameters by measuring the perfor- mance over all hyperparameters (namely learning rate and entropy cost). This is shown in Figure 3 Top: every auxiliary task in our agent improves robustness. A breakdown of the performance of A3C, UNREAL and UNREAL without pixel control on the individual Labyrinth levels is shown in Figure 4. Unsupervised Reinforcement Learning In order to better understand the beneï¬ ts of auxiliary control tasks we compared it to two simple baselines on three Labyrinth levels.
1611.05397#26
1611.05397#28
1611.05397
[ "1605.02097" ]
1611.05397#28
Reinforcement Learning with Unsupervised Auxiliary Tasks
The ï¬ rst baseline was A3C augmented with a pixel reconstruction loss, which has been shown to improve performance on 3D environments (Kulkarni et al., 2016). The second baseline was A3C augmented with an input change prediction loss, which can be seen as simply predicting the immediate auxiliary reward instead of learning to control. Finally, we include preliminary results for A3C augmented with the feature control auxiliary task on one of the levels. We retuned the hyperparameters of all methods (including learning rate and the weight placed on the auxiliary loss) for each of the three Labyrinth levels. Figure 5 shows the learning curves for the top 5 hyperparameter settings on three Labyrinth navigation levels. The results show that learning to control pixel changes is indeed better than simply predicting immediate pixel changes, which in turn is better than simply learning to reconstruct the input. In fact, learning to reconstruct only led to faster initial learning and actually made the ï¬ nal scores worse when compared to vanilla A3C. Our hypothesis is that input reconstruction hurts ï¬ nal performance because it puts too much focus on reconstructing irrelevant parts of the visual input instead of visual cues for rewards, which rewarding objects are rarely visible. Encouragingly, we saw an improvement from including the feature control auxiliary task. Combining feature control with other auxiliary tasks is a promising future direction.
1611.05397#27
1611.05397#29
1611.05397
[ "1605.02097" ]
1611.05397#29
Reinforcement Learning with Unsupervised Auxiliary Tasks
[Figure 4 chart: per-level AUC Performance, Data Efficiency, and Top5 Speedup for UNREAL and A3C+RP+VR on lt_hallway_slope, lt_horse_shoe_color, nav_maze_all_random_01–03, nav_maze_random_goal_01–03, nav_maze_static_01–03, seekavoid_arena_01, and stairway_to_melon, with mean and median rows; individual values are not recoverable from the extraction.] Figure 4: A breakdown of the improvement over A3C due to our auxiliary tasks for each level on Labyrinth. The values for A3C+RP+VR (reward prediction and value function replay) and UNREAL (reward prediction, value function replay and pixel control) are normalised by the A3C value. AUC Performance gives the robustness to hyperparameters (area under the robustness curve Figure 3 Right).
1611.05397#28
1611.05397#30
1611.05397
[ "1605.02097" ]
1611.05397#30
Reinforcement Learning with Unsupervised Auxiliary Tasks
Data Efficiency is area under the mean learning curve for the top-5 jobs, and Top5 Speedup is the speedup for the mean of the top-5 jobs to reach the maximum top-5 mean score set by A3C. Speedup is not defined for stairway to melon as A3C did not learn throughout training. [Figure 5 plots: nav_maze_random_goal_01 and nav_maze_all_random_01; legend: A3C, A3C + Input reconstruction, A3C + Input change prediction, A3C + Pixel Control, A3C + Feature Control.] Figure 5:
1611.05397#29
1611.05397#31
1611.05397
[ "1605.02097" ]
1611.05397#31
Reinforcement Learning with Unsupervised Auxiliary Tasks
Comparison of various forms of self-supervised learning on random maze navigation. Adding an input reconstruction loss to the objective leads to faster learning compared to an A3C baseline. Predicting changes in the inputs works better than simple image reconstruction. Learning to control changes leads to the best results. 4.2 ATARI We applied the UNREAL agent as well as UNREAL without pixel control to 57 Atari games from the Arcade Learning Environment (Bellemare et al., 2012) domain. We use the same evaluation protocol as for our Labyrinth experiments, where we evaluate 50 different random hyperparameter settings (learning rate and entropy cost) on each game.
1611.05397#30
1611.05397#32
1611.05397
[ "1605.02097" ]
1611.05397#32
Reinforcement Learning with Unsupervised Auxiliary Tasks
The results are shown in the bottom row of Figure 3. The left side shows the average performance curves of the top 3 agents for all three methods; the right half shows sorted average human-normalised scores for each hyperparameter setting. More detailed learning curves for individual levels can be found in Figure 7. We see that UNREAL surpasses the current state-of-the-art agents, i.e. A3C and Prioritized Dueling DQN (Wang et al., 2016), across all levels, attaining 880% mean and 250% median performance. Notably, UNREAL is also substantially more robust to hyperparameter settings than A3C.
1611.05397#31
1611.05397#33
1611.05397
[ "1605.02097" ]
1611.05397#33
Reinforcement Learning with Unsupervised Auxiliary Tasks
# 5 CONCLUSION We have shown how augmenting a deep reinforcement learning agent with auxiliary control and reward prediction tasks can drastically improve both data efficiency and robustness to hyperparameter settings. Most notably, our proposed UNREAL architecture more than doubled the previous state-of-the-art results on the challenging set of 3D Labyrinth levels, bringing the average scores to over 87% of human scores. The same UNREAL architecture also significantly improved both the learning speed and the robustness of A3C over 57 Atari games.
1611.05397#32
1611.05397#34
1611.05397
[ "1605.02097" ]
1611.05397#34
Reinforcement Learning with Unsupervised Auxiliary Tasks
# ACKNOWLEDGEMENTS We thank Charles Beattie, Julian Schrittwieser, Marcus Wainwright, and Stig Petersen for environment design and development, and Amir Sadik and Sarah York for expert human game testing. We also thank Joseph Modayil, Andrea Banino, Hubert Soyer, Razvan Pascanu, and Raia Hadsell for many helpful discussions. # REFERENCES André Barreto, Rémi Munos, Tom Schaul, and David Silver.
1611.05397#33
1611.05397#35
1611.05397
[ "1605.02097" ]
1611.05397#35
Reinforcement Learning with Unsupervised Auxiliary Tasks
Successor features for transfer in reinforcement learning. arXiv preprint arXiv:1606.05312, 2016. Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012. OpenArena contributors. The OpenArena manual. 2005. URL http://openarena.wikia.com/wiki/Manual.
1611.05397#34
1611.05397#36
1611.05397
[ "1605.02097" ]
1611.05397#36
Reinforcement Learning with Unsupervised Auxiliary Tasks
Peter Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613–624, 1993. Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471, 2000. id software. Quake3. 1999. URL https://github.com/id-Software/Quake-III-Arena.
1611.05397#35
1611.05397#37
1611.05397
[ "1605.02097" ]
1611.05397#37
Reinforcement Learning with Unsupervised Auxiliary Tasks
Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016. George Konidaris and Andre S Barreto. Skill discovery in continuous reinforcement learning domains using skill chaining. In Advances in Neural Information Processing Systems, pp. 1015–1023, 2009. Tejas D Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J Gershman. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396, 2016. Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning.
1611.05397#36
1611.05397#38
1611.05397
[ "1605.02097" ]
1611.05397#38
Reinforcement Learning with Unsupervised Auxiliary Tasks
CoRR, abs/1609.05521, 2016. Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, and Ji He. Recurrent reinforcement learning: A hybrid approach. arXiv preprint arXiv:1509.03044, 2015. Long-Ji Lin and Tom M Mitchell. Memory approaches to reinforcement learning in non-markovian domains. Technical report, Carnegie Mellon University, School of Computer Science, 1992.
1611.05397#37
1611.05397#39
1611.05397
[ "1605.02097" ]
1611.05397#39
Reinforcement Learning with Unsupervised Auxiliary Tasks
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Andrea Banino, Hubert Soyer, Andy Ballard, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. 2016. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop. 2013. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis.
1611.05397#38
1611.05397#40
1611.05397
[ "1605.02097" ]
1611.05397#40
Reinforcement Learning with Unsupervised Auxiliary Tasks
Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 02 2015. URL http://dx.doi.org/10.1038/nature14236. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1928–
1611.05397#39
1611.05397#41
1611.05397
[ "1605.02097" ]
1611.05397#41
Reinforcement Learning with Unsupervised Auxiliary Tasks
1937, 2016. Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pp. 2863–2871, 2015. Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. arXiv preprint arXiv:1605.09128, 2016. Jing Peng and Ronald J Williams. Incremental multi-step Q-learning. Machine Learning, 22(1-3): 283–
1611.05397#40
1611.05397#42
1611.05397
[ "1605.02097" ]
1611.05397#42
Reinforcement Learning with Unsupervised Auxiliary Tasks
290, 1996. Daniel L Schacter, Donna Rose Addis, Demis Hassabis, Victoria C Martin, R Nathan Spreng, and Karl K Szpunar. The future of memory: remembering, imagining, and the brain. Neuron, 76(4): 677–694, 2012. Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1312–1320, 2015a.
1611.05397#41
1611.05397#43
1611.05397
[ "1605.02097" ]
1611.05397#43
Reinforcement Learning with Unsupervised Auxiliary Tasks
Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015b. Jürgen Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010. David Silver and Kamil Ciosek. Compositional planning using optimal option models. arXiv preprint arXiv:1206.6473, 2012. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search.
1611.05397#42
1611.05397#44
1611.05397
[ "1605.02097" ]
1611.05397#44
Reinforcement Learning with Unsupervised Auxiliary Tasks
Nature, 529(7587):484–489, 2016. Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063, 1999a. Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning.
1611.05397#43
1611.05397#45
1611.05397
[ "1605.02097" ]
1611.05397#45
Reinforcement Learning with Unsupervised Auxiliary Tasks
Artificial Intelligence, 1999b. Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 761–768. International Foundation for Autonomous Agents and Multiagent Systems, 2011. Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in Minecraft. arXiv preprint arXiv:1604.07255, 2016. Z. Wang, N. de Freitas, and M. Lanctot.
1611.05397#44
1611.05397#46
1611.05397
[ "1605.02097" ]
1611.05397#46
Reinforcement Learning with Unsupervised Auxiliary Tasks
Dueling Network Architectures for Deep Reinforcement Learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016. Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge, England, 1989. Christopher Xie, Sachin Patil, Teodor Mihai Moldovan, Sergey Levine, and Pieter Abbeel. Model-based reinforcement learning with parametrized physical models and optimism-driven exploration. CoRR, abs/1509.06824, 2015.
1611.05397#45
1611.05397#47
1611.05397
[ "1605.02097" ]
1611.05397#47
Reinforcement Learning with Unsupervised Auxiliary Tasks
Tom Zahavy, Nir Ben Zrihem, and Shie Mannor. Graying the black box: Understanding DQNs. In Proceedings of the 33rd International Conference on Machine Learning, 2016. # A ATARI GAMES Figure 6: Learning curves for three example Atari games. Semi-transparent lines are agents with different seeds and hyperparameters, the bold line is a mean over the population, and the dotted line is the best agent (in terms of final performance). # B IMPLEMENTATION DETAILS
1611.05397#46
1611.05397#48
1611.05397
[ "1605.02097" ]
1611.05397#48
Reinforcement Learning with Unsupervised Auxiliary Tasks
The input to the agent at each timestep was an 84×84 RGB image. All agents processed the input with the convolutional neural network (CNN) originally used for Atari by Mnih et al. (2013). The network consists of two convolutional layers. The first one has 16 8×8 filters applied with stride 4, while the second one has 32 4×4 filters with stride 2. This is followed by a fully connected layer with 256 units. All three layers are followed by a ReLU non-linearity. All agents used an LSTM with forget gates (Gers et al., 2000) with 256 cells, which take in the CNN-encoded observation concatenated with the previous action taken and current reward. The policy and value function are linear projections of the LSTM output. The agent is trained with 20-step unrolls. The action space of the agent in the environment is game dependent for Atari (between 3 and 18 discrete actions), and 17 discrete actions for Labyrinth. Labyrinth runs at 60 frames-per-second. We use an action repeat of four, meaning that each action is repeated four times, with the agent receiving the final fourth frame as input to the next processing step. For the pixel control auxiliary tasks we trained policies to control the central 80×80 crop of the inputs. The cropped region was subdivided into a 20×20 grid of non-overlapping 4×4 cells. The instantaneous reward in each cell was defined as the average absolute difference from the previous frame, where the average is taken over both pixels and channels in the cell. The output tensor of auxiliary values, Qaux, is produced from the LSTM outputs by a deconvolutional network. The LSTM outputs are first mapped to a 32×7×7 spatial feature map with a linear layer followed by a ReLU. Deconvolution layers with 1 and Nact filters of size 4×4 map the 32×7×7 feature map into a value tensor and an advantage tensor respectively. The spatial map is then decoded into Q-values using the dueling parametrization (Wang et al., 2016), producing the Nact × 20 × 20 output Qaux. The architecture for feature control was similar. We learned to control the second hidden layer, which is a spatial feature map with size 32×9×9.
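To make the pixel-control implementation above concrete, here is a small sketch in our own words (not the released code): it computes the per-cell pseudo-rewards from the central 80×80 crop as the average absolute pixel change over 4×4 cells, and assembles the dueling Qaux tensor from a value map and an advantage map. Tensor layouts (CHW observations in [0, 1]) are assumptions.

```python
# Sketch of the pixel-control targets and the dueling Q_aux decode described above.
import torch

def pixel_control_rewards(obs, prev_obs, crop=80, cell=4):
    """Average absolute pixel change per non-overlapping cell of the central crop."""
    c, h, w = obs.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    diff = (obs - prev_obs).abs()[:, top:top + crop, left:left + crop]
    grid = crop // cell                               # 80 / 4 = 20
    diff = diff.reshape(c, grid, cell, grid, cell)    # split the crop into cells
    return diff.mean(dim=(0, 2, 4))                   # average over pixels and channels -> (20, 20)

def dueling_qaux(value_map, advantage_map):
    """Q_aux(a, i, j) = V(i, j) + A(a, i, j) - mean_a A(a, i, j)."""
    # value_map: (1, 20, 20); advantage_map: (num_actions, 20, 20)
    return value_map + advantage_map - advantage_map.mean(dim=0, keepdim=True)
```

Each cell's pseudo-reward then feeds the n-step Q-learning loss of Section 3.1, with Qaux produced by the deconvolutional head.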
1611.05397#47
1611.05397#49
1611.05397
[ "1605.02097" ]
1611.05397#49
Reinforcement Learning with Unsupervised Auxiliary Tasks
Similarly to pixel control, we exploit the spatial structure in the data and used a deconvolutional network to produce Qaux from the LSTM outputs. Further details are included in the supplementary materials. The reward prediction task is performed on a sequence of three observations, which are fed through three instances of the agent's CNN. The three encoded CNN outputs are concatenated and fed through a fully connected layer of 128 units with ReLU activations, followed by a final linear three-class classifier and softmax. The reward is predicted as one of three classes: positive, negative, or zero, and trained with a task weight λRP = 1. The value function replay is performed on a sequence of length 20 with a task weight λVR = 1. The auxiliary tasks are performed every 20 environment steps, corresponding to every update of the base A3C agent, once the replay buffer has filled with agent experience. The replay buffer stores the most recent 2k observations, actions, and rewards taken by the base agent. The agents are optimised over 32 asynchronous threads with shared RMSprop (Mnih et al., 2016). The learning rates are sampled from a log-uniform distribution between 0.0001 and 0.005. The entropy costs are sampled from the log-uniform distribution between 0.0005 and 0.01. Task weight λPC is sampled from a log-uniform distribution between 0.01 and 0.1 for Labyrinth and 0.0001 and 0.01 for Atari (since Atari games are not homogeneous in terms of pixel intensity changes, we need to fit this normalization factor).
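One possible rendering of the reward-prediction head just described, under the stated dimensions (three CNN encodings concatenated, a 128-unit ReLU layer, a three-way classifier); the module and argument names are our own and `encoder` is assumed to be the shared agent CNN returning a flat feature vector.

```python
# Sketch of the three-class reward-prediction head described above.
import torch
import torch.nn as nn

class RewardPredictionHead(nn.Module):
    def __init__(self, encoder, feature_dim, num_frames=3, hidden=128):
        super().__init__()
        self.encoder = encoder                      # shared convolutional encoder
        self.classifier = nn.Sequential(
            nn.Linear(num_frames * feature_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),                   # zero / positive / negative reward
        )

    def forward(self, frames):
        # frames: (num_frames, C, H, W); concatenate the per-frame CNN encodings.
        feats = torch.cat([self.encoder(f.unsqueeze(0)) for f in frames], dim=1)
        return self.classifier(feats)               # logits over the three classes

# Training would minimise a cross-entropy loss against the class of the next reward:
# loss = nn.functional.cross_entropy(head(frames), reward_class)
```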
1611.05397#48
1611.05397#50
1611.05397
[ "1605.02097" ]
1611.05397#50
Reinforcement Learning with Unsupervised Auxiliary Tasks
C LABYRINTH LEVELS
Figure 7: Top-down renderings of each Labyrinth level (stairway_to_melon, seekavoid_arena_01, nav_maze_01, nav_maze_02, nav_maze_03, lt_horse_shoe_color, lt_hallway_slope), with the agent start and pickups such as apples (+1), lemons (-1), melons (+10), goals (+10), and power-ups marked. The nav maze levels show one example maze layout. In the all random case, a new maze was randomly generated at the start of each episode.
1611.05397#49
1611.05397#51
1611.05397
[ "1605.02097" ]
1611.05397#51
Reinforcement Learning with Unsupervised Auxiliary Tasks
Figure 8: Example images from the agent's egocentric viewpoint for each Labyrinth level.
1611.05397#50
1611.05397
[ "1605.02097" ]
1611.02779#0
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
# RL²: FAST REINFORCEMENT LEARNING VIA SLOW REINFORCEMENT LEARNING
Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel
UC Berkeley, Department of Electrical Engineering and Computer Science; OpenAI
{rocky, joschu, peter}@openai.com, [email protected], {ilyasu, pieter}@openai.com
# ABSTRACT
1611.02779#1
1611.02779
[ "1511.06295" ]
1611.02779#1
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a "fast" reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL², the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose ("
1611.02779#0
1611.02779#2
1611.02779
[ "1511.06295" ]
1611.02779#2
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
slow") RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the "fast" RL algorithm on the current (previously unseen) MDP. We evaluate RL² experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-armed bandit problems and finite MDPs. After RL² is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL² on a vision-based navigation task and show that it scales up to high-dimensional problems.
1611.02779#1
1611.02779#3
1611.02779
[ "1511.06295" ]
1611.02779#3
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
# 1 INTRODUCTION
In recent years, deep reinforcement learning has achieved many impressive results, including playing Atari games from raw pixels (Guo et al., 2014; Mnih et al., 2015; Schulman et al., 2015), and acquiring advanced manipulation and locomotion skills (Levine et al., 2016; Lillicrap et al., 2015; Watter et al., 2015; Heess et al., 2015; Schulman et al., 2015; 2016). However, many of the successes come at the expense of high sample complexity. For example, the state-of-the-art Atari results require tens of thousands of episodes of experience (Mnih et al., 2015) per game. To master a game, one would need to spend nearly 40 days playing it with no rest. In contrast, humans and animals are capable of learning a new task in a very small number of trials. Continuing the previous example, the human player in Mnih et al. (2015) only needed 2 hours of experience before mastering a game. We argue that the reason for this sharp contrast is largely due to the lack of a good prior, which results in these deep RL agents needing to rebuild their knowledge about the world from scratch. Although Bayesian reinforcement learning provides a solid framework for incorporating prior knowledge into the learning process (Strens, 2000; Ghavamzadeh et al., 2015; Kolter & Ng, 2009), exact computation of the Bayesian update is intractable in all but the simplest cases. Thus, practical reinforcement learning algorithms often incorporate a mixture of Bayesian and domain-specific ideas to bring down sample complexity and computational burden. Notable examples include guided policy search with unknown dynamics (Levine & Abbeel, 2014) and PILCO (Deisenroth & Rasmussen, 2011). These methods can learn a task using a few minutes to a few hours of real experience, compared to days or even weeks required by previous methods (Schulman et al., 2015; 2016; Lillicrap et al., 2015).
1611.02779#2
1611.02779#4
1611.02779
[ "1511.06295" ]
1611.02779#4
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
However, these methods tend to make assumptions about the environment (e.g., instrumentation for access to the state at learning time), or become computationally intractable in high-dimensional settings (Wahlström et al., 2015). Rather than hand-designing domain-specific reinforcement learning algorithms, we take a different approach in this paper: we view the learning process of the agent itself as an objective, which can be optimized using standard reinforcement learning algorithms. The objective is averaged across all possible MDPs according to a specific distribution, which reflects the prior that we would like to distill into the agent. We structure the agent as a recurrent neural network, which receives past rewards, actions, and termination flags as inputs in addition to the normally received observations. Furthermore, its internal state is preserved across episodes, so that it has the capacity to perform learning in its own hidden activations. The learned agent thus also acts as the learning algorithm, and can adapt to the task at hand when deployed. We evaluate this approach on two sets of classical problems, multi-armed bandits and tabular MDPs. These problems have been extensively studied, and there exist algorithms that achieve asymptotically optimal performance. We demonstrate that our method, named RL², can achieve performance comparable with these theoretically justified algorithms. Next, we evaluate RL² on a vision-based navigation task implemented using the ViZDoom environment (Kempka et al., 2016), showing that RL² can also scale to high-dimensional problems.
# 2 METHOD
2.1 PRELIMINARIES
We define a discrete-time finite-horizon discounted Markov decision process (MDP) by a tuple $M = (\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \rho_0, \gamma, T)$, in which $\mathcal{S}$ is a state set, $\mathcal{A}$ an action set, $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}_+$ a transition probability distribution, $r : \mathcal{S} \times \mathcal{A} \to [-R_{\max}, R_{\max}]$ a bounded reward function, $\rho_0 : \mathcal{S} \to \mathbb{R}_+$ an initial state distribution, $\gamma \in (0, 1]$ a discount factor, and $T$ the horizon. In policy search methods, we typically optimize a stochastic policy $\pi_\theta : \mathcal{S} \times \mathcal{A} \to \mathbb{R}_+$, parametrized by $\theta$.
1611.02779#3
1611.02779#5
1611.02779
[ "1511.06295" ]
1611.02779#5
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
The objective is to maximize its expected discounted return, $\eta(\pi_\theta) = \mathbb{E}_\tau\!\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\right]$, where $\tau = (s_0, a_0, \ldots)$ denotes the whole trajectory, $s_0 \sim \rho_0(s_0)$, $a_t \sim \pi_\theta(a_t \mid s_t)$, and $s_{t+1} \sim \mathcal{P}(s_{t+1} \mid s_t, a_t)$.
2.2 FORMULATION
We now describe our formulation, which casts learning an RL algorithm as a reinforcement learning problem, and hence the name RL². We assume knowledge of a set of MDPs, denoted by $\mathcal{M}$, and a distribution over them: $\rho_{\mathcal{M}} : \mathcal{M} \to \mathbb{R}_+$.
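To make the preliminaries concrete, here is a minimal sketch of the MDP tuple and a Monte-Carlo estimate of the discounted-return objective $\eta(\pi_\theta)$ from a single sampled trajectory, assuming NumPy; the container and function names are ours, not part of the paper.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class TabularMDP:
    P: np.ndarray     # transition probabilities, shape (|S|, |A|, |S|), rows sum to 1
    r: np.ndarray     # mean rewards, shape (|S|, |A|), bounded by R_max
    rho0: np.ndarray  # initial state distribution, shape (|S|,)
    gamma: float      # discount factor in (0, 1]
    T: int            # horizon


def discounted_return(rewards, gamma):
    """Single-trajectory estimate of eta(pi_theta): sum_t gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))


print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))  # 1.0 + 0.0 + 0.81 * 2.0 = 2.62
```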
1611.02779#4
1611.02779#6
1611.02779
[ "1511.06295" ]
1611.02779#6
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
We only need to sample from this distribution. We use n to denote the total number of episodes the agent is allowed to spend with a specific MDP. We define a trial to be such a series of episodes of interaction with a fixed MDP.
Figure 1: Procedure of agent-environment interaction (the agent interacting over Trial 1 and Trial 2).
This process of interaction between an agent and the environment is illustrated in Figure 1. Here, each trial happens to consist of two episodes, hence n = 2. For each trial, a separate MDP is drawn from $\rho_{\mathcal{M}}$, and for each episode, a fresh $s_0$ is drawn from the initial state distribution specific to the corresponding MDP. Upon receiving an action $a_t$ produced by the agent, the environment computes reward $r_t$, steps forward, and computes the next state $s_{t+1}$. If the episode has terminated, it sets the termination flag $d_t$ to 1, which otherwise defaults to 0. Together, the next state $s_{t+1}$, action
1611.02779#5
1611.02779#7
1611.02779
[ "1511.06295" ]
1611.02779#7
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
$a_t$, reward $r_t$, and termination flag $d_t$ are concatenated to form the input to the policy¹, which, conditioned on the hidden state $h_{t+1}$, generates the next hidden state $h_{t+2}$ and action $a_{t+1}$. At the end of an episode, the hidden state of the policy is preserved to the next episode, but not preserved between trials. The objective under this formulation is to maximize the expected total discounted reward accumulated during a single trial rather than a single episode. Maximizing this objective is equivalent to minimizing the cumulative pseudo-regret (Bubeck & Cesa-Bianchi, 2012). Since the underlying MDP changes across trials, as long as different strategies are required for different MDPs, the agent must act differently according to its belief over which MDP it is currently in. The agent is therefore forced to integrate all the information it has received, including past actions, rewards, and termination flags, and to adapt its strategy continually. Hence, we have set up an end-to-end optimization process, where the agent is encouraged to learn a "
1611.02779#6
1611.02779#8
1611.02779
[ "1511.06295" ]
1611.02779#8
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
fast" reinforcement learning algorithm. For clarity of exposition, we have defined the "inner" problem (of which the agent sees n per trial) to be an MDP rather than a POMDP. However, the method can also be applied in the partially observed setting without any conceptual changes. In the partially observed setting, the agent is faced with a sequence of POMDPs, and it receives an observation $o_t$ instead of state $s_t$ at time t. The visual navigation experiment in Section 3.3 is actually an instance of this POMDP setting.
2.3 POLICY REPRESENTATION
We represent the policy as a general recurrent neural network. Each timestep, it receives the tuple $(s, a, r, d)$ as input, which is embedded using a function $\phi(s, a, r, d)$ and provided as input to an RNN. To alleviate the difficulty of training RNNs due to vanishing and exploding gradients (Bengio et al., 1994), we use Gated Recurrent Units (GRUs) (Cho et al., 2014), which have been demonstrated to have good empirical performance (Chung et al., 2014; Jozefowicz et al., 2015). The output of the GRU is fed to a fully connected layer followed by a softmax function, which forms the distribution over actions. We have also experimented with alternative architectures which explicitly reset part of the hidden state each episode of the sampled MDP, but we did not find any improvement over the simple architecture described above.
2.4 POLICY OPTIMIZATION
After formulating the task as a reinforcement learning problem, we can readily use standard off-the-shelf RL algorithms to optimize the policy. We use a first-order implementation of Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), because of its excellent empirical performance, and because it does not require excessive hyperparameter tuning. For more details, we refer the reader to the original paper. To reduce variance in the stochastic gradient estimation, we use a baseline which is also represented as an RNN using GRUs as building blocks. We optionally apply Generalized Advantage Estimation (GAE) (Schulman et al., 2016) to further reduce the variance.
# 3 EVALUATION
We designed experiments to answer the following questions:
1611.02779#7
1611.02779#9
1611.02779
[ "1511.06295" ]
1611.02779#9
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
• Can RL² learn algorithms that achieve good performance on MDP classes with special structure, relative to existing algorithms tailored to this structure that have been proposed in the literature?
• Can RL² scale to high-dimensional tasks?
For the first question, we evaluate RL² on two sets of tasks, multi-armed bandits (MAB) and tabular MDPs. These problems have been studied extensively in the reinforcement learning literature, and this body of work includes algorithms with guarantees of asymptotic optimality. We demonstrate that our approach achieves comparable performance to these theoretically justified algorithms.
¹ To make sure that the inputs have a consistent dimension, we use placeholder values for the initial input to the policy.
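As a concrete illustration of the formulation above (a recurrent policy that consumes (s, a, r, d) at every step, keeps its hidden state across the episodes of a trial, and uses placeholder initial inputs as in footnote 1), here is a minimal sketch assuming PyTorch. The GRU policy, the toy two-armed bandit environment, and all names are illustrative stand-ins rather than the authors' code, and the outer-loop TRPO training is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RL2Policy(nn.Module):
    def __init__(self, obs_dim, num_actions, hidden=64):
        super().__init__()
        self.num_actions = num_actions
        self.cell = nn.GRUCell(obs_dim + num_actions + 2, hidden)  # +2: reward and done flag
        self.logits = nn.Linear(hidden, num_actions)

    def step(self, obs, prev_action, prev_reward, prev_done, h):
        a = F.one_hot(prev_action, self.num_actions).float()
        x = torch.cat([obs, a, prev_reward.view(1, 1), prev_done.view(1, 1)], dim=1)
        h = self.cell(x, h)
        return torch.distributions.Categorical(logits=self.logits(h)), h


class TwoArmedBandit:
    """Toy stateless environment with horizon 1, for illustration only."""
    def __init__(self, p=(0.2, 0.8)):
        self.p = p

    def reset(self):
        return torch.zeros(1, 1)  # dummy observation

    def step(self, action):
        reward = float(torch.rand(()) < self.p[action])
        return torch.zeros(1, 1), reward, True  # episode ends after one pull


def run_trial(env, policy, episodes_per_trial=2):
    """The hidden state persists across the n episodes of one trial (one fixed MDP)."""
    h = torch.zeros(1, policy.cell.hidden_size)
    a = torch.zeros(1, dtype=torch.long)   # placeholder initial inputs (footnote 1)
    r, d = torch.zeros(1), torch.zeros(1)
    total = 0.0
    for _ in range(episodes_per_trial):
        obs, done = env.reset(), False
        while not done:
            dist, h = policy.step(obs, a, r, d, h)
            a = dist.sample()
            obs, reward, done = env.step(a.item())
            r, d = torch.tensor([float(reward)]), torch.tensor([float(done)])
            total += reward
    return total  # trial objective: total reward over all episodes of the trial


policy = RL2Policy(obs_dim=1, num_actions=2)
print(run_trial(TwoArmedBandit(), policy, episodes_per_trial=5))
```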
1611.02779#8
1611.02779#10
1611.02779
[ "1511.06295" ]
1611.02779#10
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
3.1 MULTI-ARMED BANDITS Multi-armed bandit problems are a subset of MDPs where the agentâ s environment is stateless. Specifically, there are k arms (actions), and at every time step, the agent pulls one of the arms, say i, and receives a reward drawn from an unknown distribution: our experiments take each arm to be a Bernoulli distribution with parameter p;. The goal is to maximize the total reward obtained over a fixed number of time steps. The key challenge is balancing exploration and exploitationâ
1611.02779#9
1611.02779#11
1611.02779
[ "1511.06295" ]
1611.02779#11
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
"exploring" each arm enough times to estimate its distribution ($p_i$), but eventually switching over to "exploitation" of the best arm. Despite the simplicity of multi-armed bandit problems, their study has led to a rich theory and a collection of algorithms with optimality guarantees. Using RL², we can train an RNN policy to solve bandit problems by training it on a given distribution $\rho_{\mathcal{M}}$. If the learning is successful, the resulting policy should be able to perform competitively with the theoretically optimal algorithms. We randomly generated bandit problems by sampling each parameter $p_i$ from the uniform distribution on [0, 1]. After training the RNN policy with RL², we compared it against the following strategies:
• Random: this is a baseline strategy, where the agent pulls a random arm each time.
• Gittins index (Gittins, 1979): this method gives the Bayes optimal solution in the discounted infinite-horizon case, by computing an index separately for each arm, and taking the arm with the largest index. While this work shows it is sufficient to independently compute an index for each arm (hence avoiding combinatorial explosion with the number of arms), it does not show how to tractably compute these individual indices exactly. We follow the practical approximations described in Gittins et al. (2011), Chakravorty & Mahajan (2013), and Whittle (1982), and choose the best-performing approximation for each setup.
• UCB1 (Auer, 2002): this method estimates an upper-confidence bound, and pulls the arm with the largest value of $\mathrm{ucb}_i(t) = \hat{\mu}_i(t-1) + c\sqrt{2\log t / T_i(t-1)}$, where $\hat{\mu}_i(t-1)$ is the estimated mean parameter for the i-th arm, $T_i(t-1)$ is the number of times the i-th arm has been pulled, and c is a tunable hyperparameter (Audibert & Munos, 2011). We initialize the statistics with exactly one success and one failure, which corresponds to a Beta(1, 1) prior.
• Thompson sampling (TS) (Thompson, 1933): this is a simple method which, at each time step, samples a list of arm means from the posterior distribution, and chooses the best arm according to this sample.
1611.02779#10
1611.02779#12
1611.02779
[ "1511.06295" ]
1611.02779#12
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
It has been demonstrated to compare favorably to UCB1 empirically (Chapelle & Li, 2011). We also experiment with an optimistic variant (OTS) (May et al., 2012), which samples N times from the posterior, and takes the one with the highest probability.
• ε-Greedy: in this strategy, the agent chooses the arm with the best empirical mean with probability 1 − ε, and chooses a random arm with probability ε. We use the same initialization as UCB1.
• Greedy: this is a special case of ε-Greedy with ε = 0.
The Bayesian methods, Gittins index and Thompson sampling, take advantage of the distribution $\rho_{\mathcal{M}}$, and we provide these methods with the true distribution. For each method with hyperparameters, we maximize the score with a separate grid search for each of the experimental settings. The hyperparameters used for TRPO are shown in the appendix. The results are summarized in Table 1. Learning curves for various settings are shown in Figure 2. We observe that our approach achieves performance that is almost as good as the reference methods, which were (human) designed specifically to perform well on multi-armed bandit problems. It is worth noting that the published algorithms are mostly designed to minimize asymptotic regret (rather than finite-horizon regret), hence there tends to be a little bit of room to outperform them in the finite-horizon settings.
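To make two of the reference strategies concrete, here is a minimal NumPy sketch of UCB1 (with the one-success/one-failure initialization described above and a tunable scale c) and Thompson sampling for Bernoulli arms. The naming and the evaluation loop are ours, not the authors' evaluation code.

```python
import numpy as np

rng = np.random.default_rng(0)


def ucb1(successes, failures, t, c=1.0):
    pulls = successes + failures
    means = successes / pulls
    bonus = c * np.sqrt(2.0 * np.log(t) / pulls)
    return int(np.argmax(means + bonus))


def thompson(successes, failures, t):
    # Sample a mean from each arm's Beta posterior and pick the best sampled arm.
    return int(np.argmax(rng.beta(successes, failures)))


def run(p, horizon, choose):
    k = len(p)
    s, f = np.ones(k), np.ones(k)  # one success and one failure per arm (Beta(1, 1) prior)
    total = 0.0
    for t in range(1, horizon + 1):
        arm = choose(s, f, t)
        reward = float(rng.random() < p[arm])
        s[arm] += reward
        f[arm] += 1.0 - reward
        total += reward
    return total


p = rng.uniform(0, 1, size=5)  # arm means drawn uniformly on [0, 1], as in the setup
print(run(p, 500, ucb1), run(p, 500, thompson))
```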
1611.02779#11
1611.02779#13
1611.02779
[ "1511.06295" ]
1611.02779#13
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Table 1: MAB Results. Each grid cell records the total reward averaged over 1000 different instances of the bandit problem. We consider k ∈ {5, 10, 50} bandits and n ∈ {10, 100, 500} episodes of interaction. We highlight the best-performing algorithms in each setup according to the computed mean, and we also highlight the other algorithms in that row whose performance is not significantly different from the best one (determined by a one-sided t-test with p = 0.05).
Setup | Random | Gittins | TS | OTS | UCB1 | ε-Greedy | Greedy | RL²
n=10, k=5 | 5.0 | 6.6 | 5.7 | 6.5 | 6.7 | 6.6 | 6.6 | 6.7
n=10, k=10 | 5.0 | 6.6 | 5.5 | 6.2 | 6.7 | 6.6 | 6.6 | 6.7
n=10, k=50 | 5.1 | 6.5 | 5.2 | 5.5 | 6.6 | 6.5 | 6.5 | 6.8
n=100, k=5 | 49.9 | 78.3 | 74.7 | 77.9 | 78.0 | 75.4 | 74.8 | 78.7
n=100, k=10 | 49.9 | 82.8 | 76.7 | 81.4 | 82.4 | 77.4 | 77.1 | 83.5
n=100, k=50 | 49.8 | 85.2 | 64.5 | 67.7 | 84.3 | 78.3 | 78.0 | 84.9
n=500, k=5 | 249.8 | 405.8 | 402.0 | 406.7 | 405.8 | 388.2 | 380.6 | 401.6
n=500, k=10 | 249.0 | 437.8 | 429.5 | 438.9 | 437.1 | 408.0 | 395.0 | 432.5
n=500, k=50 | 249.6 | 463.7 | 427.2 | 437.6 | 457.6 | 413.6 | 402.8 | 438.9
Figure 2 (panels: (a) n = 10, (b) n = 100, (c) n = 500; y-axis: normalized total reward):
1611.02779#12
1611.02779#14
1611.02779
[ "1511.06295" ]
1611.02779#14
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
RL² learning curves for multi-armed bandits. Performance is normalized such that the Gittins index scores 1, and the random policy scores 0.
We observe that there is a noticeable gap between the Gittins index and RL² in the most challenging scenario, with 50 arms and 500 episodes. This raises the question whether better architectures or better (slow) RL algorithms should be explored. To determine the bottleneck, we trained the same policy architecture using supervised learning, using the trajectories generated by the Gittins index approach as training data. We found that the learned policy, when executed in test domains, achieved the same level of performance as the Gittins index approach, suggesting that there is room for improvement by using better RL algorithms.
3.2 TABULAR MDPs
The bandit problem provides a natural and simple setting to investigate whether the policy learns to trade off between exploration and exploitation. However, the problem itself involves no sequential decision making, and does not fully characterize the challenges in solving MDPs. Hence, we perform further experiments using randomly generated tabular MDPs, where there is a finite number of possible states and actions, small enough that the transition probability distribution can be explicitly given as a table. We compare our approach with the following methods:
• Random: the agent chooses an action uniformly at random for each time step;
• PSRL (Strens, 2000; Osband et al., 2013): this is a direct generalization of Thompson sampling to MDPs, where at the beginning of each episode, we sample an MDP from the posterior distribution, and take actions according to the optimal policy for the entire episode. Similarly, we include an optimistic variant (OPSRL), which has also been explored in Osband & Van Roy (2016).
• BEB (Kolter & Ng, 2009): this is a model-based optimistic algorithm that adds an exploration bonus to (thus far) infrequently visited states and actions.
1611.02779#13
1611.02779#15
1611.02779
[ "1511.06295" ]
1611.02779#15
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
e «-Greedy: this algorithm takes actions optimal against the MAP estimate according to the current posterior, which is updated once per episode. e Greedy: a special case of e-Greedy with « = 0. # Table 2: Random MDP Results Setup Random PSRL OPSRL UCRL2 BEB «-Greedy Greedy RL? n=10 100.1 138.1 144.1 146.6 150.2 132.8 134.8 156.2 n=25 250.2 408.8 425.2 424.1 427.8 377.3 368.8 445.7 n=50 499.7 904.4 930.7 918.9 917.8 823.3 769.3 936.1 n=75 749.9 1417.1 1449.2 1427.6 1422.6 1293.9 1172.9 1428.8 n=100 999.4 1939.5 1973.9 1942.1 1935.1 1778.2 1578.5 1913.7 The distribution over MDPs is constructed with |S| = 10, |A| = 5. The rewards follow a Gaus- sian distribution with unit variance, and the mean parameters are sampled independently from Normal(1,1). The transitions are sampled from a flat Dirichlet distribution. This construction matches the commonly used prior in Bayesian RL methods. We set the horizon for each episode to be T = 10, and an episode always starts on the first state. g a E 5 z 0 1000 5000 Iteration Figure 3: RL? learning curves for tabular MDPs. Performance is normalized such that OPSRL scores 1, and random policy scores 0. The results are summarized in Table 2, and the learning curves are shown in Figure 3. We follow the same evaluation procedure as in the bandit case. We experiment with n ⠬ {10, 25,50, 75, 100}. For fewer episodes, our approach surprisingly outperforms existing methods by a large margin.
1611.02779#14
1611.02779#16
1611.02779
[ "1511.06295" ]
1611.02779#16
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
The advantage is reversed as n increases, suggesting that the reinforcement learning problem in the outer loop becomes more challenging to solve. We think that the advantage for small n comes from the need for more aggressive exploitation: there are 140 degrees of freedom to estimate in order to characterize the MDP, and by the 10th episode we will not have enough samples to form a good estimate of the entire dynamics. By directly optimizing the RNN in this setting, our approach should be able to cope with this shortage of samples, deciding to exploit sooner than the reference algorithms.
1611.02779#15
1611.02779#17
1611.02779
[ "1511.06295" ]
1611.02779#17
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
3.3 VISUAL NAVIGATION
The previous two tasks both involve only very low-dimensional state spaces. To evaluate the feasibility of scaling up RL², we further experiment with a challenging vision-based task, where the agent is asked to navigate a randomly generated maze to find a randomly placed target². The agent receives a +1 reward when it reaches the target, −0.001 when it hits a wall, and −0.04 per time step to encourage it to reach targets faster. It can interact with the maze for multiple episodes, during which the maze structure and target position are held fixed. The optimal strategy is to explore the maze efficiently during the first episode, and after locating the target, act optimally against the current maze and target based on the collected information. An illustration of the task is given in Figure 4.
Figure 4 (panels: (a) sample observation, (b) layout of the 5 × 5 maze in (a), (c) layout of a 9 × 9 maze): Visual navigation. The target block is shown in red, and occupies an entire grid cell in the maze layout.
Visual navigation alone is a challenging task for reinforcement learning. The agent only receives very sparse rewards during training, and does not have the primitives for efficient exploration at the beginning of training. It also needs to make efficient use of memory to decide how it should explore the space, without forgetting about where it has already explored. Previously, Oh et al. (2016) have studied similar vision-based navigation tasks in Minecraft. However, they use higher-level actions for efficient navigation. Similar high-level actions in our task would each require around 5 low-level actions combined in the right way. In contrast, our RL² agent needs to learn these higher-level actions from scratch. We use a simple training setup, where we use small mazes of size 5 × 5, with 2 episodes of interaction, each with horizon up to 250. Here the size of the maze is measured by the number of grid cells along each wall in a discrete representation of the maze. During each trial, we sample 1 out of 1000 randomly generated configurations of map layout and target positions. During testing, we evaluate on 1000 separately generated configurations.
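The reward structure just described can be summarized in a small sketch. We assume, for illustration only, that the per-step penalty, the wall penalty, and the target bonus combine additively in a single step; the function and argument names are ours.

```python
def navigation_reward(reached_target: bool, hit_wall: bool) -> float:
    """Per-step reward for the visual navigation task as described in the text."""
    reward = -0.04          # per-time-step penalty, encouraging fast goal-reaching
    if hit_wall:
        reward += -0.001    # small penalty for bumping into a wall
    if reached_target:
        reward += 1.0       # +1 for reaching the target
    return reward


print(navigation_reward(reached_target=False, hit_wall=True))   # -0.041
print(navigation_reward(reached_target=True, hit_wall=False))   # 0.96
```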
1611.02779#16
1611.02779#18
1611.02779
[ "1511.06295" ]
1611.02779#18
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
In addition, we also study its extrapolation behavior along two axes, by (1) testing on large mazes of size 9 × 9 (see Figure 4c) and (2) running the agent for up to 5 episodes in both small and large mazes. For the large maze, we also increase the horizon per episode by 4× due to the increased size of the maze.
Table 3: Results for visual navigation. These metrics are computed using the best run among all runs shown in Figure 5. In 3c, we measure the proportion of mazes where the trajectory length in the second episode does not exceed the trajectory length in the first episode.
(a) Average length of successful trajectories
Episode | Small | Large
1 | 1.3 | 180.1 ± 6.0
2 | 0.9 | 151.8 ± 5.9
3 | 1.0 | 169. ± 6.3
4 | 1.1 | 162. ± 6.4
5 | 1.1 | 169.3 ± 6.5
(b) %Success
Episode | Small | Large
1 | 99.3% | 97.1%
2 | 99.6% | 96.7%
3 | 99.7% | 95.8%
4 | 99.4% | 95.6%
5 | 99.6% | 96.1%
(c) %Improved
Small | Large
91.7% | 71.4%
² Videos for the task are available at https://goo.gl/rDDBpb.
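A minimal sketch (with our own names, not the authors' evaluation code) of how the %Success and %Improved metrics defined in the Table 3 caption could be computed from per-maze episode outcomes:

```python
import numpy as np


def percent_success(reached_target):
    """reached_target: bool array of shape (num_mazes, num_episodes); per-episode success rate."""
    return 100.0 * np.asarray(reached_target).mean(axis=0)


def percent_improved(traj_lengths):
    """traj_lengths: array (num_mazes, num_episodes); fraction of mazes where the
    second-episode trajectory is no longer than the first-episode trajectory."""
    lengths = np.asarray(traj_lengths, dtype=float)
    return 100.0 * np.mean(lengths[:, 1] <= lengths[:, 0])


# Toy usage on two mazes with two episodes each:
print(percent_success([[True, True], [False, True]]))  # [50. 100.]
print(percent_improved([[120, 60], [80, 95]]))          # 50.0
```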
1611.02779#17
1611.02779#19
1611.02779
[ "1511.06295" ]
1611.02779#19
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Figure 5 (x-axis: iteration; y-axis: total reward): RL² learning curves for visual navigation. Each curve shows a different random initialization of the RNN weights. Performance varies greatly across different initializations.
The results are summarized in Table 3, and the learning curves are shown in Figure 5. We observe that there is a significant reduction in trajectory lengths between the first two episodes in both the smaller and larger mazes, suggesting that the agent has learned how to use information from past episodes. It also achieves reasonable extrapolation behavior in further episodes by maintaining its performance, although there is a small drop in the rate of success in the larger mazes. We also observe that on larger mazes, the ratio of improved trajectories is lower, likely because the agent has not learned how to act optimally in the larger mazes. Still, even on the small mazes, the agent does not learn to perfectly reuse prior information. An illustration of the agent's behavior is shown in Figure 6. The intended behavior, which occurs most frequently, as shown in 6a and 6b, is that the agent should remember the target's location and utilize it to act optimally in the second episode. However, occasionally the agent forgets where the target was and continues to explore in the second episode, as shown in 6c and 6d. We believe that better reinforcement learning techniques used as the outer-loop algorithm will improve these results in the future.
Figure 6 (panels: (a) good behavior, 1st episode; (b) good behavior, 2nd episode; (c) bad behavior, 1st episode; (d) bad behavior, 2nd episode): Visualization of the agent's behavior. In each scenario, the agent starts at the center of the blue block, and the goal is to reach anywhere in the red block.
# 4 RELATED WORK
The concept of using prior experience to speed up reinforcement learning algorithms has been explored in the past in various forms.
1611.02779#18
1611.02779#20
1611.02779
[ "1511.06295" ]
1611.02779#20
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Earlier studies have investigated automatic tuning of hyperparameters, such as learning rate and temperature (Ishii et al., 2002; Schweighofer & Doya, 2003), as a form of meta-learning. Wilson et al. (2007) use hierarchical Bayesian methods to maintain a posterior over possible models of dynamics, and apply optimistic Thompson sampling according to the posterior. Many works in hierarchical reinforcement learning propose to extract reusable skills from previous tasks to speed up exploration in new tasks (Singh, 1992; Perkins et al., 1999). We refer the reader to Taylor & Stone (2009) for a more thorough survey on the multi-task and transfer learning aspects.
1611.02779#19
1611.02779#21
1611.02779
[ "1511.06295" ]
1611.02779#21
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Another line of work (Hochreiter et al., 2001; Younger et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2016) studies meta-learning over the optimization process. There, the meta-learner makes explicit updates to a parametrized model. In comparison, we do not use a directly parametrized policy; instead, the recurrent neural network agent acts as the meta-learner and the resulting policy simultaneously. Our formulation essentially constructs a partially observable MDP (POMDP) which is solved in the outer loop, where the underlying MDP is unobserved by the agent. This reduction of an unknown MDP to a POMDP can be traced back to dual control theory (Feldbaum, 1960), where â dualâ refers to the fact that one is controlling both the state and the state estimate.
1611.02779#20
1611.02779#22
1611.02779
[ "1511.06295" ]
1611.02779#22
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Feldbaum pointed out that the solution can in principle be computed with dynamic programming, but doing so is usually im- practical. POMDPs with such structure have also been studied under the name â mixed observability MDPsâ (Ong et al., 2010). However, the method proposed there suffers from the usual challenges of solving POMDPs in high dimensions. # 5 DISCUSSION This paper suggests a different approach for designing better reinforcement learning algorithms: instead of acting as the designers ourselves, learn the algorithm end-to-end using standard rein- forcement learning techniques.
1611.02779#21
1611.02779#23
1611.02779
[ "1511.06295" ]
1611.02779#23
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
That is, the â fastâ RL algorithm is a computation whose state is stored in the RNN activations, and the RNNâ s weights are learned by a general-purpose â slowâ re- inforcement learning algorithm. Our method, RL?, has demonstrated competence comparable with theoretically optimal algorithms in small-scale settings. We have further shown its potential to scale to high-dimensional tasks. In the experiments, we have identified opportunities to improve upon RL?: the outer-loop reinforce- ment learning algorithm was shown to be an immediate bottleneck, and we believe that for settings with extremely long horizons, better architecture may also be required for the policy. Although we have used generic methods and architectures for the outer-loop algorithm and the policy, doing this also ignores the underlying episodic structure. We expect algorithms and policy architectures that exploit the problem structure to significantly boost the performance.
1611.02779#22
1611.02779#24
1611.02779
[ "1511.06295" ]
1611.02779#24
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
ACKNOWLEDGMENTS We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers.
1611.02779#23
1611.02779#25
1611.02779
[ "1511.06295" ]
1611.02779#25
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
# REFERENCES
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint
1611.02779#24
1611.02779#26
1611.02779
[ "1511.06295" ]
1611.02779#26
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397-422, 2002. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. JEEE transactions on neural networks, 5(2):157-166, 1994. Sébastien Bubeck and Nicolo Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi- armed bandit problems. arXiv preprint arXiv: 1204.5721, 2012. Jhelum Chakravorty and Aditya Mahajan. Multi-armed bandits, gittins index, and its calculation.
1611.02779#25
1611.02779#27
1611.02779
[ "1511.06295" ]
1611.02779#27
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Methods and Applications of Statistics in Clinical Trials: Planning, Analysis, and Inferential Methods, 2:416-435, 2013.
Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pp. 2249-2257, 2011.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
Marc Deisenroth and Carl E Rasmussen.
1611.02779#26
1611.02779#28
1611.02779
[ "1511.06295" ]
1611.02779#28
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465-472, 2011.
Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088, 2016.
Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories.
1611.02779#27
1611.02779#29
1611.02779
[ "1511.06295" ]
1611.02779#29
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594-611, 2006.
A. A. Feldbaum. Dual control theory. I. Avtomatika i Telemekhanika, 21(9):1240-1249, 1960.
Justin Fu, Sergey Levine, and Pieter Abbeel. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. arXiv preprint arXiv:1509.06841, 2015.
Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, Aviv Tamar, et al. Bayesian reinforcement learning: a survey. World Scientific, 2015.
John Gittins, Kevin Glazebrook, and Richard Weber. Multi-armed bandit allocation indices. John Wiley & Sons, 2011.
John C Gittins.
1611.02779#28
1611.02779#30
1611.02779
[ "1511.06295" ]
1611.02779#30
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B (Methodological), pp. 148-177, 1979.
Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L Lewis, and Xiaoshi Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems, pp. 3338-3346, 2014.
Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2944-2952, 2015.
Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87-94. Springer, 2001.
1611.02779#29
1611.02779#31
1611.02779
[ "1511.06295" ]
1611.02779#31
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Shin Ishii, Wako Yoshida, and Junichiro Yoshimoto. Control of exploitation-exploration meta-parameter in reinforcement learning. Neural Networks, 15(4):665-687, 2002.
Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.
Rafał Józefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 2342-2350, 2015. URL http://jmlr.org/proceedings/papers/v37/jozefowicz15.html.
Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski.
1611.02779#30
1611.02779#32
1611.02779
[ "1511.06295" ]
1611.02779#32
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
Gregory Koch. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.
J Zico Kolter and Andrew Y Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 513-520. ACM, 2009.
Brenden M Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B Tenenbaum.
1611.02779#31
1611.02779#33
1611.02779
[ "1511.06295" ]
1611.02779#33
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, volume 172, pp. 2, 2011.
Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. Zero-data learning of new tasks. In AAAI, volume 1, pp. 3, 2008.
Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pp. 1071-1079, 2014.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1-40, 2016.
Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Benedict C May, Nathan Korda, Anthony Lee, and David S Leslie. Optimistic Bayesian sampling in contextual-bandit problems. Journal of Machine Learning Research, 13(Jun):2069-2106, 2012.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
1611.02779#32
1611.02779#34
1611.02779
[ "1511.06295" ]
1611.02779#34
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. arXiv preprint arXiv:1605.09128, 2016.
Sylvie CW Ong, Shao Wei Png, David Hsu, and Wee Sun Lee. Planning under uncertainty for robotic tasks with mixed observability. The International Journal of Robotics Research, 29(8):1053-1068, 2010.
Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning. arXiv preprint arXiv:1607.00215, 2016.
Ian Osband, Dan Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, pp. 3003-3011, 2013.
Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov.
1611.02779#33
1611.02779#35
1611.02779
[ "1511.06295" ]
1611.02779#35
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015.
Theodore J Perkins, Doina Precup, et al. Using options for knowledge transfer in reinforcement learning. University of Massachusetts, Amherst, MA, USA, Tech. Rep, 1999.
Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016a.
Andrei A Rusu, Matej Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286, 2016b.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel.
1611.02779#34
1611.02779#36
1611.02779
[ "1511.06295" ]