Dataset columns (name: type, min–max string length or value range):
doi: string, 10–10
chunk-id: int64, 0–936
chunk: string, 401–2.02k
id: string, 12–14
title: string, 8–162
summary: string, 228–1.92k
source: string, 31–31
authors: string, 7–6.97k
categories: string, 5–107
comment: string, 4–398
journal_ref: string, 8–194
primary_category: string, 5–17
published: string, 8–8
updated: string, 8–8
references: list
1611.05763
46
# 3.2.2 LEARNING ABSTRACT TASK STRUCTURE
In the final experiment we conducted, we took a step towards examining the scalability of meta-RL by studying a task that involves rich visual inputs, longer time horizons and sparse rewards. Additionally, in this experiment we studied a meta-learning task that requires the system to tune into an abstract task structure, in which a series of objects play defined roles that the system must infer.
1611.05763#46
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
47
The task was adapted from a classic study of animal behavior, conducted by Harlow (1949). On each trial in the original task, Harlow presented a monkey with two visually contrasting objects. One of these covered a small well containing a morsel of food; the other covered an empty well. The animal chose freely between the two objects and could retrieve the food reward if present. The stage was then hidden and the left-right positions of the objects were randomly reset. A new trial then began, with the animal again choosing freely. This process continued for a set number of trials using the same two objects. At completion of this set of trials, two entirely new and unfamiliar objects were substituted for the original two, and the process began again. Importantly, within each block of trials, one object was chosen to be consistently rewarded (regardless of its left-right position), with the other being consistently unrewarded. What Harlow (1949) observed was that, after substantial practice, monkeys displayed behavior that reflected an understanding of the task’s rules. When two new objects were presented, the monkey’s first choice between them was necessarily arbitrary. But after observing the outcome of this first choice, the monkey was at ceiling thereafter, always choosing the rewarded object.
[Figure 5 panel titles: (a) Two-step task; (b) Model predictions]
1611.05763#47
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
49
[Figure 5, panel (c) “LSTM A2C with reward input”: choice probabilities grouped by last-trial outcome (rewarded vs. not rewarded) and last-trial transition (common vs. rare).]
Figure 5: Three-state MDP modeled after the “two-step task” from Daw et al. (2011). (a) MDP with 3 states and 2 actions. All trials start in state S1, with transition probabilities after taking actions a1 or a2 depicted in the graph. S2 and S3 result in expected rewards ra and rb (see text). (b) Predictions of choice probabilities given either a model-based strategy or a model-free strategy (Daw et al., 2011). Specifically, model-based strategies take into account transition probabilities and would predict an interaction between the amount of reward received on the last trial and the transition (common or uncommon) observed. (c) Agent displays a perfectly model-based profile when given the reward as input.
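The caption describes the two-step MDP's structure but defers the transition probabilities and the reward parameters ra and rb to the main text. As a rough illustration only, the Python sketch below implements a generic Daw-style two-step task; the common-transition probability of 0.8 and the use of Bernoulli reward probabilities for ra and rb are assumptions standing in for the paper's actual settings.

```python
import numpy as np


class TwoStepMDP:
    """Minimal Daw-style two-step task: S1 -> {S2, S3} -> Bernoulli reward.

    Assumptions (not taken from the paper): common transitions occur with
    probability p_common = 0.8, and r_a / r_b are reward probabilities
    attached to S2 / S3 rather than arbitrary expected rewards.
    """

    def __init__(self, r_a=0.9, r_b=0.1, p_common=0.8, rng=None):
        self.r_a, self.r_b, self.p_common = r_a, r_b, p_common
        self.rng = rng or np.random.default_rng()

    def step(self, action):
        """action 0 (a1) commonly leads to S2; action 1 (a2) commonly to S3."""
        common = self.rng.random() < self.p_common
        if action == 0:
            state = 2 if common else 3
        else:
            state = 3 if common else 2
        p_reward = self.r_a if state == 2 else self.r_b
        reward = float(self.rng.random() < p_reward)
        return state, reward, common
```

The model-based signature plotted in panel (b) is an interaction effect: after a rewarded trial, a model-based agent repeats its first-stage choice more often when the transition was common than when it was rare, whereas a purely model-free agent repeats it regardless of transition type.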
1611.05763#49
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
50
We anticipated that meta-RL should give rise to the same pattern of abstract one-shot learning. In order to test this, we adapted Harlow’s paradigm into a visual fixation task, as follows. An 84x84 pixel input represented a simulated computer screen (see Figure 6a-c). At the beginning of each trial, this display was blank except for a small central fixation cross (red crosshairs). The agent selected discrete left-right actions which shifted its view approximately 4.4 degrees in the corresponding direction, with a small momentum effect (alternatively, a no-op action could be selected). The completion of a trial required performing two tasks: saccading to the central fixation cross, followed by saccading to the correct image. If the agent held the fixation cross in the center of the field of view (within a tolerance of 3.5 degrees visual angle) for a minimum of four time steps, it received a reward of 0.2. The fixation cross then disappeared and two images – drawn randomly from the ImageNet dataset (Deng et al., 2009) and resized to 34x34 – appeared on the left and right side of the display (Figure 6b). The
1611.05763#50
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
51
from the ImageNet dataset (Deng et al., 2009) and resized to 34x34 – appeared on the left and right side of the display (Figure 6b). The agent’s task was then to “select” one of the images by rotating until the center of the image aligned with the center of the visual field of view (within a tolerance of 7 degrees visual angle). Once one of the images was selected, both images disappeared and, after an intertrial interval of 10 time-steps, the fixation cross reappeared, initiating the next trial. Each episode contained a maximum of 10 trials or 3600 steps. Following Mirowski et al. (2016), we implemented an action repeat of 4, meaning that selecting an image took a minimum of three independent decisions (twelve primitive actions) after having completed the fixation. It should be noted, however, that the rotational position of the agent was not limited; that is, 360 degree rotations could occur, while the simulated computer screen only subtended 65 degrees.
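To make the trial mechanics described above concrete, here is a simplified Python sketch of the within-trial control loop. The per-action rotation of roughly 4.4 degrees, the fixation tolerance (3.5 degrees held for four steps, reward 0.2) and the selection tolerance (7 degrees) come from the text; the angular image offsets, the omission of the momentum effect and of the action repeat, and all function and constant names are illustrative assumptions, not the paper's implementation.

```python
FIX_TOL_DEG, FIX_HOLD_STEPS, FIX_REWARD = 3.5, 4, 0.2
SEL_TOL_DEG, STEP_DEG = 7.0, 4.4
IMG_OFFSETS_DEG = (-15.0, 15.0)   # assumed angular positions of the two images


def run_trial(policy, max_steps=400):
    """One trial: hold the fixation cross (taken to be at 0 deg) for four steps,
    then rotate until one image is centered. Returns (selected_side,
    fixation_reward); the selection reward itself is assigned at the episode
    level (see the episode sketch further below)."""
    angle, held, phase = 0.0, 0, "fixation"
    fixation_reward = 0.0
    for _ in range(max_steps):
        action = policy(angle, phase)                 # -1 = left, 0 = no-op, +1 = right
        angle = (angle + action * STEP_DEG) % 360.0   # full 360-degree rotation allowed
        signed = (angle + 180.0) % 360.0 - 180.0      # wrap to [-180, 180)
        if phase == "fixation":
            held = held + 1 if abs(signed) <= FIX_TOL_DEG else 0
            if held >= FIX_HOLD_STEPS:
                fixation_reward = FIX_REWARD
                phase = "selection"
        else:
            for side, offset in enumerate(IMG_OFFSETS_DEG):
                if abs(signed - offset) <= SEL_TOL_DEG:
                    return side, fixation_reward      # 0 = left image, 1 = right image
    return None, fixation_reward                      # trial timed out
```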
1611.05763#51
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
52
Although new ImageNet images were chosen at the beginning of each episode (sampled with replacement from a set of 1000 images), the same images were re-used across all trials within an episode, though in randomly varying left-right placement, similar to the objects in Harlow’s experiment. And as in that experiment, one image was arbitrarily chosen to be the “rewarded” image throughout the episode. Selection of this image yielded a reward of 1.0, while the other image yielded a reward of -1.0. During test, the A3C learning rate was set to zero and ImageNet images were drawn from a separate held-out set of 1000, never presented during training. A grid search was conducted for optimal hyperparameters. At perfect performance, agents can complete one trial per 20-30 steps and achieve a maximum expected reward of 9 per 10 trials. Given
[Figure 6 panels: (a) Fixation; (b) Image display; (c) Right saccade and selection; (d) Training performance; (e) Robustness over random seeds; (f) One-shot learning]
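The episode-level structure just described (two images drawn per episode, one arbitrarily designated as rewarded, left-right placement reshuffled each trial) can be sketched as follows. `agent_choose` and `image_pool` are hypothetical stand-ins, and the within-trial selection mechanics are assumed to be handled elsewhere, e.g. as in the earlier trial sketch.

```python
import random


def run_harlow_episode(agent_choose, image_pool, n_trials=10, rng=None):
    """Sketch of one episode: two images drawn (with replacement) from the pool,
    one arbitrarily designated as rewarded (+1.0 vs. -1.0), and left/right
    placement re-randomized on every trial, mirroring Harlow's design."""
    rng = rng or random.Random()
    img_a, img_b = rng.choice(image_pool), rng.choice(image_pool)
    rewarded = rng.choice([img_a, img_b])
    rewards = []
    for _ in range(n_trials):
        left, right = (img_a, img_b) if rng.random() < 0.5 else (img_b, img_a)
        chosen = agent_choose(left, right)   # agent returns whichever image it selected
        rewards.append(1.0 if chosen == rewarded else -1.0)
    return rewards   # at ceiling, the expected total is 9 over 10 trials (first choice is a guess)
```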
1611.05763#52
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
53
[Figure 6 axis labels recovered from the plots: reward/trial vs. episodes/thread (panel d); seed rank 0–100 (panel e); trial number 1–10 (panel f); a “Random” baseline is indicated.]
Figure 6: Learning abstract task structure in visually rich 3D environment. a-c) Example of a single trial, beginning with a central fixation, followed by two images with random left-right placement. d) Average performance (measured in average reward per trial) of top 40 out of 100 seeds during training. Maximum expected performance is indicated with black dashed line. e) Performance at episode 100,000 for 100 random seeds, in decreasing order of performance. f) Probability of selecting the rewarded image, as a function of trial number for a single A3C stacked LSTM agent for a range of training durations (episodes per thread, 32 threads).
the nature of the task – which requires one-shot image-reward memory together with maintenance of this information over a relatively long timescale (i.e. over fixation-cross selections and across trials) – we assessed the performance of not only a convolutional-LSTM architecture which receives reward and action as additional input (see Figure 1b and Table 1), but also a convolutional-stacked LSTM architecture used in a navigation task discussed below (see Figure 1c).
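The two architectures referred to in the final sentence share the key ingredient of feeding the previous reward and previous action back into the recurrent core alongside the visual features. A minimal PyTorch sketch of the single-LSTM variant (Figure 1b) is given below; the convolutional stack, layer sizes and head structure are illustrative assumptions rather than the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentA3CCore(nn.Module):
    """Conv encoder + LSTM whose input concatenates the image features with the
    previous action (one-hot) and previous reward. Layer sizes here are
    illustrative, not the paper's exact architecture."""

    def __init__(self, n_actions, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * 9 * 9, 256)          # for 84x84 inputs
        self.lstm = nn.LSTMCell(256 + n_actions + 1, hidden)
        self.policy = nn.Linear(hidden, n_actions)    # actor head
        self.value = nn.Linear(hidden, 1)             # critic head
        self.n_actions = n_actions

    def forward(self, obs, prev_action, prev_reward, state):
        x = self.conv(obs).flatten(1)
        x = F.relu(self.fc(x))
        a = F.one_hot(prev_action, self.n_actions).float()
        x = torch.cat([x, a, prev_reward.unsqueeze(-1)], dim=-1)
        h, c = self.lstm(x, state)
        return self.policy(h), self.value(h), (h, c)
```

The stacked variant (Figure 1c) adds a second LSTM on top of the first; its exact input wiring follows the paper's Figure 1c rather than anything shown here.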
1611.05763#53
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
54
Agent performance is illustrated in Figure 6d-f. Whilst the single LSTM agent was relatively successful at solving the task, the stacked-LSTM variant exhibited much better robustness. That is, 43% of random seeds of the best hyperparameter set performed at ceiling (Figure 6e), compared to 26% of the single LSTM. Like the monkeys in Harlow’s experiment (Harlow, 1949), the networks converge on an optimal policy: Not only does the agent successfully fixate to begin each trial, but starting on the second trial of each episode it invariably selects the rewarded image, regardless of which image it selected on the first trial (Figure 6f). This is an impressive form of one-shot learning, reflecting an implicit understanding of the task structure: After observing one trial outcome, the agent binds a complex, unfamiliar image to a specific task role.
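The one-shot learning signature in Figure 6f is simply the probability of a correct choice at each within-episode trial position, averaged over episodes. A small sketch of that analysis, assuming choices have been logged as a boolean array (the array layout and function name are assumptions):

```python
import numpy as np


def per_trial_accuracy(correct):
    """correct: bool array of shape (n_episodes, n_trials), where entry [e, t]
    is True if the rewarded image was chosen on trial t of episode e. Returns
    the probability of a correct choice at each trial position (cf. Figure 6f)."""
    correct = np.asarray(correct, dtype=float)
    return correct.mean(axis=0)
```

One-shot learning shows up as roughly 0.5 accuracy on trial 1 (a forced guess) and near 1.0 from trial 2 onward.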
1611.05763#54
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
55
Further experiments, reported elsewhere (Wang et al., 2017), confirmed that the same recurrent A3C system is also able to solve a substantially more difficult version of the task. In this task, only one image – which was randomly designated to be either the rewarding item to be selected, or the unrewarding item to be avoided – was presented on every trial during an episode, while the other image presented was novel on every trial.
3.2.3 ONE-SHOT NAVIGATION
The experiments using the Harlow task demonstrate the capacity of meta-RL to operate effectively within a visually rich environment, with relatively long time horizons. Here we consider related experiments recently reported within the navigation domain (Mirowski et al., 2016) (see also Jaderberg et al., 2016), and discuss how these can be recast as examples of meta-RL – attesting to the scalability of this principle to more typical MDP settings that pose challenging RL problems due to dynamically changing sparse rewards.
1611.05763#55
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
57
[Figure 7, panels (c) Performance and (d) Value function: value function (0.0–1.0) plotted against time step in episode (100–800), with curves for FF A3C and Nav A3C.]
Figure 7: a) View of the I-maze showing the goal object in one of the 4 alcoves. b) Following initial exploration (light trajectories), the agent repeatedly goes to the goal (blue trajectories). c) Performance of stacked LSTM (termed “Nav A3C”) and feedforward (“FF A3C”) architectures, per episode (goal = 10 points), averaged across top 5 hyperparameters. d) Following initial goal discovery (goal hits marked in red), the value function rises well in advance of the agent seeing the goal, which is hidden in an alcove. Figure used with permission from Mirowski et al. (2016).
1611.05763#57
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
58
Specifically, we consider a setting where the environment layout is fixed but the goal changes location randomly each episode (Figure 7; Mirowski et al., 2016). Although the layout is relatively simple, the Labyrinth environment (see Mirowski et al., 2016 for details) is richer and more finely discretized (cf. VizDoom), resulting in long time horizons; a trained agent takes approximately 100 steps (10 seconds) to reach the goal for the first time in a given episode. Results show that a stacked LSTM architecture (Figure 1c) that receives reward and action as additional inputs, equivalent to that used in our Harlow experiment, achieves near-optimal behavior – showing one-shot memory for the goal location after an initial exploratory period, followed by repeated exploitation (see Figure 7c). This is evidenced by a substantial decrease in latency to reach the goal for the first time (~100 timesteps) compared to subsequent visits (~30 timesteps). Notably, a feedforward network (see Figure 7c), which receives only a single image as observation, is unable to solve the task (i.e. no decrease in
1611.05763#58
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
59
feedforward network (see Figure 7c), which receives only a single image as observation, is unable to solve the task (i.e. no decrease in latency between successive goal rewards). Whilst not interpreted as such in Mirowski et al. (2016), this provides a clear demonstration of the effectiveness of meta-RL: a separate RL algorithm with the capability of one-shot learning emerges through training with a fixed and more incremental RL algorithm (i.e. policy gradient). Meta-RL can be viewed as allowing the agent to infer the optimal value function following initial exploration (see Figure 7d) – with the additional LSTM providing information about the currently relevant goal location to the LSTM that outputs the policy over the extended timeframe of the episode. Taken together, meta-RL allows a base model-free RL algorithm to solve a challenging RL problem that might otherwise require fundamentally different approaches (e.g. based on successor representations or fully model-based RL).
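The behavioral signature described in this passage is the gap between the latency to the first goal hit of an episode (roughly 100 steps) and the latency between subsequent hits (roughly 30 steps). A small sketch of that comparison, assuming goal-hit time steps have been logged per episode (the variable names and logging format are assumptions):

```python
import numpy as np


def goal_latency_summary(goal_hit_steps):
    """goal_hit_steps: list of per-episode lists of time steps at which the goal
    was reached. Returns (mean latency to the first hit, mean latency between
    later hits); a large gap is evidence of one-shot memory for the goal location."""
    first_latencies, later_latencies = [], []
    for hits in goal_hit_steps:
        if not hits:
            continue
        first_latencies.append(hits[0])
        later_latencies.extend(np.diff(hits))
    mean_first = float(np.mean(first_latencies)) if first_latencies else float("nan")
    mean_later = float(np.mean(later_latencies)) if later_latencies else float("nan")
    return mean_first, mean_later
```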
1611.05763#59
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
60
# 4 RELATED WORK
We have already touched on the relationship between deep meta-RL and pioneering work by Hochreiter et al. (2001) using recurrent networks to perform meta-learning in the setting of full supervision (see also Cotter and Conwell, 1990; Prokhorov et al., 2002; Younger et al., 1999). That approach was recently extended in Santoro et al. (2016), which demonstrated the utility of leveraging an external memory structure. The idea of crossing meta-learning with reinforcement learning has been previously discussed by Schmidhuber et al. (1996). That work, which appears to have introduced the term “meta-RL,” differs from ours in that it did not involve a neural network implementation. More recently, however, there has been a surge of interest in using neural networks to learn optimization procedures, using a range of innovative meta-learning techniques (Andrychowicz et al., 2016; Chen et al., 2016; Li and Malik, 2016; Zoph and Le, 2016). Recent work by Chen et al. (2016) is particularly close in spirit to the work we have presented here, and can be viewed as treating the case of “infinite bandits” using a meta-learning strategy broadly analogous to the one we have pursued.
1611.05763#60
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
61
The present research also bears a close relationship with a different body of recent work that has not been framed in terms of meta-learning. A number of studies have used deep RL to train recurrent neural networks on navigation tasks, where the structure of the task (e.g., goal location or maze configuration) varies across episodes (Jaderberg et al., 2016; Mirowski et al., 2016). The final experiment that we presented above, drawn from (Mirowski et al., 2016), is one example. To the extent that such experiments involve the key ingredients of deep meta-RL – a neural network with memory, trained through RL on a series of interrelated tasks – they are almost certain to involve the kind of meta-learning we have described in the present work. This related work provides an indication that meta-RL can be fruitfully applied to larger scale problems than the ones we have studied in our own experiments. Importantly, it indicates that a key ingredient in scaling the approach may be to incorporate memory mechanisms beyond those inherent in unstructured recurrent neural networks (see Graves et al., 2016; Mirowski et al., 2016; Santoro et al., 2016; Weston et al.,
1611.05763#61
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
63
During completion of the present research, closely related work was reported by Duan et al. (2016). Like us, Duan and colleagues use deep RL to train a recurrent network on a series of interrelated tasks, with the result that the network dynamics learn a second RL procedure which operates on a faster time-scale than the original algorithm. They compare the performance of these learned procedures against conventional RL algorithms in a number of domains, including bandits and navigation. An important difference between this parallel work and our own is the former’s primary focus on relatively unstructured task distributions (e.g., uniformly distributed bandit problems and random MDPs); our main interest, in contrast, has been in structured task distributions (e.g., dependent bandits and the task introduced by Harlow, 1949), because it is in this setting where the system can learn a biased – and therefore efficient – RL procedure that exploits regular task structure. The two perspectives are, in this regard, nicely complementary. # 5 CONCLUSION
1611.05763#63
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
64
A current challenge in artificial intelligence is to design agents that can adapt rapidly to new tasks by leveraging knowledge acquired through previous experience with related activities. In the present work we have reported initial explorations of what we believe is one promising avenue toward this goal. Deep meta-RL involves a combination of three ingredients: (1) use of a deep RL algorithm to train a recurrent neural network, (2) a training set that includes a series of interrelated tasks, and (3) network input that includes the action selected and reward received in the previous time interval. The key result, which emerges naturally from the setup rather than being specially engineered, is that the recurrent network dynamics learn to implement a second RL procedure, independent from and potentially very different from the algorithm used to train the network weights. Critically, this learned RL algorithm is tuned to the shared structure of the training tasks. In this sense, the learned algorithm builds in domain-appropriate biases, which can allow it to operate with greater efficiency than a general-purpose algorithm. This bias effect was particularly evident in the results of our experiments involving dependent bandits (sections 3.1.2 and 3.1.3),
1611.05763#64
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
65
algorithm. This bias effect was particularly evident in the results of our experiments involving dependent bandits (sections 3.1.2 and 3.1.3), where the system learned to take advantage of the task’s covariance structure; and in our study of Harlow’s animal learning task (section 3.2.2), where the recurrent network learned to exploit the task’s structure in order to display one-shot learning with complex novel stimuli.
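The three-ingredient recipe summarized in the preceding chunk (an outer RL algorithm adjusting the weights of a recurrent network, a distribution of interrelated tasks, and the previous action and reward fed back as input) can be laid out schematically in Python. This is a sketch of the training setup, not the paper's A3C implementation; `sample_task`, `agent` and `outer_update` are hypothetical stand-ins.

```python
def meta_rl_training(sample_task, agent, outer_update, n_episodes, episode_len):
    """Schematic deep meta-RL loop: the outer RL algorithm adjusts the weights
    across episodes, while within an episode only the recurrent state (the fast,
    learned RL procedure) adapts; weights are not updated inside the episode."""
    for _ in range(n_episodes):
        env = sample_task()                      # ingredient 2: task distribution
        state = agent.initial_state()            # recurrent state reset per episode
        obs, prev_action, prev_reward = env.reset(), 0, 0.0
        trajectory = []
        for _ in range(episode_len):
            # ingredient 3: last action and reward are part of the input
            action, state = agent.act(obs, prev_action, prev_reward, state)
            obs, reward, done = env.step(action)
            trajectory.append((obs, action, reward))
            prev_action, prev_reward = action, reward
            if done:
                break
        outer_update(agent, trajectory)          # ingredient 1: slow RL on the weights
```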
1611.05763#65
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
66
One of our experiments (section 3.2.1) illustrated the point that a system trained using a model-free RL algorithm can develop behavior that emulates model-based control. A few further comments on this result are warranted. As noted in our presentation of the simulation results, the pattern of choice behavior displayed by the network has been considered in the cognitive and neuroscience literatures as reflecting model-based control or tree search. However, as has been remarked in very recent work, the same pattern can arise from a model-free system with an appropriate state representation (Akam et al., 2015). Indeed, we suspect this may be how our network in fact operates. However, other findings suggest that a more explicitly model-based control mechanism can emerge when a similar system is trained on a more diverse set of tasks. In particular, Ilin et al. (2007) showed that recurrent networks trained on random mazes can approximate dynamic programming procedures (see also Silver et al., 2017; Tamar et al., 2016). At the same time, as we have stressed, we consider it an important aspect of deep meta-RL that it yields a learned RL algorithm that capitalizes on invariances in task structure. As a result, when faced with widely varying but still structured environments, deep meta-RL seems likely to generate RL procedures that occupy a grey area between model-free and model-based RL.
1611.05763#66
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
67
The two-step decision problem studied in Section 3.2.1 was derived from neuroscience, and we believe deep meta-RL may have important implications in that arena (Wang et al., 2017). The notion of meta-RL has been discussed previously in neuroscience but only in a narrow sense, according to which meta-learning adjusts scalar hyperparameters such as the learning rate or softmax inverse temperature (Khamassi et al., 2011; 2013; Kobayashi et al., 2009; Lee and Wang, 2009; Schweighofer and Doya, 2003; Soltani et al., 2006). In recent work (Wang et al., 2017) we have shown that deep meta-RL can account for a wider range of experimental observations, providing an integrative framework for understanding the respective roles of dopamine and the prefrontal cortex in biological reinforcement learning.
ACKNOWLEDGEMENTS
We would like to thank the following colleagues for useful discussion and feedback: Nando de Freitas, David Silver, Koray Kavukcuoglu, Daan Wierstra, Demis Hassabis, Matt Hoffman, Piotr Mirowski, Andrea Banino, Sam Ritter, Neil Rabinowitz, Peter Dayan, Peter Battaglia, Alex Lerchner, Tim Lillicrap and Greg Wayne.
# REFERENCES
1611.05763#67
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
68
# REFERENCES Thomas Akam, Rui Costa, and Peter Dayan. Simple plans or sophisticated habits? state, transition and learning interactions in the two-step task. PLoS Comput Biol, 11(12):e1004648, 2015. Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016. Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235–256, 2002. Timothy EJ Behrens, Mark W Woolrich, Mark E Walton, and Matthew FS Rushworth. Learning the value of information in an uncertain world. Nature neuroscience, 10(9):1214–1221, 2007. Ethan S Bromberg-Martin and Okihide Hikosaka. Midbrain dopamine neurons signal preference for advance information about upcoming rewards. Neuron, 63(1):119–126, 2009.
1611.05763#68
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
http://arxiv.org/pdf/1611.05763
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick
cs.LG, cs.AI, stat.ML
17 pages, 7 figures, 1 table
null
cs.LG
20161117
20170123
[ { "id": "1611.01578" }, { "id": "1611.03824" }, { "id": "1611.03673" }, { "id": "1606.04474" }, { "id": "1611.02779" }, { "id": "1611.05397" }, { "id": "1602.02867" }, { "id": "1606.01885" }, { "id": "1604.00289" } ]
1611.05763
69
Yutian Chen, Matthew W Hoffman, Sergio Gomez, Misha Denil, Timothy P Lillicrap, and Nando de Freitas. Learning to learn for global optimization of black box functions. arXiv preprint arXiv:1611.03824, 2016. NE Cotter and PR Conwell. Fixed-weight networks can learn. In 1990 IJCNN International Joint Conference on Neural Networks, pages 553–559, 1990. Nathaniel D Daw, Yael Niv, and Peter Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature neuroscience, 8(12):1704–1711, 2005. Nathaniel D Daw, Samuel J Gershman, Ben Seymour, Peter Dayan, and Raymond J Dolan. Model-based influences on humans’ choices and striatal prediction errors. Neuron, 69(6):1204–1215, 2011. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009.
1611.05763#69
Learning to reinforcement learn
1611.05763
70
Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016. URL http://arxiv.org/abs/1611.02779. Marcos Economides, Zeb Kurth-Nelson, Annika Lübbert, Marc Guitart-Masip, and Raymond Dolan. Model-based reasoning in humans becomes automatic with training. PLoS Computational Biology, 11(9):e1004463, 2015. John C Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society. Series B (Methodological), pages 148–177, 1979. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016. Harry F Harlow. The formation of learning sets. Psychological review, 56(1):51, 1949.
1611.05763#70
Learning to reinforcement learn
1611.05763
71
Harry F Harlow. The formation of learning sets. Psychological review, 56(1):51, 1949. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001. Roman Ilin, Robert Kozma, and Paul J Werbos. Efficient learning in cellular simultaneous recurrent neural networks - the case of maze navigation problem. In 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages 324–329. IEEE, 2007. Max Jaderberg, Volodymyr Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. URL http://arxiv.org/abs/1611.05397.
1611.05763#71
Learning to reinforcement learn
1611.05763
72
Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On Bayesian upper confidence bounds for bandit problems. In Proc. of Int’l Conf. on Artificial Intelligence and Statistics, AISTATS, 2012a. Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In Algorithmic Learning Theory - 23rd International Conference, pages 199–213, 2012b. Mehdi Khamassi, Stéphane Lallée, Pierre Enel, Emmanuel Procyk, and Peter F Dominey. Robot cognitive control with a neurophysiologically inspired reinforcement learning model. Frontiers in neurorobotics, 5:1, 2011. Mehdi Khamassi, Pierre Enel, Peter Ford Dominey, and Emmanuel Procyk. Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters. Prog Brain Res, 202:441–464, 2013. Kunikazu Kobayashi, Hiroyuki Mizoue, Takashi Kuremoto, and Masanao Obayashi. A meta-learning method based on temporal difference error. In International Conference on Neural Information Processing, pages 530–537. Springer, 2009.
1611.05763#72
Learning to reinforcement learn
1611.05763
73
Wouter Kool, Fiery A Cushman, and Samuel J Gershman. When does model-based control pay off? PLoS Comput Biol, 12(8):e1005090, 2016. Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016. Tor Lattimore and Rémi Munos. Bounded regret for finite-armed structured bandits. In Advances in Neural Information Processing Systems 27, pages 550–558, 2014. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015. Daeyeol Lee and Xiao-Jing Wang. Mechanisms for stochastic decision making in the primate frontal cortex: Single-neuron recording and circuit modeling. Neuroeconomics: Decision making and the brain, pages 481–501, 2009. Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
1611.05763#73
Learning to reinforcement learn
1611.05763
74
Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016. Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016. URL http://arxiv.org/abs/1611.03673. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, et al. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proc. of Int’l Conf. on Machine Learning, ICML, 2016.
1611.05763#74
Learning to reinforcement learn
1611.05763
75
Danil V Prokhorov, Lee A Feldkamp, and Ivan Yu Tyukin. Adaptive behavior with fixed weights in RNN: an overview. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), pages 2018–2023, 2002. Robert A Rescorla, Allan R Wagner, et al. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. Classical conditioning II: Current research and theory, 2:64–99, 1972. Dan Russo and Benjamin Van Roy. Learning to optimize via information-directed sampling. In Advances in Neural Information Processing Systems 27, pages 1583–1591, 2014. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In Proceedings of The 33rd International Conference on Machine Learning, pages 1842–1850, 2016. Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Simple principles of metalearning. Technical report, SEE, 1996. Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5–9, 2003.
1611.05763#75
Learning to reinforcement learn
1611.05763
76
Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5–9, 2003. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587): 484–489, 2016. David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, and Thomas Degris. The predictron: End-to-end learning and planning. Submitted to Int’l Conference on Learning Representations, ICLR, 2017. Alireza Soltani, Daeyeol Lee, and Xiao-Jing Wang. Neural mechanism for stochastic behaviour during a competitive game. Neural Networks, 19(8):1075–1090, 2006. Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
1611.05763#76
Learning to reinforcement learn
1611.05763
77
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998. Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. arXiv preprint arXiv:1602.02867v2, 2016. William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933. Sebastian Thrun and Lorien Pratt. Learning to learn: Introduction and overview. In Learning to learn, pages 3–17. Springer, 1998. Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Joel Leibo, Hubert Soyer, Dharshan Kumaran, and Matthew Botvinick. Meta-reinforcement learning: a bridge between prefrontal and dopaminergic function. In Cosyne Abstracts, 2017. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014. A Steven Younger, Peter R Conwell, and Neil E Cotter. Fixed-weight on-line learning. IEEE Transactions on Neural Networks, 10(2):272–283, 1999.
1611.05763#77
Learning to reinforcement learn
1611.05397
1
# ABSTRACT Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks leading to a mean speedup in learning of 10×
1611.05397#1
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
2
Natural and artificial agents live in a stream of sensorimotor data. At each time step $t$, the agent receives observations $o_t$ and executes actions $a_t$. These actions influence the future course of the sensorimotor stream. In this paper we develop agents that learn to predict and control this stream, by solving a host of reinforcement learning problems, each focusing on a distinct feature of the sensorimotor stream. Our hypothesis is that an agent that can flexibly control its future experiences will also be able to achieve any goal with which it is presented, such as maximising its future rewards. The classic reinforcement learning paradigm focuses on the maximisation of extrinsic reward. However, in many interesting domains, extrinsic rewards are only rarely observed. This raises questions of what and how to learn in their absence. Even if extrinsic rewards are frequent, the sensorimotor stream contains an abundance of other possible learning targets. Traditionally, unsupervised learning attempts to reconstruct these targets, such as the pixels in the current or subsequent frame. It is typically used to accelerate the acquisition of a useful representation. In contrast, our learning objective is to predict and control features of the sensorimotor stream, by treating them as pseudo-rewards for reinforcement learning. Intuitively, this set of tasks is more closely matched with the agent’s long-term goals, potentially leading to more useful representations.
1611.05397#2
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
3
Consider a baby that learns to maximise the cumulative amount of red that it observes. To correctly predict the optimal value, the baby must understand how to increase “redness” by various means, including manipulation (bringing a red object closer to the eyes); locomotion (moving in front of a red object); and communication (crying until the parents bring a red object). These behaviours are likely to recur for many other goals that the baby may subsequently encounter. No understanding of these behaviours is required to simply reconstruct the redness of current or subsequent images. Our architecture uses reinforcement learning to approximate both the optimal policy and optimal value function for many different pseudo-rewards. It also makes other auxiliary predictions that serve to focus the agent on important aspects of the task. These include the long-term goal of predicting cumulative extrinsic reward as well as short-term predictions of extrinsic reward. To learn more efficiently, our agents use an experience replay mechanism to provide additional updates
1611.05397#3
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
5
Figure 1: Overview of the UNREAL agent. (a) The base agent is a CNN-LSTM agent trained on-policy with the A3C loss (Mnih et al., 2016). Observations, rewards, and actions are stored in a small replay buffer which encapsulates a short history of agent experience. This experience is used by auxiliary learning tasks. (b) Pixel Control – auxiliary policies Qaux are trained to maximise change in pixel intensity of different regions of the input. The agent CNN and LSTM are used for this task along with an auxiliary deconvolution network. This auxiliary control task requires the agent to learn how to control the environment. (c) Reward Prediction – given three recent frames, the network must predict the reward that will be obtained in the next unobserved timestep. This task network uses instances of the agent CNN, and is trained on reward biased sequences to remove the perceptual sparsity of rewards. (d) Value Function Replay – further training of the value function using the agent network is performed to promote faster value iteration. Further visualisation of the agent can be found in https://youtu.be/Uz-zGYrYEjA to the critics. Just as animals dream about positively or negatively rewarding events more frequently (Schacter et al., 2012), our agents preferentially replay sequences containing rewarding events.
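To make the reward-biased replay described in panel (c) concrete, here is a minimal Python sketch of how short sequences might be drawn from such a buffer so that rewarding and non-rewarding endings are roughly balanced. The buffer layout, the function name, and the 50/50 split are illustrative assumptions of this sketch, not the paper's implementation; the buffer is also assumed to hold more than `seq_len` transitions.

```python
import random

def sample_reward_prediction_sequence(replay, seq_len=3, p_rewarding=0.5):
    """Draw `seq_len` consecutive observations whose following reward is to be
    predicted, oversampling timesteps with non-zero reward.

    `replay` is assumed to be a time-ordered list of (observation, action,
    reward) tuples; this layout is an illustrative assumption.
    """
    candidates = range(seq_len, len(replay))
    rewarding = [t for t in candidates if replay[t][2] != 0]
    non_rewarding = [t for t in candidates if replay[t][2] == 0]

    # Flip a (biased) coin between rewarding and non-rewarding endings,
    # falling back to whichever pool is non-empty.
    if rewarding and (not non_rewarding or random.random() < p_rewarding):
        t = random.choice(rewarding)
    else:
        t = random.choice(non_rewarding)

    frames = [replay[i][0] for i in range(t - seq_len, t)]
    return frames, replay[t][2]
```

The three sampled frames would then be fed to a reward-prediction head as in panel (c), while unbiased samples from the same small buffer could serve the value function replay of panel (d).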
1611.05397#5
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
6
to the critics. Just as animals dream about positively or negatively rewarding events more frequently (Schacter et al., 2012), our agents preferentially replay sequences containing rewarding events. Importantly, both the auxiliary control and auxiliary prediction tasks share the convolutional neural network and LSTM that the base agent uses to act. By using this jointly learned representation, the base agent learns to optimise extrinsic reward much faster and, in many cases, achieves better policies at the end of training. This paper brings together the state-of-the-art Asynchronous Advantage Actor-Critic (A3C) framework (Mnih et al., 2016), outlined in Section 2, with auxiliary control tasks and auxiliary reward tasks, defined in Sections 3.1 and 3.2 respectively. These auxiliary tasks do not require any extra supervision or signals from the environment beyond those available to the vanilla A3C agent. The result is our UNsupervised REinforcement and Auxiliary Learning (UNREAL) agent (Section 3.4).
1611.05397#6
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
7
In Section 4 we apply our UNREAL agent to a challenging set of 3D-vision based domains known as the Labyrinth (Mnih et al., 2016), learning solely from the raw RGB pixels of a first-person view. Our agent significantly outperforms the baseline agent using vanilla A3C, even when the baseline was augmented with an unsupervised reconstruction loss, in terms of speed of learning, robustness to hyperparameters, and final performance. The result is an agent which on average achieves 87% of expert human-normalised score, compared to 54% with A3C, and on average 10× faster than A3C. Our UNREAL agent also significantly outperforms the previous state-of-the-art in the Atari domain. # 1 RELATED WORK A variety of reinforcement learning architectures have focused on learning temporal abstractions, such as options (Sutton et al., 1999b), with policies that may maximise pseudo-rewards (Konidaris & Barreto, 2009; Silver & Ciosek, 2012). The emphasis here has typically been on the development of temporal abstractions that facilitate high-level learning and planning. In contrast, our agents do not make any direct use of the pseudo-reward maximising policies that they learn (although this is
1611.05397#7
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
8
an interesting direction for future research). Instead, they are used solely as auxiliary objectives for developing a more effective representation. The Horde architecture (Sutton et al., 2011) also applied reinforcement learning to identify value functions for a multitude of distinct pseudo-rewards. However, this architecture was not used for representation learning; instead each value function was trained separately using distinct weights. The UVFA architecture (Schaul et al., 2015a) is a factored representation of a continuous set of optimal value functions, combining features of the state with an embedding of the pseudo-reward function. Initial work on UVFAs focused primarily on architectural choices and learning rules for these continuous embeddings. A pre-trained UVFA representation was successfully transferred to novel pseudo-rewards in a simple task. Similarly, the successor representation (Dayan, 1993; Barreto et al., 2016; Kulkarni et al., 2016) factors a continuous set of expected value functions for a fixed policy, by combining an expectation over features of the state with an embedding of the pseudo-reward function. Successor representations have been used to transfer representations from one pseudo-reward to another (Barreto et al., 2016) or to different scales of reward (Kulkarni et al., 2016).
1611.05397#8
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
9
Another, related line of work involves learning models of the environment (Schmidhuber, 2010; Xie et al., 2015; Oh et al., 2015). Although learning environment models as auxiliary tasks could improve RL agents (e.g. Lin & Mitchell (1992); Li et al. (2015)), this has not yet been shown to work in rich visual environments. More recently, auxiliary prediction tasks have been studied in 3D reinforcement learning environments. Lample & Chaplot (2016) showed that predicting internal features of the emulator, such as the presence of an enemy on the screen, is beneficial. Mirowski et al. (2016) study auxiliary prediction of depth in the context of navigation. # 2 BACKGROUND
1611.05397#9
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
10
# 2 BACKGROUND We assume the standard reinforcement learning setting where an agent interacts with an environment over a number of discrete time steps. At time $t$ the agent receives an observation $o_t$ along with a reward $r_t$ and produces an action $a_t$. The agent’s state $s_t$ is a function of its experience up until time $t$, $s_t = f(o_1, r_1, a_1, \ldots, o_t, r_t)$. The $n$-step return $R_{t:t+n}$ at time $t$ is defined as the discounted sum of rewards, $R_{t:t+n} = \sum_{i=1}^{n} \gamma^i r_{t+i}$. The value function is the expected return from state $s$, $V^{\pi}(s) = \mathbb{E}\left[R_{t:\infty} \mid s_t = s, \pi\right]$, when actions are selected according to a policy $\pi(a|s)$. The action-value function $Q^{\pi}(s, a) = \mathbb{E}\left[R_{t:\infty} \mid s_t = s, a_t = a, \pi\right]$ is the expected return following action $a$ from state $s$.
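As a concrete illustration of the $n$-step return, the following plain-Python sketch accumulates the discounted sum backwards over a rollout, following the convention above in which each reward $r_{t+i}$ is discounted by $\gamma^i$; the function name is illustrative and not from the paper.

```python
def n_step_return(rewards, gamma=0.99):
    """Discounted n-step return R_{t:t+n} = sum_{i=1..n} gamma^i * r_{t+i}.

    `rewards` holds r_{t+1}, ..., r_{t+n} in time order.
    """
    acc = 0.0
    for r in reversed(rewards):
        acc = gamma * (r + acc)   # R_t = gamma * (r_{t+1} + R_{t+1})
    return acc

# Example: gamma = 0.9, rewards r_{t+1..t+3} = [0, 0, 1] -> 0.9**3 = 0.729
print(n_step_return([0.0, 0.0, 1.0], gamma=0.9))
```

The truncated lookahead targets used below simply add a bootstrapped term, $\gamma^n \max_{a'} Q(s', a'; \theta^-)$ or $\gamma^n V(s_{t+n+1}, \theta)$, on top of this sum.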
1611.05397#10
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
11
Value-based reinforcement learning algorithms, such as Q-learning (Watkins, 1989), or its deep learning instantiations DQN (Mnih et al., 2015) and asynchronous Q-learning (Mnih et al., 2016), approximate the action-value function $Q(s, a; \theta)$ using parameters $\theta$, and then update the parameters to minimise the mean-squared error, for example by optimising an $n$-step lookahead loss $\mathcal{L}_Q = \mathbb{E}\left[\left(R_{t:t+n} + \gamma^n \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta)\right)^2\right]$, where $\theta^-$ are previous parameters and the optimisation is with respect to $\theta$.
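A NumPy sketch of this loss, assuming the $n$-step reward sums and the target-network maxima $\max_{a'} Q(s_{t+n}, a'; \theta^-)$ have already been computed for a batch; all argument names here are illustrative.

```python
import numpy as np

def n_step_q_loss(q_values, actions, returns, gamma_n, target_next_max):
    """Mean squared n-step Q-learning error:
    (R_{t:t+n} + gamma^n * max_a' Q(s', a'; theta^-) - Q(s_t, a_t; theta))^2.

    q_values:        [T, num_actions] online-network estimates Q(s_t, .; theta)
    actions:         [T] integer actions a_t taken in the batch
    returns:         [T] discounted n-step reward sums R_{t:t+n}
    target_next_max: [T] max_a' Q(s_{t+n}, a'; theta^-) under the old parameters
    """
    q_sa = q_values[np.arange(len(actions)), actions]
    td_targets = returns + gamma_n * target_next_max
    return np.mean((td_targets - q_sa) ** 2)
```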
1611.05397#11
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
12
Policy gradient algorithms adjust the policy to maximise the expected reward, $\mathcal{L}_\pi = -\mathbb{E}_{s \sim \pi}\left[R_{1:\infty}\right]$, using the gradient $\frac{\partial}{\partial \theta}\mathbb{E}_{s \sim \pi}\left[R_{1:\infty}\right] = \mathbb{E}\left[\frac{\partial}{\partial \theta}\log \pi(a|s)\left(Q^{\pi}(s,a) - V(s)\right)\right]$ (Sutton et al., 1999a); in practice the true value functions $Q^{\pi}$ and $V^{\pi}$ are substituted with approximations. The Asynchronous Advantage Actor-Critic (A3C) algorithm (Mnih et al., 2016) constructs an approximation to both the policy $\pi(a|s,\theta)$ and the value function $V(s,\theta)$ using parameters $\theta$. Both policy and value are adjusted towards an $n$-step lookahead value, $R_{t:t+n} + \gamma^n V(s_{t+n+1}, \theta)$, using an entropy regularisation penalty, $\mathcal{L}_{A3C} \approx \mathcal{L}_{VR} + \mathcal{L}_\pi - \mathbb{E}_{s \sim \pi}\left[\alpha H(\pi(s, \cdot, \theta))\right]$, where $\mathcal{L}_{VR} = \mathbb{E}_{s \sim \pi}\left[\left(R_{t:t+n} + \gamma^n V(s_{t+n+1}, \theta^-) - V(s_t, \theta)\right)^2\right]$. In A3C many instances of the agent interact in parallel with many instances of the environment, which both accelerates and stabilises learning. The A3C agent architecture we build on uses an LSTM to jointly approximate both policy $\pi$ and value function $V$, given the entire history of experience as inputs (see Figure 1(a)).
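For illustration, here is a sketch of the combined A3C objective with the advantage treated as a constant in the policy term; the value-loss weight and the exact gradient handling are assumptions of this sketch rather than details stated in the text above.

```python
import numpy as np

def a3c_loss(log_probs, values, n_step_targets, entropies,
             alpha=0.01, value_weight=0.5):
    """Policy-gradient term weighted by the advantage, an n-step value
    regression term, and an entropy regularisation bonus with weight alpha.

    log_probs:      [T] log pi(a_t | s_t, theta)
    values:         [T] V(s_t, theta)
    n_step_targets: [T] R_{t:t+n} + gamma^n * V(s_{t+n+1}, theta)
    entropies:      [T] H(pi(s_t, ., theta))
    """
    advantages = n_step_targets - values
    policy_loss = -np.mean(log_probs * advantages)   # advantage held fixed
    value_loss = np.mean(advantages ** 2)            # the L_VR term
    return policy_loss + value_weight * value_loss - alpha * np.mean(entropies)
```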
1611.05397#12
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
13
# 3 AUXILIARY TASKS FOR REINFORCEMENT LEARNING In this section we incorporate auxiliary tasks into the reinforcement learning framework in order to promote faster training, more robust learning, and ultimately higher performance for our agents. Section 3.1 introduces the use of auxiliary control tasks, Section 3.2 describes the addition of reward focussed auxiliary tasks, and Section 3.4 describes the complete UNREAL agent combining these auxiliary tasks. 3.1 AUXILIARY CONTROL TASKS The auxiliary control tasks we consider are defined as additional pseudo-reward functions in the environment the agent is interacting with. We formally define an auxiliary control task $c$ by a reward function $r^{(c)}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, where $\mathcal{S}$ is the space of possible states and $\mathcal{A}$ is the space of available actions. The underlying state space $\mathcal{S}$ includes both the history of observations and rewards as well as the state of the agent itself, i.e. the activations of the hidden units of the network. Given a set of auxiliary control tasks $\mathcal{C}$, let $\pi^{(c)}$ be the agent’s policy for each auxiliary task $c \in \mathcal{C}$ and let $\pi$ be the agent’s policy on the base task. The overall objective is to maximise total performance across all these auxiliary tasks: $\arg\max_\theta \; \mathbb{E}_\pi\left[R_{1:\infty}\right] + \lambda_c \sum_{c \in \mathcal{C}} \mathbb{E}_{\pi_c}\left[R^{(c)}_{1:\infty}\right]$ (1)
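In practice a weighted objective like Equation (1) is optimised by minimising a corresponding weighted sum of losses; the following is a minimal sketch assuming a single shared weight $\lambda_c$ and scalar per-task losses, both illustrative simplifications.

```python
def unreal_objective_loss(base_loss, auxiliary_losses, lambda_c=1.0):
    """Combine the base-task loss with auxiliary control-task losses: minimising
    this sum corresponds to maximising the base return plus lambda_c times the
    summed auxiliary returns, mirroring Equation (1).

    auxiliary_losses: dict mapping each task c in C to its scalar loss.
    """
    return base_loss + lambda_c * sum(auxiliary_losses.values())
```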
1611.05397#13
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
14
where $R^{(c)}_{t:t+n}$ is the discounted return for the auxiliary reward $r^{(c)}$, and $\theta$ is the set of parameters of $\pi$ and all the $\pi^{(c)}$’s. By sharing some of the parameters of $\pi$ and all $\pi^{(c)}$ the agent must balance improving its performance with respect to the global reward $r_t$ with improving performance on the auxiliary tasks. In principle, any reinforcement learning method could be applied to maximise these objectives. However, to efficiently learn to maximise many different pseudo-rewards simultaneously in parallel from a single stream of experience, it is necessary to use off-policy reinforcement learning. We focus on value-based RL methods that approximate the optimal action-values by Q-learning. Specifically, for each control task $c$ we optimise an $n$-step Q-learning loss $\mathcal{L}^{(c)}_Q = \mathbb{E}\left[\left(R_{t:t+n} + \gamma^n \max_{a'} Q^{(c)}(s', a', \theta^-) - Q^{(c)}(s, a, \theta)\right)^2\right]$, as described in Mnih et al. (2016). While many types of auxiliary reward functions can be defined from these quantities we focus on two specific types:
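A sketch of how these per-task losses might be accumulated over a single shared off-policy batch, with one Q head per task all indexed by the actions actually taken by the behaviour policy; the dictionary layout and names are illustrative assumptions.

```python
import numpy as np

def auxiliary_control_loss(per_task_q, actions, per_task_returns,
                           gamma_n, per_task_next_max):
    """Sum of n-step Q-learning losses, one per auxiliary control task c.

    per_task_q[c]:        [T, num_actions] Q^(c)(s_t, .; theta)
    actions:              [T] actions taken by the behaviour policy
    per_task_returns[c]:  [T] discounted sums of the pseudo-rewards r^(c)
    per_task_next_max[c]: [T] max_a' Q^(c)(s_{t+n}, a'; theta^-)
    """
    total = 0.0
    for c, q in per_task_q.items():
        q_sa = q[np.arange(len(actions)), actions]
        targets = per_task_returns[c] + gamma_n * per_task_next_max[c]
        total += np.mean((targets - q_sa) ** 2)
    return total
```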
1611.05397#14
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
15
While many types of auxiliary reward functions can be defined from these quantities we focus on two specific types: Pixel changes - Changes in the perceptual stream often correspond to important events in an environment. We train agents that learn a separate policy for maximally changing the pixels in each cell of an $n \times n$ non-overlapping grid placed over the input image. We refer to these auxiliary tasks as pixel control. See Section 4 for a complete description. Network features - Since the policy or value networks of an agent learn to extract task-relevant high-level features of the environment (Mnih et al., 2015; Zahavy et al., 2016; Silver et al., 2016) they can be useful quantities for the agent to learn to control. Hence, the activation of any hidden unit of the agent’s neural network can itself be an auxiliary reward. We train agents that learn a separate policy for maximally activating each of the units in a specific hidden layer. We refer to these tasks as feature control.
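As a concrete example of the pixel-control pseudo-reward, here is a NumPy sketch that measures the average absolute intensity change in each cell of a grid placed over a greyscale frame; the grid size, greyscale input, and the use of the mean absolute difference are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def pixel_change_rewards(frame, next_frame, grid=4):
    """Pseudo-reward per grid cell: mean absolute pixel-intensity change between
    consecutive (greyscale) frames, one value for each cell of a grid x grid
    non-overlapping partition of the image.
    """
    h, w = frame.shape
    ch, cw = h // grid, w // grid
    diff = np.abs(next_frame.astype(np.float32) - frame.astype(np.float32))
    rewards = np.zeros((grid, grid), dtype=np.float32)
    for i in range(grid):
        for j in range(grid):
            rewards[i, j] = diff[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw].mean()
    return rewards
```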
1611.05397#15
Reinforcement Learning with Unsupervised Auxiliary Tasks
1611.05397
16
Figure 1(b) shows an A3C agent architecture augmented with a set of auxiliary pixel control tasks. In this case, the base policy $\pi$ shares both the convolutional visual stream and the LSTM with the auxiliary policies. The output of the auxiliary network head is an $N_{\text{act}} \times n \times n$ tensor $Q^{\text{aux}}$, where $Q^{\text{aux}}(a, i, j)$ represents the network’s current estimate of the optimal discounted expected change in cell $(i, j)$ of the input after taking action $a$. We exploit the spatial nature of the auxiliary tasks by using a deconvolutional neural network to produce the auxiliary values $Q^{\text{aux}}$. 3.2 AUXILIARY REWARD TASKS In addition to learning generally about the dynamics of the environment, an agent must learn to maximise the global reward stream. To learn a policy to maximise rewards, an agent requires features
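A hedged PyTorch sketch of such a deconvolutional head, mapping the LSTM state to an $N_{\text{act}} \times n \times n$ tensor of auxiliary action values; the layer sizes, the absence of any dueling decomposition, and the 14x14 output grid are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PixelControlHead(nn.Module):
    """Deconvolutional auxiliary head producing Q_aux with shape
    [batch, num_actions, n, n]; Q_aux[:, a, i, j] estimates the discounted
    expected pixel change in cell (i, j) after taking action a.
    """
    def __init__(self, lstm_size=256, num_actions=8, spatial=7):
        super().__init__()
        self.spatial = spatial
        self.fc = nn.Linear(lstm_size, 32 * spatial * spatial)
        # 7x7 feature map -> 14x14 grid of per-cell action values.
        self.deconv = nn.ConvTranspose2d(32, num_actions, kernel_size=2, stride=2)

    def forward(self, lstm_state):                      # [batch, lstm_size]
        x = torch.relu(self.fc(lstm_state))
        x = x.view(-1, 32, self.spatial, self.spatial)  # [batch, 32, 7, 7]
        return self.deconv(x)                           # [batch, num_actions, 14, 14]
```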
1611.05397#16
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
17
that recognise states that lead to high reward and value. Figure 2: The raw RGB frame from the environment is the observation that is given as input to the agent, along with the last action and reward. This observation is shown for a sample of a maze from the nav maze all random 02 level in Labyrinth. The agent must navigate this unseen maze and pick up apples giving +1 reward and reach the goal giving +10 reward, after which it will respawn. Top-down views of samples from this maze generator show the variety of mazes procedurally created. A video showing the agent playing Labyrinth levels can be viewed at https://youtu.be/Uz-zGYrYEjA. An agent with a good representation of rewarding states will allow the learning of good value functions, and in turn should allow the easy learning of a policy. However, in many interesting environments reward is encountered very sparsely, meaning that it can take a long time to train feature extractors adept at recognising states which signify the onset of reward. We want to remove the perceptual sparsity of rewards and rewarding states to aid the training of an agent, but to do so in a way which does not introduce bias to the agent’s policy.
1611.05397#17
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
18
To do this, we introduce the auxiliary task of reward prediction – that of predicting the onset of immediate reward given some historical context. This task consists of processing a sequence of consecutive observations, and requiring the agent to predict the reward picked up in the subsequent unseen frame. This is similar to value learning focused on immediate reward (γ = 0). Unlike learning a value function, which is used to estimate returns and as a baseline while learning a policy, the reward predictor is not used for anything other than shaping the features of the agent. This keeps us free to bias the data distribution, therefore biasing the reward predictor and feature shaping, without biasing the value function or policy. We train the reward prediction task on sequences Sτ = (sτ−k, sτ−k+1, . . . , sτ−1) to predict the reward rτ, and sample Sτ from the experience of our policy π in a skewed manner so as to over-represent rewarding events (presuming rewards are sparse within the environment). Specifically, we sample such that zero rewards and non-zero rewards are equally represented, i.e. the predicted probability of a non-zero reward is P(rτ ≠ 0) = 0.5. The reward prediction is trained to minimise a loss LRP. In our experiments we use a multiclass cross-entropy classification loss across three classes (zero, positive, or negative reward), although a mean-squared error loss is also feasible.
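A small sketch of the skewed sequence sampling and the three-class loss described above; the buffer layout (a list of (observation, action, reward) tuples), the sequence length k, and the class ordering are assumptions of this illustration.

```python
import numpy as np

def sample_rp_sequence(replay, k=3, rng=np.random):
    """Sample an observation stack S_tau = (s_{tau-k}, ..., s_{tau-1}) whose
    next-step reward r_tau is the prediction target.

    Zero- and non-zero-reward targets are drawn with equal probability, i.e.
    P(r_tau != 0) = 0.5. `replay` is assumed to be a list of
    (observation, action, reward) tuples; this layout is illustrative only.
    """
    rewarding = [i for i in range(k, len(replay)) if replay[i][2] != 0]
    zero = [i for i in range(k, len(replay)) if replay[i][2] == 0]
    pool = rewarding if (rng.rand() < 0.5 and rewarding) else zero
    i = pool[rng.randint(len(pool))]
    obs_stack = [replay[j][0] for j in range(i - k, i)]
    r = replay[i][2]
    target = 0 if r == 0 else (1 if r > 0 else 2)   # classes: zero / positive / negative
    return obs_stack, target

def reward_prediction_loss(logits, target):
    """Multiclass cross-entropy over the three reward classes."""
    logits = np.asarray(logits, dtype=np.float64)
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[target]
```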
1611.05397#18
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
19
The auxiliary reward predictions may use a different architecture to the agent’s main policy. Rather than simply “hanging” the auxiliary predictions off the LSTM, we use a simpler feedforward network that concatenates a stack of states Sτ after being encoded by the agent’s CNN, see Figure 1 (c). The idea is to simplify the temporal aspects of the prediction task in both the future direction (focusing only on immediate reward prediction rather than long-term returns) and the past direction (focusing only on immediate predecessor states rather than the complete history); the features discovered in this manner are shared with the primary LSTM (via shared weights in the convolutional encoder) to enable the policy to be learned more efficiently. 3.3 EXPERIENCE REPLAY Experience replay has proven to be an effective mechanism for improving both the data efficiency and stability of deep reinforcement learning algorithms (Mnih et al., 2015). The main idea is to store transitions in a replay buffer, and then apply learning updates to sampled transitions from this buffer. Experience replay provides a natural mechanism for skewing the distribution of reward prediction samples towards rewarding events: we simply split the replay buffer into rewarding and non-rewarding subsets, and replay equally from both subsets. The skewed sampling of transitions from
1611.05397#19
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
20
a replay buffer means that rare rewarding states will be oversampled, and learnt from far more frequently than if we sampled sequences directly from the behaviour policy. This approach can be viewed as a simple form of prioritised replay (Schaul et al., 2015b). In addition to reward prediction, we also use the replay buffer to perform value function replay. This amounts to resampling recent historical sequences from the behaviour policy distribution and performing extra value function regression in addition to the on-policy value function regression in A3C. By resampling previous experience, and randomly varying the temporal position of the truncation window over which the n-step return is computed, value function replay performs value iteration and exploits newly discovered features shaped by reward prediction. We do not skew the distribution for this case. Experience replay is also used to increase the efficiency and stability of the auxiliary control tasks. Q-learning updates are applied to sampled experiences that are drawn from the replay buffer, allowing features to be developed extremely efficiently. 3.4 UNREAL AGENT
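A minimal sketch of the n-step return targets used for value function replay, with the randomly varied truncation window mentioned above; the uniform choice of window length and the input layout are assumptions of this illustration.

```python
import numpy as np

def value_replay_targets(rewards, values, gamma=0.99, rng=np.random):
    """n-step return targets for value function replay on a replayed sequence.

    `values[i]` is the current value estimate of the i-th state of the
    sequence (len(values) == len(rewards) + 1). The truncation window length
    n is randomised so that the position where the n-step return is cut off
    varies between updates; drawing n uniformly is an assumption here.
    """
    n = rng.randint(1, len(rewards) + 1)        # random truncation window
    acc = values[n]                             # bootstrap from V(s_n)
    returns = np.empty(n)
    for i in reversed(range(n)):
        acc = rewards[i] + gamma * acc
        returns[i] = acc
    return returns                              # regression targets for V(s_0..s_{n-1})

targets = value_replay_targets(rewards=[0.0, 1.0, 0.0], values=[0.1, 0.2, 0.3, 0.4])
print(targets)
```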
1611.05397#20
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
21
3.4 UNREAL AGENT The UNREAL algorithm combines the benefits of two separate, state-of-the-art approaches to deep reinforcement learning. The primary policy is trained with A3C (Mnih et al., 2016): it learns from parallel streams of experience to gain efficiency and stability; it is updated online using policy gradient methods; and it uses a recurrent neural network to encode the complete history of experience. This allows the agent to learn effectively in partially observed environments. The auxiliary tasks are trained on very recent sequences of experience that are stored and randomly sampled; these sequences may be prioritised (in our case according to immediate rewards) (Schaul et al., 2015b); these targets are trained off-policy by Q-learning; and they may use simpler feedforward architectures. This allows the representation to be trained with maximum efficiency. The UNREAL algorithm optimises a single combined loss function with respect to the joint parameters of the agent, θ, that combines the A3C loss LA3C with the auxiliary control loss LPC, the auxiliary reward prediction loss LRP, and the replayed value loss LVR: LUNREAL(θ) = LA3C + λVR LVR + λPC Σc LQ(c) + λRP LRP (2)
1611.05397#21
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
22
LUNREAL(θ) = LA3C + λVR LVR + λPC Σc LQ(c) + λRP LRP (2) where λVR, λPC, λRP are weighting terms on the individual loss components. In practice, the loss is broken down into separate components that are applied either on-policy, directly from experience, or off-policy, on replayed transitions. Specifically, the A3C loss LA3C is minimised on-policy, while the value function loss LVR is optimised from replayed data, in addition to the A3C loss (of which it is one component, see Section 2). The auxiliary control loss LPC is optimised off-policy from replayed data, by n-step Q-learning. Finally, the reward loss LRP is optimised from rebalanced replay data. # 4 EXPERIMENTS In this section we give the results of experiments performed on the 3D environment Labyrinth in Section 4.1 and Atari in Section 4.2.
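As a sketch, Equation 2 is simply a weighted sum of the four loss terms; the helper below mirrors that structure, and its default weights are placeholders rather than the tuned values from the paper.

```python
def unreal_loss(l_a3c, l_vr, l_pc_per_task, l_rp,
                lambda_vr=1.0, lambda_pc=1.0, lambda_rp=1.0):
    """Weighted sum of the UNREAL loss components as in Equation 2.

    l_pc_per_task is an iterable of per-task auxiliary control (pixel control)
    losses L_Q^(c). The default lambda weights are illustrative placeholders.
    """
    return (l_a3c
            + lambda_vr * l_vr
            + lambda_pc * sum(l_pc_per_task)
            + lambda_rp * l_rp)

total = unreal_loss(l_a3c=1.2, l_vr=0.4, l_pc_per_task=[0.05, 0.03], l_rp=0.7)
print(total)
```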
1611.05397#22
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
23
# 4 EXPERIMENTS In this section we give the results of experiments performed on the 3D environment Labyrinth in Section 4.1 and Atari in Section 4.2. In all our experiments we used an A3C CNN-LSTM agent as our baseline; the UNREAL agent and its ablated variants add auxiliary outputs and losses to this base agent. The agent is trained on-policy with 20-step returns and the auxiliary tasks are performed every 20 environment steps, corresponding to every update of the base A3C agent. The replay buffer stores the most recent 2k observations, actions, and rewards taken by the base agent. In Labyrinth we use the same set of 17 discrete actions for all games and on Atari the action set is game dependent (between 3 and 18 discrete actions). The full implementation details can be found in Section B. 4.1 LABYRINTH RESULTS Labyrinth is a first-person 3D game platform extended from OpenArena (contributors, 2005), which is itself based on Quake3 (id software, 1999). Labyrinth is comparable to other first-person 3D game
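The training settings listed above can be collected into a small configuration sketch; the dictionary form, key names, and the None placeholders for swept hyperparameters are illustrative assumptions, not the paper's code.

```python
# Settings named in the text; values that the text says are swept
# (learning rate, entropy cost) are left as placeholders.
UNREAL_CONFIG = {
    "unroll_length": 20,          # on-policy 20-step returns
    "aux_update_period": 20,      # auxiliary tasks run every 20 environment steps
    "replay_capacity": 2000,      # most recent 2k observations, actions, rewards
    "labyrinth_actions": 17,      # fixed discrete action set on Labyrinth
    "atari_actions": "game-dependent (3 to 18)",
    "learning_rate": None,        # swept; see Section B
    "entropy_cost": None,         # swept; see Section B
}
```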
1611.05397#23
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
24
[Figure 3 panel titles: Labyrinth Performance, Labyrinth Robustness, Atari Performance, Atari Robustness.] Figure 3: An overview of performance averaged across all levels on Labyrinth (Top) and Atari (Bottom). In the ablated versions RP is reward prediction, VR is value function replay, and PC is pixel control, with the UNREAL agent being the combination of all. Left: The mean human-normalised performance over the last 100 episodes of the top-3 jobs at every point in training. We achieve an average of 87% human-normalised score, with every element of the agent improving upon the 54% human-normalised score of vanilla A3C. Right: The final human-normalised score of every job in our hyperparameter sweep, sorted by score. On both Labyrinth and Atari, the UNREAL agent increases the robustness to the hyperparameters (namely learning rate and entropy cost).
1611.05397#24
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
25
platforms for AI research like VizDoom (Kempka et al., 2016) or Minecraft (Tessler et al., 2016). However, in comparison, Labyrinth has considerably richer visuals and more realistic physics. Textures in Labyrinth are often dynamic (animated) so as to convey a game world where walls and floors shimmer and pulse, adding significant complexity to the perceptual task. The action space allows for fine-grained pointing in a fully 3D world. Unlike in VizDoom, agents can look up to the sky or down to the ground. Labyrinth also supports continuous motion unlike the Minecraft platform of (Oh et al., 2016), which is a 3D grid world. We evaluated agent performance on 13 Labyrinth levels that tested a range of different agent abilities. A top-down visualization showing the layout of each level can be found in Figure 7 of the Appendix. A gallery of example images from the first-person perspective of the agent is in Figure 8 of the Appendix. The levels can be divided into four categories:
1611.05397#25
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
26
and stairway to melon 01). The goal of these levels is to collect apples (small positive reward) and melons (large positive reward) while avoiding lemons (small negative reward). 2. Navigation levels (nav maze static 0{1, 2, 3} and nav maze random goal 0{1, 2, 3}). These levels test the agent’s ability to find their way to a goal in a fixed maze that remains the same across episodes. The starting location is random. In this case, agents could encode the structure of the maze in network weights. In the random goal variant, the location of the goal changes in every episode. The optimal policy is to find the goal’s location at the start of each episode and then use long-term knowledge of the maze layout to return to it as quickly as possible from any location. The static variant is simpler in that the goal location is always fixed for all episodes and only the agent’s starting location changes, so the optimal policy does not require the first step of exploring to find the current goal location. 3. Procedurally-generated navigation levels requiring effective exploration of a new maze generated on-the-fly at the start of each episode (nav maze all random 0{1, 2, 3}). These levels test the agent’s ability to effectively explore a totally new environment. The optimal
1611.05397#26
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
27
policy would begin by exploring the maze to rapidly learn its layout and then exploit that knowledge to repeatedly return to the goal as many times as possible before the end of the episode (between 60 and 300 seconds). 4. Laser-tag levels requiring agents to wield laser-like science fiction gadgets to tag bots controlled by the game’s in-built AI (lt horse shoe color and lt hallway slope). A reward of 1 is delivered whenever the agent tags a bot by reducing its shield to 0. These levels approximate the default OpenArena/Quake3 gameplay mode. In lt hallway slope there is a sloped arena, requiring the agent to look up and down. In lt horse shoe color, the colors and textures of the bots are randomly generated at the start of each episode. This prevents agents from relying on color for bot detection. These levels test aspects of fine control (for aiming), planning (to anticipate where bots are likely to move), strategy (to control key areas of the map such as gadget spawn points), and robustness to the substantial visual complexity arising from the large numbers of independently moving objects (gadget projectiles and bots). 4.1.1 RESULTS
1611.05397#27
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
28
4.1.1 RESULTS We compared the full UNREAL agent to a basic A3C LSTM agent along with several ablated versions of UNREAL with different components turned off. A video of the final agent performance, as well as visualisations of the activations and auxiliary task outputs, can be viewed at https://youtu.be/Uz-zGYrYEjA. Figure 3 (top left) shows curves of mean human-normalised scores over the 13 Labyrinth levels. Adding each of our proposed auxiliary tasks to an A3C agent substantially improves the performance. Combining different auxiliary tasks leads to further improvements over the individual auxiliary tasks. The UNREAL agent, which combines all three auxiliary tasks, achieves more than twice the final human-normalised mean performance of A3C, increasing from 54% to 87% (45% to 92% for median performance). This includes a human-normalised score of 116% on lt hallway slope and 100% on nav maze random goal 02.
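The human-normalised scores quoted here are, as is usual in this literature, expressed relative to random and expert human play; the formula below is the conventional definition and is an assumption in that this excerpt does not spell it out.

```python
def human_normalised_score(agent, random, human):
    """Human-normalised score in percent, assuming the conventional definition
    100 * (agent - random) / (human - random)."""
    return 100.0 * (agent - random) / (human - random)

# e.g. an agent scoring halfway between random play and expert human play:
print(human_normalised_score(agent=50.0, random=0.0, human=100.0))  # 50.0
```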
1611.05397#28
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
29
Perhaps of equal importance, aside from final performance on the games, UNREAL is significantly faster at learning and therefore more data efficient, achieving a mean speedup of 10× in the number of steps needed to reach A3C’s best performance. On nav maze random goal 02 this translates into a drastic improvement in the data efficiency of UNREAL over A3C, requiring less than 10% of the data to reach the final performance of A3C. We can also measure the robustness of our learning algorithms to hyperparameters by measuring the performance over all hyperparameters (namely learning rate and entropy cost). This is shown in Figure 3 (top right): every auxiliary task in our agent improves robustness. A breakdown of the performance of A3C, UNREAL and UNREAL without pixel control on the individual Labyrinth levels is shown in Figure 4.
1611.05397#29
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
30
Unsupervised Reinforcement Learning In order to better understand the benefits of auxiliary control tasks we compared them to two simple baselines on three Labyrinth levels. The first baseline was A3C augmented with a pixel reconstruction loss, which has been shown to improve performance on 3D environments (Kulkarni et al., 2016). The second baseline was A3C augmented with an input change prediction loss, which can be seen as simply predicting the immediate auxiliary reward instead of learning to control. Finally, we include preliminary results for A3C augmented with the feature control auxiliary task on one of the levels. We retuned the hyperparameters of all methods (including learning rate and the weight placed on the auxiliary loss) for each of the three Labyrinth levels. Figure 5 shows the learning curves for the top 5 hyperparameter settings on three Labyrinth navigation levels. The results show that learning to control pixel changes is indeed better than simply predicting immediate pixel changes, which in turn is better than simply learning to reconstruct the input. In fact, learning to reconstruct only led to faster initial learning and actually made the final scores worse when compared to vanilla A3C. Our hypothesis is that input reconstruction
1611.05397#30
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
31
only led to faster initial learning and actually made the final scores worse when compared to vanilla A3C. Our hypothesis is that input reconstruction hurts final performance because it puts too much focus on reconstructing irrelevant parts of the visual input instead of visual cues for rewards, since rewarding objects are only rarely visible. Encouragingly, we saw an improvement from including the feature control auxiliary task. Combining feature control with other auxiliary tasks is a promising future direction.
1611.05397#31
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
32
[Figure 4 data: per-level values of AUC Performance, Data Efficiency, and Top5 Speedup for the UNREAL and A3C+RP+VR agents on each of the 13 Labyrinth levels (lt hallway slope, lt horse shoe color, nav maze all random 01–03, nav maze random goal 01–03, nav maze static 01–03, seekavoid arena 01, stairway to melon 01), together with Mean and Median rows.]
1611.05397#32
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
33
[Figure 4 legend: UNREAL, A3C+RP+VR.] Figure 4: A breakdown of the improvement over A3C due to our auxiliary tasks for each level on Labyrinth. The values for A3C+RP+VR (reward prediction and value function replay) and UNREAL (reward prediction, value function replay and pixel control) are normalised by the A3C value. AUC Performance gives the robustness to hyperparameters (area under the robustness curve, Figure 3 Right). Data Efficiency is the area under the mean learning curve for the top-5 jobs, and Top5 Speedup is the speedup for the mean of the top-5 jobs to reach the maximum top-5 mean score set by A3C. Speedup is not defined for stairway to melon as A3C did not learn throughout training. [Figure 5 panels: nav maze random goal 01 and nav maze all random 01; legend: A3C, A3C + Input reconstruction, A3C + Input change prediction, A3C + Pixel Control, A3C + Feature Control.] Figure 5: Comparison of various forms of self-supervised learning on random maze navigation. Adding an input reconstruction loss to the objective leads to faster learning compared to an A3C baseline. Predicting changes in the inputs works better than simple image reconstruction. Learning to control changes leads to the best results.
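The Data Efficiency and Top5 Speedup quantities defined in the Figure 4 caption can be computed roughly as in the sketch below; representing a learning curve as (step, score) pairs and using the trapezoidal rule are assumptions of this illustration, and the helper names are hypothetical.

```python
def data_efficiency(curve):
    """Area under a learning curve given as (step, score) pairs
    (trapezoidal rule; a simple proxy for the 'Data Efficiency' column)."""
    steps, scores = zip(*curve)
    return sum((steps[i + 1] - steps[i]) * (scores[i] + scores[i + 1]) / 2.0
               for i in range(len(steps) - 1))

def top5_speedup(a3c_curve, agent_curve):
    """Ratio of steps needed to reach A3C's maximum mean score:
    steps(A3C) / steps(agent), i.e. how many times faster the agent gets there."""
    threshold = max(score for _, score in a3c_curve)
    def steps_to(curve):
        return next(step for step, score in curve if score >= threshold)
    return steps_to(a3c_curve) / steps_to(agent_curve)

a3c = [(0, 0.0), (1_000_000, 5.0), (2_000_000, 10.0)]
unreal = [(0, 0.0), (200_000, 10.0), (2_000_000, 25.0)]
print(data_efficiency(unreal), top5_speedup(a3c, unreal))   # second value: 10x speedup
```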
1611.05397#33
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
34
4.2 ATARI We applied the UNREAL agent as well as UNREAL without pixel control to 57 Atari games from the Arcade Learning Environment (Bellemare et al., 2012) domain. We use the same evaluation protocol as for our Labyrinth experiments, where we evaluate 50 different random hyperparameter settings (learning rate and entropy cost) on each game. The results are shown in the bottom row of Figure 3. The left side shows the average performance curves of the top 3 agents for all three methods; the right half shows sorted average human-normalised scores for each hyperparameter setting. More detailed learning curves for individual levels can be found in Figure 7. We see that UNREAL surpasses the current state-of-the-art agents, i.e. A3C and Prioritized Dueling DQN (Wang et al., 2016), across all levels, attaining 880% mean and 250% median performance. Notably, UNREAL is also substantially more robust to hyperparameter settings than A3C. # 5 CONCLUSION
1611.05397#34
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
35
# 5 CONCLUSION We have shown how augmenting a deep reinforcement learning agent with auxiliary control and reward prediction tasks can drastically improve both data efficiency and robustness to hyperparameter settings. Most notably, our proposed UNREAL architecture more than doubled the previous state-of-the-art results on the challenging set of 3D Labyrinth levels, bringing the average scores to over 87% of human scores. The same UNREAL architecture also significantly improved both the learning speed and the robustness of A3C over 57 Atari games. # ACKNOWLEDGEMENTS We thank Charles Beattie, Julian Schrittwieser, Marcus Wainwright, and Stig Petersen for environment design and development, and Amir Sadik and Sarah York for expert human game testing. We also thank Joseph Modayil, Andrea Banino, Hubert Soyer, Razvan Pascanu, and Raia Hadsell for many helpful discussions. # REFERENCES André Barreto, Rémi Munos, Tom Schaul, and David Silver. Successor features for transfer in reinforcement learning. arXiv preprint arXiv:1606.05312, 2016.
1611.05397#35
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
36
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012. OpenArena contributors. The openarena manual. 2005. URL http://openarena.wikia.com/wiki/Manual. Peter Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613–624, 1993. Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with lstm. Neural computation, 12(10):2451–2471, 2000. id software. Quake3. 1999. URL https://github.com/id-Software/Quake-III-Arena. Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. Vizdoom: A doom-based ai research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
1611.05397#36
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
37
George Konidaris and Andre S Barreto. Skill discovery in continuous reinforcement learning domains using skill chaining. In Advances in Neural Information Processing Systems, pp. 1015–1023, 2009. Tejas D Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J Gershman. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396, 2016. Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. CoRR, abs/1609.05521, 2016. Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, and Ji He. Recurrent reinforcement learning: A hybrid approach. arXiv preprint arXiv:1509.03044, 2015. Long-Ji Lin and Tom M Mitchell. Memory approaches to reinforcement learning in non-markovian domains. Technical report, Carnegie Mellon University, School of Computer Science, 1992.
1611.05397#37
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
38
Long-Ji Lin and Tom M Mitchell. Memory approaches to reinforcement learning in non-markovian domains. Technical report, Carnegie Mellon University, School of Computer Science, 1992. Piotr Mirowski, Razvan Pascanu, Fabio Viola, Andrea Banino, Hubert Soyer, Andy Ballard, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. 2016. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. In NIPS Deep Learning Workshop. 2013.
1611.05397#38
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
39
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 02 2015. URL http://dx.doi.org/10.1038/nature14236. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1928–1937, 2016. Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in atari games. In Advances in Neural Information Processing Systems, pp. 2863–2871, 2015.
1611.05397#39
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
40
Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in minecraft. arXiv preprint arXiv:1605.09128, 2016. Jing Peng and Ronald J Williams. Incremental multi-step q-learning. Machine Learning, 22(1-3):283–290, 1996. Daniel L Schacter, Donna Rose Addis, Demis Hassabis, Victoria C Martin, R Nathan Spreng, and Karl K Szpunar. The future of memory: remembering, imagining, and the brain. Neuron, 76(4):677–694, 2012. Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1312–1320, 2015a. Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015b. Jürgen Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010.
1611.05397#40
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
41
David Silver and Kamil Ciosek. Compositional planning using optimal option models. arXiv preprint arXiv:1206.6473, 2012. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057– 1063, 1999a. Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 1999b.
1611.05397#41
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
42
Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 761–768. International Foundation for Autonomous Agents and Multiagent Systems, 2011. Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. arXiv preprint arXiv:1604.07255, 2016. Z. Wang, N. de Freitas, and M. Lanctot. Dueling Network Architectures for Deep Reinforcement Learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016. Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989. Christopher Xie, Sachin Patil, Teodor Mihai Moldovan, Sergey Levine, and Pieter Abbeel. Model-based reinforcement learning with parametrized physical models and optimism-driven exploration. CoRR, abs/1509.06824, 2015.
1611.05397#42
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
45
# B IMPLEMENTATION DETAILS The input to the agent at each timestep was an 84×84 RGB image. All agents processed the input with the convolutional neural network (CNN) originally used for Atari by Mnih et al. (2013). The network consists of two convolutional layers. The first one has 16 8×8 filters applied with stride 4, while the second one has 32 4×4 filters with stride 2. This is followed by a fully connected layer with 256 units. All three layers are followed by a ReLU non-linearity. All agents used an LSTM with forget gates (Gers et al., 2000) with 256 cells which take in the CNN-encoded observation concatenated with the previous action taken and current reward. The policy and value function are linear projections of the LSTM output. The agent is trained with 20-step unrolls. The action space of the agent in the environment is game dependent for Atari (between 3 and 18 discrete actions), and 17 discrete actions for Labyrinth. Labyrinth runs at 60 frames-per-second. We use an action repeat of four, meaning that each action is repeated four times, with the agent receiving the final fourth frame as input to the next processing step.
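The description above is concrete enough to sketch in code. The following is a minimal, illustrative PyTorch reconstruction of the base CNN+LSTM network; the paper does not specify a framework, and the class name BaseAgentNet and argument names are assumptions of this sketch, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseAgentNet(nn.Module):
    """Sketch of the CNN + LSTM base agent described above (names are illustrative)."""

    def __init__(self, num_actions):
        super().__init__()
        # Two convolutional layers as in Mnih et al. (2013): 16 8x8 filters with
        # stride 4, then 32 4x4 filters with stride 2, each followed by a ReLU.
        self.conv1 = nn.Conv2d(3, 16, kernel_size=8, stride=4)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2)
        # An 84x84 input becomes 20x20 after conv1 and 9x9 after conv2, so 32*9*9 features.
        self.fc = nn.Linear(32 * 9 * 9, 256)
        # LSTM input = CNN features + one-hot previous action + previous reward scalar.
        self.lstm = nn.LSTMCell(256 + num_actions + 1, 256)
        # Policy and value are linear projections of the LSTM output.
        self.policy = nn.Linear(256, num_actions)
        self.value = nn.Linear(256, 1)

    def forward(self, obs, prev_action_onehot, prev_reward, hidden):
        # obs: (B, 3, 84, 84); prev_action_onehot: (B, num_actions); prev_reward: (B, 1)
        x = F.relu(self.conv1(obs))
        x = F.relu(self.conv2(x))
        x = F.relu(self.fc(x.flatten(1)))
        lstm_in = torch.cat([x, prev_action_onehot, prev_reward], dim=1)
        h, c = self.lstm(lstm_in, hidden)   # hidden is a (h, c) tuple of (B, 256) tensors
        return F.softmax(self.policy(h), dim=1), self.value(h), (h, c)
```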
1611.05397#45
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
46
For the pixel control auxiliary tasks we trained policies to control the central 80×80 crop of the inputs. The cropped region was subdivided into a 20×20 grid of 4×4 cells. The instantaneous reward in each cell was defined as the average absolute difference from the previous frame, where the average is taken over both pixels and channels in the cell. The output tensor of auxiliary values, Qaux, is produced from the LSTM outputs by a deconvolutional network. The LSTM outputs are first mapped to a 32×7×7 spatial feature map with a linear layer followed by a ReLU. Deconvolution layers with 1 and Nact filters of size 4×4 map the 7×7 feature map into a value tensor and an advantage tensor respectively. The spatial map is then decoded into Q-values using the dueling parametrization (Wang et al., 2016), producing the Nact×20×20 tensor Qaux. The architecture for feature control was similar. We learned to control the second hidden layer, which is a spatial feature map with size 32×9×9. Similarly to pixel control, we exploit the spatial structure in the data and used a deconvolutional network to produce Qaux from the LSTM outputs. Further details are included in the supplementary materials.
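As a rough illustration of the pixel-change pseudo-rewards defined above, the sketch below computes the 20×20 grid of cell rewards from two consecutive frames with NumPy. The function name and the (H, W, C) frame layout are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def pixel_change_rewards(frame, prev_frame, crop=80, cell=4):
    """Pseudo-rewards for the pixel control task: mean absolute per-cell change
    of the central crop between consecutive frames (illustrative reconstruction)."""
    h, w, _ = frame.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    a = frame[top:top + crop, left:left + crop].astype(np.float32)
    b = prev_frame[top:top + crop, left:left + crop].astype(np.float32)
    diff = np.abs(a - b)                      # (crop, crop, C)
    n = crop // cell                          # 20 cells per side for an 80/4 split
    # Average the absolute difference over the pixels and channels of each 4x4 cell.
    diff = diff.reshape(n, cell, n, cell, -1)
    return diff.mean(axis=(1, 3, 4))          # (20, 20) grid of pseudo-rewards

# Example with two random 84x84 RGB frames:
r = pixel_change_rewards(np.random.randint(0, 255, (84, 84, 3)),
                         np.random.randint(0, 255, (84, 84, 3)))
assert r.shape == (20, 20)
```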
1611.05397#46
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
47
The reward prediction task is performed on a sequence of three observations, which are fed through three instances of the agent’s CNN. The three encoded CNN outputs are concatenated and fed through a fully connected layer of 128 units with ReLU activations, followed by a final linear three-class classifier and softmax. The reward is predicted as one of three classes: positive, negative, or zero and trained with a task weight λRP = 1. The value function replay is performed on a sequence of length 20 with a task weight λVR = 1. The auxiliary tasks are performed every 20 environment steps, corresponding to every update of the base A3C agent, once the replay buffer has filled with agent experience. The replay buffer stores the most recent 2k observations, actions, and rewards taken by the base agent.
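A possible PyTorch sketch of the reward prediction head described above is shown below; the class name, and the assumption that it reuses the 32×9×9 CNN features of the encoder sketched earlier, are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn

class RewardPredictionHead(nn.Module):
    """Classifies the upcoming reward of a three-frame sequence as positive,
    negative, or zero from concatenated CNN features (illustrative sketch)."""

    def __init__(self, cnn_feature_size=32 * 9 * 9):
        super().__init__()
        self.fc = nn.Linear(3 * cnn_feature_size, 128)   # 128 ReLU units
        self.classifier = nn.Linear(128, 3)              # positive, negative, zero

    def forward(self, frame_features):
        # frame_features: list of three (B, cnn_feature_size) tensors from the shared CNN.
        x = torch.relu(self.fc(torch.cat(frame_features, dim=1)))
        return self.classifier(x)   # logits; trained with cross-entropy, task weight 1
```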
1611.05397#47
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
48
The agents are optimised over 32 asynchronous threads with shared RMSprop (Mnih et al., 2016). The learning rates are sampled from a log-uniform distribution between 0.0001 and 0.005. The entropy costs are sampled from a log-uniform distribution between 0.0005 and 0.01. The task weight λPC is sampled from a log-uniform distribution between 0.01 and 0.1 for Labyrinth, and between 0.0001 and 0.01 for Atari (Atari games are not homogeneous in the magnitude of their pixel intensity changes, so this normalisation factor has to be fit separately). # C LABYRINTH LEVELS
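For reference, log-uniform hyperparameter sampling of the kind described above can be written in a few lines of Python; the helper name is hypothetical and the draws below simply reuse the quoted ranges.

```python
import math
import random

def log_uniform(low, high):
    """Sample from a log-uniform distribution over [low, high]."""
    return math.exp(random.uniform(math.log(low), math.log(high)))

# Example draws matching the ranges quoted above:
learning_rate = log_uniform(0.0001, 0.005)
entropy_cost = log_uniform(0.0005, 0.01)
lambda_pc_labyrinth = log_uniform(0.01, 0.1)
```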
1611.05397#48
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.05397
49
Figure 7: Top-down renderings of each Labyrinth level (stairway_to_melon, seekavoid_arena_01, nav_maze_01, nav_maze_02, nav_maze_03, lt_horse_shoe_color, lt_hallway_slope; rewards shown include +1 apple, +10 melon/goal, -1 lemon). The nav_maze levels show one example maze layout; in the all random case, a new maze was randomly generated at the start of each episode. Figure 8: Example images from the agent’s egocentric viewpoint for each Labyrinth level.
1611.05397#49
Reinforcement Learning with Unsupervised Auxiliary Tasks
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\times$ and averaging 87\% expert human performance on Labyrinth.
http://arxiv.org/pdf/1611.05397
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
cs.LG, cs.NE
null
null
cs.LG
20161116
20161116
[ { "id": "1605.02097" }, { "id": "1604.07255" }, { "id": "1606.02396" }, { "id": "1605.09128" }, { "id": "1606.05312" }, { "id": "1511.05952" }, { "id": "1509.03044" } ]
1611.03673
0
Under review as a conference paper at ICLR 2017 # LEARNING TO NAVIGATE IN COMPLEX ENVIRONMENTS Piotr Mirowski∗, Razvan Pascanu∗, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell # DeepMind London, UK {piotrmirowski, razp, fviola, soyer, aybd, abanino, mdenil, goroshin, sifre, korayk, dkumaran, raia}@google.com # ABSTRACT
1611.03673#0
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
0
# Learning to Learn without Gradient Descent by Gradient Descent # Yutian Chen 1 Matthew W. Hoffman 1 Sergio Gómez Colmenarejo 1 Misha Denil 1 Timothy P. Lillicrap 1 Matt Botvinick 1 Nando de Freitas 1 Abstract We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning. # 1. Introduction
1611.03824#0
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
1
# ABSTRACT Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour1, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities. # INTRODUCTION
1611.03673#1
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
1
# 1. Introduction Findings in developmental psychology have revealed that infants are endowed with a small number of separable systems of core knowledge for reasoning about objects, actions, number, space, and possibly social interactions (Spelke and Kinzler, 2007). These systems enable infants to learn many skills and acquire knowledge rapidly. The most coherent explanation of this phenomenon is that the learning (or optimization) process of evolution has led to the emergence of components that enable fast and varied forms of learning. In psychology, learning to learn has a long history (Ward, 1937; Harlow, 1949; Kehoe, 1988). Inspired by this, many researchers have attempted to build agents capable of learning to learn (Schmidhuber, 1987; Naik and Mammone, 1992; Thrun and Pratt, 1998; Hochreiter et al., 2001; Santoro et al., 2016; Duan et al., 2016; Wang et al., 2016; Ravi and Larochelle, 2017; Li and Malik, 2017). The scope of research under the umbrella of learning to learn is very broad. The learner can implement and be trained by many different algorithms, including gra- 1DeepMind, London, United Kingdom. Correspondence to: Yutian Chen <[email protected]>. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
1611.03824#1
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
2
# INTRODUCTION The ability to navigate efficiently within an environment is fundamental to intelligent behavior. Whilst conventional robotics methods, such as Simultaneous Localisation and Mapping (SLAM), tackle navigation through an explicit focus on position inference and mapping (Dissanayake et al., 2001), here we follow recent work in deep reinforcement learning (Mnih et al., 2015; 2016) and propose that navigational abilities could emerge as the by-product of an agent learning a policy that maximizes reward. One advantage of an intrinsic, end-to-end approach is that actions are not divorced from representation, but rather learnt together, thus ensuring that task-relevant features are present in the representation. Learning to navigate from reinforcement learning in partially observable environments, however, poses several challenges. First, rewards are often sparsely distributed in the environment, where there may be only one goal location. Second, environments often comprise dynamic elements, requiring the agent to use memory at different timescales: rapid one-shot memory for the goal location, together with short term memory subserving temporal integration of velocity signals and visual observations, and longer term memory for constant aspects of the environment (e.g. boundaries, cues).
1611.03673#2
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
2
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). dient descent, evolutionary strategies, simulated annealing, and reinforcement learning. For instance, one can learn to learn by gradient descent by gradient descent, or learn local Hebbian updates by gradient descent (Andrychowicz et al., 2016; Bengio et al., 1992). In the former, one uses supervised learning at the meta-level to learn an algorithm for supervised learning, while in the latter, one uses supervised learning at the meta-level to learn an algorithm for unsupervised learning. Learning to learn can be used to learn both models and algorithms. In Andrychowicz et al. (2016) the output of meta-learning is a trained recurrent neural network (RNN), which is subsequently used as an optimization algorithm to fit other models to data. In contrast, in Zoph and Le (2017) the output of meta-learning can also be an RNN model, but this new RNN is subsequently used as a model that is fit to data using a classical optimizer. In both cases the output of meta-learning is an RNN, but this RNN is interpreted and applied as a model or as an algorithm. In this sense, learning to learn with neural networks blurs the classical distinction between models and algorithms.
1611.03824#2
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
3
To improve statistical efficiency we bootstrap the reinforcement learning procedure by augmenting our loss with auxiliary tasks that provide denser training signals that support navigation-relevant representation learning. We consider two additional losses: the first one involves reconstruction of a low-dimensional depth map at each time step by predicting one input modality (the depth channel) from others (the colour channels). This auxiliary task concerns the 3D geometry of the environment, and is aimed to encourage the learning of representations that aid obstacle avoidance and short-term trajectory planning. The second task directly invokes loop closure from SLAM: the agent is trained to predict if the current location has been previously visited within a local trajectory. ∗Denotes equal contribution 1A video illustrating the navigation agents is available at: https://youtu.be/JL8F82qUG-Q
1611.03673#3
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
3
In this work, the goal of meta-learning is to produce an algorithm for global black-box optimization. Specifically, we address the problem of finding a global minimizer of an unknown (black-box) loss function f. That is, we wish to compute x∗ = arg min_{x ∈ X} f(x), where X is some search space of interest. The black-box function f is not available to the learner in simple closed form at test time, but can be evaluated at a query point x in the domain. This evaluation produces either deterministic or stochastic outputs y ∈ R such that f(x) = E[y | f(x)]. In other words, we can only observe the function f through unbiased noisy point-wise observations y. Bayesian optimization is one of the most popular black-box optimization methods (Brochu et al., 2009; Snoek et al., 2012; Shahriari et al., 2016). It is a sequential model-based decision making approach with two components. The first component is a probabilistic model, consisting of a prior distribution that captures our beliefs about the behavior of the unknown objective function and an observation model that describes the data generation mechanism.
1611.03824#3
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
4
Figure 1: Views from a small 5 × 10 maze, a large 9 × 15 maze and an I-maze, with corresponding maze layouts and sample agent trajectories. The mazes, which will be made public, have different textures and visual cues as well as exploration rewards and goals (shown right). To address the memory requirements of the task we rely on a stacked LSTM architecture (Graves et al., 2013; Pascanu et al., 2013). We evaluate our approach using five 3D maze environments and demonstrate the accelerated learning and increased performance of the proposed agent architecture. These environments feature complex geometry, random start position and orientation, dynamic goal locations, and long episodes that require thousands of agent steps (see Figure 1). We also provide detailed analysis of the trained agent to show that critical navigation skills are acquired. This is important as neither position inference nor mapping are directly part of the loss; therefore, raw performance on the goal finding task is not necessarily a good indication that these skills are acquired. In particular, we show that the proposed agent resolves ambiguous observations and quickly localizes itself in a complex maze, and that this localization capability is correlated with higher task reward. # 2 APPROACH
1611.03673#4
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
4
The model can be a Beta-Bernoulli bandit, a random forest, a Bayesian neural network, or a Gaussian process (GP) (Shahriari et al., 2016). Bayesian optimization is however often associated with GPs, to the point of sometimes being referred to as GP bandits (Srinivas et al., 2010). The second component is an acquisition function, which is optimized at each step so as to trade-off exploration and exploitation. Here again we encounter a huge variety of strategies, including Thompson sampling, information gain, probability of improvement, expected improvement, upper confidence bounds (Shahriari et al., 2016). The requirement for optimizing the acquisition function at each step can be a significant cost, as shown in the empirical section of this paper. It also raises some theoretical concerns (Wang et al., 2014). # 2. Learning Black-box Optimization A black-box optimization algorithm can be summarized by the following loop: 1. Given the current state of knowledge ht propose a query point xt 2. Observe the response yt 3. Update any internal statistics to produce ht+1
1611.03824#4
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
5
# 2 APPROACH We rely on an end-to-end learning framework that incorporates multiple objectives. Firstly it tries to maximize cumulative reward using an actor-critic approach. Secondly it minimizes an auxiliary loss of inferring the depth map from the RGB observation. Finally, the agent is trained to detect loop closures as an additional auxiliary task that encourages implicit velocity integration. The reinforcement learning problem is addressed with the Asynchronous Advantage Actor-Critic (A3C) algorithm (Mnih et al., 2016) that relies on learning both a policy π(at|st; θ) and value function V (st; θV ) given a state observation st. Both the policy and value function share all intermediate representations, both being computed using a separate linear layer from the topmost layer of the model. The agent setup closely follows the work of (Mnih et al., 2016) and we refer to this work for the details (e.g. the use of a convolutional encoder followed by either an MLP or an LSTM, the use of action repetition, entropy regularization to prevent the policy saturation, etc.). These details can also be found in the Appendix B.
1611.03673#5
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
5
In this paper, we present a learning to learn approach for global optimization of black-box functions and contrast it with Bayesian optimization. In the meta-learning phase, we use a large number of differentiable functions generated with a GP to train RNN optimizers by gradient descent. We consider two types of RNN: long-short-term memory networks (LSTMs) by Hochreiter and Schmidhuber (1997) and differentiable neural computers (DNCs) by Graves et al. (2016). 1. Given the current state of knowledge ht propose a query point xt 2. Observe the response yt 3. Update any internal statistics to produce ht+1 This easily maps onto the classical frameworks presented in the previous section where the update step computes statistics and the query step uses these statistics for exploration. In this work we take this framework as a starting point and define a combined update and query rule using a recurrent neural network parameterized by θ such that
1611.03824#5
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
6
The baseline that we consider in this work is an A3C agent (Mnih et al., 2016) that receives only RGB input from the environment, using either a recurrent or a purely feed-forward model (see Figure 2a,b). The encoder for the RGB input (used in all other considered architectures) is a 3 layer convolutional network. To support the navigation capability of our approach, we also rely on the Nav A3C agent (Figure 2c) which employs a two-layer stacked LSTM after the convolutional encoder. We expand the observations of the agents to include agent-relative velocity, the action sampled from the stochastic policy and the immediate reward, from the previous time step. We opt to feed the velocity and previously selected action directly to the second recurrent layer, with the first layer only receiving the reward. We postulate that the first layer might be able to make associations between reward and visual observations that are provided as context to the second layer from which the policy is computed. Thus, the observation st may include an image xt ∈ R3×W×H (where W and H are the width and
1611.03673#6
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
6
During meta-learning, we choose the horizon (number of steps) of the optimization process. We are therefore considering the finite horizon setting that is popular in AB tests (Kohavi et al., 2009; Scott, 2010) and is often studied under the umbrella of best arm identification in the bandits literature (Bubeck et al., 2009; Gabillon et al., 2012). The RNN optimizer learns to use its memory to store information about previous queries and function evaluations, and learns to access its memory to make decisions about which parts of the domain to explore or exploit next. That is, by unrolling the RNN, we generate new candidates for the search process. The experiments will show that this process is much faster than applying standard Bayesian optimization, and in particular it does not involve either matrix inversion or optimization of acquisition functions. In the experiments we also investigate distillation of acquisition functions to guide the process of training the RNN optimizers, and the use of parallel optimization schemes for expensive training of deep networks.
1611.03824#6
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
7
Figure 2: Different architectures (panels a. FF A3C, b. LSTM A3C, c. Nav A3C, d. Nav A3C+D1D2L): (a) is a convolutional encoder followed by a feedforward layer and policy (π) and value function outputs; (b) has an LSTM layer; (c) uses additional inputs (agent-relative velocity, reward, and action), as well as a stacked LSTM; and (d) has additional outputs to predict depth and loop closures. height of the image), the agent-relative lateral and rotational velocity vt ∈ R6, the previous action at−1 ∈ RNA, and the previous reward rt−1.
1611.03673#7
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
7
The experiments show that the learned optimizers can transfer to optimize a large and diverse set of black-box functions arising in global optimization, control, and hyper-parameter tuning. Moreover, within the training horizon, the RNN optimizers are competitive with state-of-the-art heavily engineered packages such as Spearmint, SMAC and TPE (Snoek et al., 2014; Hutter et al., 2011a; Bergstra et al., 2011). ht, xt = RNNθ(ht−1, xt−1, yt−1), (1) yt ∼ p(y | xt). (2) Intuitively this rule can be seen to update its hidden state using data from the previous time step and then propose a new query point. In what follows we will apply this RNN, with shared parameters, to many steps of a black-box optimization process. An example of this computation is shown in Figure 1. Additionally, note that in order to generate the first query x1 we arbitrarily set the initial “observations” to dummy values x0 = 0 and y0 = 0; this is a point we will return to in Section 2.3. # 2.1. Loss Function
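A rough PyTorch sketch of the update/query rule in equations (1)–(2) is given below; the class name, hidden size, and toy objective are illustrative assumptions rather than the architecture actually used in the paper.

```python
import torch
import torch.nn as nn

class RNNOptimizer(nn.Module):
    """Learned black-box optimizer: an LSTM consumes the previous query and
    observation and proposes the next query point (illustrative sketch)."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim, self.hidden = dim, hidden
        self.cell = nn.LSTMCell(dim + 1, hidden)   # input: (x_{t-1}, y_{t-1})
        self.propose = nn.Linear(hidden, dim)      # maps hidden state to x_t

    def unroll(self, f, horizon):
        # Dummy initial "observations" x0 = 0, y0 = 0, as described in the text.
        x, y = torch.zeros(1, self.dim), torch.zeros(1, 1)
        state = (torch.zeros(1, self.hidden), torch.zeros(1, self.hidden))
        trace = []
        for _ in range(horizon):
            state = self.cell(torch.cat([x, y], dim=1), state)  # update h_t   (eq. 1)
            x = self.propose(state[0])                          # propose x_t  (eq. 1)
            y = f(x)                                            # noisy y_t    (eq. 2)
            trace.append((x, y))
        return trace

# Example: run the (untrained) optimizer on a noisy quadratic for 10 steps.
opt = RNNOptimizer(dim=2)
trace = opt.unroll(lambda x: (x ** 2).sum(dim=1, keepdim=True) + 0.01 * torch.randn(1, 1),
                   horizon=10)
```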
1611.03824#7
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
8
height of the image), the agent-relative lateral and rotational velocity vt ∈ R6, the previous action at−1 ∈ RNA, and the previous reward rt−1. Figure 2d shows the augmentation of the Nav A3C with the different possible auxiliary losses. In particular we consider predicting depth from the convolutional layer (we will refer to this choice as D1), or from the top LSTM layer (D2) or predicting loop closure (L). The auxiliary losses are computed on the current frame via a single layer MLP. The agent is trained by applying a weighted sum of the gradients coming from A3C, the gradients from depth prediction (multiplied with βd1, βd2) and the gradients from the loop closure (scaled by βl). More details of the online learning algorithm are given in Appendix B. 2.1 DEPTH PREDICTION
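As a small illustration of the weighted objective described above, the sketch below combines the A3C loss with the auxiliary depth and loop-closure losses so that backpropagation yields the weighted sum of gradients; the function name and default weights are placeholders, not values from the paper.

```python
import torch

def nav_a3c_loss(a3c_loss, depth1_loss, depth2_loss, loop_loss,
                 beta_d1=1.0, beta_d2=1.0, beta_l=1.0):
    """Total Nav A3C objective: A3C loss plus depth losses scaled by beta_d1/beta_d2
    and the loop-closure loss scaled by beta_l (weights here are placeholders)."""
    return a3c_loss + beta_d1 * depth1_loss + beta_d2 * depth2_loss + beta_l * loop_loss

# Example with dummy scalar losses:
total = nav_a3c_loss(torch.tensor(1.2), torch.tensor(0.4),
                     torch.tensor(0.3), torch.tensor(0.1))
```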
1611.03673#8
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
8
# 2.1. Loss Function Given this rule we now need a way to learn the parameters θ with stochastic gradient descent for any given distribution of differentiable functions p(f). Perhaps the simplest loss function one could use is the loss of the final iteration: Lfinal(θ) = Ef,y1:T−1[f(xT)] for some time-horizon T. This loss was considered by Andrychowicz et al. (2016) in the context of learning first-order optimizers, but ultimately rejected in favor of the summed loss Lsum(θ) = Ef,y1:T−1[ ∑t=1..T f(xt) ]. (3)
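The two objectives contrasted above can be written in a couple of lines; the sketch below assumes the (x, y) trace produced by an unrolled optimizer such as the one sketched earlier, and is illustrative only.

```python
import torch

def summed_loss(f, trace):
    """L_sum from equation (3) for one sampled function f: the true value of every
    query along the unrolled trajectory, summed over t = 1..T. During training this
    quantity is averaged over functions f drawn from the GP prior (not shown here)."""
    return torch.stack([f(x) for x, _ in trace]).sum()

def final_loss(f, trace):
    """The alternative L_final: only the value of the last query x_T."""
    return f(trace[-1][0])
```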
1611.03824#8
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
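The summed loss in equation (3) of the chunk above can be turned into a small meta-training loop. The following is a hedged sketch, not the paper's implementation: the optimizer is a plain LSTM cell that proposes the next query from the previous query/observation pair, the training functions are random quadratics standing in for the paper's sampled functions, and the dimensions, horizon and learning rate are arbitrary placeholders.

```python
import torch
import torch.nn as nn

DIM, HIDDEN, T_HORIZON = 2, 32, 10  # hypothetical sizes and horizon

class RNNOptimizer(nn.Module):
    """Maps the previous query x_t and observation y_t to the next query x_{t+1}."""
    def __init__(self):
        super().__init__()
        self.cell = nn.LSTMCell(DIM + 1, HIDDEN)
        self.out = nn.Linear(HIDDEN, DIM)

    def forward(self, x, y, state):
        h, c = self.cell(torch.cat([x, y], dim=-1), state)
        return self.out(h), (h, c)

def sample_quadratics(batch):
    """Random quadratics f(x) = ||x - m||^2 as stand-ins for the training distribution p(f)."""
    m = torch.randn(batch, DIM)
    return lambda x: ((x - m) ** 2).sum(dim=-1, keepdim=True)

opt_net = RNNOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

for step in range(1000):
    f = sample_quadratics(batch=64)
    x = torch.zeros(64, DIM)
    y = f(x)
    state = (torch.zeros(64, HIDDEN), torch.zeros(64, HIDDEN))
    loss_sum = y.mean()                      # L_sum accumulates f(x_t) at every step, not just the last
    for t in range(T_HORIZON - 1):
        x, state = opt_net(x, y, state)
        y = f(x)                             # differentiable w.r.t. x, as assumed at training time
        loss_sum = loss_sum + y.mean()
    meta_opt.zero_grad()
    loss_sum.backward()                      # gradients flow through f into the RNN parameters θ
    meta_opt.step()
```

Replacing loss_sum with only the final y.mean() recovers L_final, which provides a much sparser training signal.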
1611.03673
9
2.1 DEPTH PREDICTION The primary input to the agent is in the form of RGB images. However, depth information, covering the central field of view of the agent, might supply valuable information about the 3D structure of the environment. While depth could be directly used as an input, we argue that it is actually more valuable to the learning process when presented as an additional loss. In particular, if the prediction loss shares a representation with the policy, it could help build useful features for RL much faster, bootstrapping learning. Since we know from Eigen et al. (2014) that a single frame can be enough to predict depth, we know this auxiliary task can be learnt. A comparison between having depth as an input versus as an additional loss is given in Appendix C, which shows a significant gain for depth as a loss.
1611.03673#9
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
9
$L_{\mathrm{sum}}(\theta) = \mathbb{E}_{f,\,y_{1:T-1}}\left[\sum_{t=1}^{T} f(x_t)\right]$. (3) A key reason to prefer L_sum is that the amount of information conveyed by L_final is temporally very sparse. By instead utilizing a sum of losses to train the optimizer we are able to provide information from every step along this trajectory. Although at test time the optimizer typically only has access to the observation y_t, at training time the true loss can be used. Note that optimizing the summed loss is equivalent to finding a strategy which minimizes the expected cumulative regret. Finally, while in many optimization tasks the loss associated with the best observation min_t f(x_t) is often desired, the cumulative regret can be seen as a proxy for this quantity. Figure 1. Computational graph of the learned black-box optimizer unrolled over multiple steps. The learning process will consist of differentiating the given loss with respect to the RNN parameters
1611.03824#9
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
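The equivalence between minimizing the summed loss and minimizing expected cumulative regret, asserted in the chunk above, follows from a one-line rearrangement; the LaTeX below spells it out, with $x^\star$ denoting a minimizer of $f$ (notation otherwise follows the surrounding text).

```latex
% Cumulative regret of the query sequence x_1, ..., x_T on a function f:
R_T(f) = \sum_{t=1}^{T} \bigl( f(x_t) - f(x^\star) \bigr),
  \qquad x^\star \in \arg\min_x f(x).
% Since f(x^\star) does not depend on the optimizer's parameters \theta,
\mathbb{E}_{f,\,y_{1:T-1}}\!\left[ R_T(f) \right]
  = \mathbb{E}_{f,\,y_{1:T-1}}\!\left[ \sum_{t=1}^{T} f(x_t) \right]
    - T\,\mathbb{E}_{f}\bigl[ f(x^\star) \bigr]
  = L_{\mathrm{sum}}(\theta) + \mathrm{const},
% so the two objectives have the same minimizers in \theta.
```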
1611.03673
10
Since the role of the auxiliary loss is just to build up the representation of the model, we do not necessarily care about the specific performance obtained or nature of the prediction. We do care about the data efficiency aspect of the problem and also computational complexity. If the loss is to be useful for the main task, we should converge faster on it compared to solving the RL problem (using fewer data samples), and the additional computational cost should be minimal. To achieve this we use a low resolution variant of the depth map, reducing the screen resolution to 4x16 pixels². We explore two different variants for the loss; a sketch of the classification variant is given below. The first choice is to phrase it as a regression task, the most natural choice. While this formulation, combined with a higher depth resolution, extracts the most information, mean square error imposes a unimodal distribution (van den Oord et al., 2016). To address this possible issue, we also consider a classification loss, where depth at each position is discretised into 8 different bands. The bands are non-uniformly distributed such that we pay more attention to far-away objects (details in Appendix B). The motivation for the classification formulation is that while it greatly reduces the resolution of depth, it is more flexible from a learning perspective and can result in faster convergence (hence faster bootstrapping).
1611.03673#10
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
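A minimal sketch of the depth discretisation and the two loss variants described in the chunk above. The band edges below are hypothetical placeholders (the paper's non-uniform bands, weighted toward far-away objects, are specified in its Appendix B); only the 4x16 resolution and the choice of 8 bands come from the text.

```python
import torch
import torch.nn.functional as F

# Hypothetical band edges; 7 edges define 8 bands. The paper's non-uniform
# bands differ and are chosen to emphasise far-away objects.
BAND_EDGES = torch.tensor([0.5, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0])

def depth_to_bands(depth: torch.Tensor) -> torch.Tensor:
    """Map a (N, 4, 16) low-resolution depth map to integer band indices in [0, 7]."""
    return torch.bucketize(depth, BAND_EDGES)

def depth_classification_loss(logits: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over the 8 bands; logits has shape (N, 8, 4, 16)."""
    return F.cross_entropy(logits, depth_to_bands(depth))

def depth_regression_loss(pred: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
    """The regression alternative discussed in the text: plain mean squared error."""
    return F.mse_loss(pred, depth)
```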
1611.03824
10
Figure 1. Computational graph of the learned black-box optimizer unrolled over multiple steps. The learning process will consist of differentiating the given loss with respect to the RNN parameters. By using the above objective function we will be encouraged to trade off exploration and exploitation and hence globally optimize the function f. This is due to the fact that, in expectation, any method that is better able to explore and find small values of f(x) will be rewarded for these discoveries. However, the actual process of optimizing the above loss can be difficult due to the fact that nothing explicitly encourages the optimizer itself to explore. To train the optimizer we will simply take derivatives of the loss with respect to the RNN parameters θ and perform stochastic gradient descent (SGD). In order to evaluate these derivatives we assume that derivatives of f can be computed with respect to its inputs. This assumption is made only in order to backpropagate errors from the loss to the parameters, but crucially is not needed at test time. If the derivatives of f are also not available at training time then it would be necessary to approximate these derivatives via an algorithm such as REINFORCE (Williams, 1992). We can encourage exploration in the space of optimizers by encoding an exploratory force directly into the meta-learning loss function. Many examples exist in the bandit and Bayesian optimization communities, for example
1611.03824#10
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
11
² The image is cropped before being subsampled to lessen the floor and ceiling, which have little relevant depth information. 2.2 LOOP CLOSURE PREDICTION Loop closure, like depth, is valuable for a navigating agent, since it can be used for efficient exploration and spatial reasoning. To produce the training targets, we detect loop closures based on the similarity of local position information during an episode, which is obtained by integrating 2D velocity over time. Specifically, in a trajectory denoted {p_0, p_1, ..., p_T}, where p_t is the position of the agent at time t, we define a loop closure label l_t that is equal to 1 if the position p_t of the agent is close to the position p_{t'} at an earlier time t'. In order to avoid trivial loop closures on consecutive points of the trajectory, we add an extra condition that an intermediate position p_{t''} is far from p_t. Thresholds η_1 and η_2 provide these two limits; a sketch of this labelling rule is given below. Learning to predict the binary loop label is done by minimizing the Bernoulli loss L_l between l_t and the output of a single layer applied to the hidden representation h_t of the last hidden layer of the model, followed by a sigmoid activation. # 3 RELATED WORK
1611.03673#11
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
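The two-threshold labelling rule for loop closure described in the chunk above translates directly into code. The sketch below is illustrative: the threshold values are placeholders, the positions are assumed to have already been obtained by integrating 2D velocity, and only the test itself (close to an earlier point p_{t'}, after some intermediate point p_{t''} was far away) follows the text.

```python
import numpy as np

def loop_closure_labels(positions: np.ndarray, eta1: float = 1.0, eta2: float = 2.0) -> np.ndarray:
    """positions: (T, 2) array of agent positions p_0, ..., p_{T-1}.

    l_t = 1 if some earlier position p_{t'} lies within eta1 of p_t AND some
    intermediate position p_{t''} (t' < t'' < t) lies farther than eta2 from p_t,
    ruling out trivial loop closures between consecutive points.
    """
    T = positions.shape[0]
    labels = np.zeros(T, dtype=np.int64)
    for t in range(T):
        dists = np.linalg.norm(positions[:t] - positions[t], axis=-1)  # distances from p_t to p_0..p_{t-1}
        for t_prime in np.flatnonzero(dists < eta1):
            if np.any(dists[t_prime + 1:] > eta2):                     # an intermediate point far from p_t
                labels[t] = 1
                break
    return labels
```

The resulting binary labels l_t are then the targets for the sigmoid output trained with the Bernoulli loss L_l.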
1611.03824
11
$L_{\mathrm{EI}}(\theta) = -\mathbb{E}_{f,\,y_{1:T-1}}\left[\sum_{t=1}^{T} \mathrm{EI}(x_t \mid y_{1:t-1})\right]$ (4) # 2.2. Training Function Distribution To this point we have made no assumptions about the distribution of training functions p(f). In this work we are interested in learning general-purpose black-box optimizers, and we desire our distribution to be quite broad. where EI(·) is the expected posterior improvement of querying x_t given observations up to time t. This can encourage exploration by giving an explicit bonus to the optimizer rather than just implicitly doing so by means of function evaluations. Alternatively, it is possible to use the observed improvement (OI) $L_{\mathrm{OI}}(\theta) = \mathbb{E}_{f,\,y_{1:T-1}}\left[\sum_{t=1}^{T} \min\{f(x_t) - \min_{i<t} f(x_i),\, 0\}\right]$ (5) We also studied a loss based on GP-UCB (Srinivas et al., 2010) but in preliminary experiments this did not perform as well as the EI loss and is thus not included in the later experiments. The illustration of Figure 1 shows the optimizer unrolled over many steps, ultimately culminating in the loss function. To train the optimizer we will simply take derivatives
1611.03824#11
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
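As a concrete reading of the observed-improvement loss in equation (5) above, the snippet below evaluates it for a single sampled function from a sequence of recorded evaluations. It is a sketch under stated assumptions: the evaluations are assumed to be collected in a 1-D tensor, and the outer expectation over f would in practice be a mean over a batch of sampled functions.

```python
import torch

def observed_improvement_loss(fx: torch.Tensor) -> torch.Tensor:
    """fx: shape (T,), the values f(x_1), ..., f(x_T) for one sampled function.

    Each step is only credited (negatively) when it improves on the best value
    observed so far, i.e. the summand is min{f(x_t) - min_{i<t} f(x_i), 0}.
    """
    loss = torch.zeros(())
    for t in range(1, fx.shape[0]):
        best_so_far = fx[:t].min()
        loss = loss + torch.clamp(fx[t] - best_so_far, max=0.0)
    return loss
```

The EI variant in equation (4) would instead sum the expected posterior improvement EI(x_t | y_{1:t-1}) computed from a surrogate model, with the leading minus sign turning maximization of improvement into loss minimization.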
1611.03673
12
# 3 RELATED WORK There is a rich body of work on navigation, primarily in the robotics literature. However, here we focus on related work in deep RL. Deep Q-networks (DQN) have had breakthroughs in extremely challenging domains such as Atari (Mnih et al., 2015). Recent work has developed on-policy RL methods such as advantage actor-critic that use asynchronous training of multiple agents in parallel (Mnih et al., 2016). Recurrent networks have also been successfully incorporated to enable state disambiguation in partially observable environments (Koutnik et al., 2013; Hausknecht & Stone, 2015; Mnih et al., 2016; Narasimhan et al., 2015).
1611.03673#12
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]