doi: string (length 10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8)
updated: string (length 8)
references: list
1611.03824
12
The illustration of Figure 1 shows the optimizer unrolled over many steps, ultimately culminating in the loss function. To train the optimizer we will simply take derivatives. As a result we propose the use of GPs as a suitable training distribution. Under the GP prior, the joint distribution of function values at any finite set of query points follows a multivariate Gaussian distribution (Rasmussen and Williams, 2006), and we generate a realization of the training function incrementally at the query points using the chain rule, with a total time complexity of O(T^3) for every function sample. The use of functions sampled from a GP prior also provides functions whose gradients can be easily evaluated at training time, as noted above. Further, the posterior expected improvement used within LEI can be easily computed (Močkus, 1982) and differentiated as well. Search strategies based on GP losses, such as LEI, can be thought of as distilled strategies. The major downside of search strategies based on GP inference is their cubic complexity.
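A minimal sketch of the incremental GP sampling described above: each new function value is drawn from the Gaussian conditional given all previously sampled values, so the joint draw follows the GP prior and conditioning on t points costs O(t^3). The squared-exponential kernel and all names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between two sets of points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def sample_gp_incrementally(queries, length_scale=1.0, noise=1e-6, rng=None):
    """Draw one GP realization at the query points, one point at a time."""
    rng = np.random.default_rng() if rng is None else rng
    X, y = [], []
    for x in queries:
        x = np.atleast_2d(x)
        if not X:
            mean, var = 0.0, rbf_kernel(x, x, length_scale)[0, 0]
        else:
            Xp = np.vstack(X)
            K = rbf_kernel(Xp, Xp, length_scale) + noise * np.eye(len(X))
            k = rbf_kernel(Xp, x, length_scale)
            Kinv_k = np.linalg.solve(K, k)
            mean = Kinv_k[:, 0] @ np.array(y)
            var = rbf_kernel(x, x, length_scale)[0, 0] - k[:, 0] @ Kinv_k[:, 0]
        y.append(mean + np.sqrt(max(var, 0.0)) * rng.standard_normal())
        X.append(x[0])
    return np.array(y)

# Example: one 2-D training function sampled at 30 query points.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=(30, 2))
ys = sample_gp_incrementally(xs, length_scale=0.5, rng=rng)
```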
1611.03824#12
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
13
Deep RL has recently been used in the navigation domain. Kulkarni et al. (2016) used a feedforward architecture to learn deep successor representations that enabled behavioral flexibility to reward changes in the MazeBase gridworld, and provided a means to detect bottlenecks in 3D VizDoom. Zhu et al. (2016) used a feedforward siamese actor-critic architecture incorporating a pretrained ResNet to support navigation to a target in a discretised 3D environment. Oh et al. (2016) investigated the performance of a variety of networks with external memory (Weston et al., 2014) on simple navigation tasks in the Minecraft 3D block world environment. Tessler et al. (2016) also used the Minecraft domain to show the benefit of combining feedforward deep-Q networks with the learning of reusable skill modules (cf options: (Sutton et al., 1999)) to transfer between navigation tasks. Tai & Liu (2016) trained a convnet DQN-based agent using depth channel inputs for obstacle avoidance in 3D environments. Barron et al. (2016) investigated how well a convnet can predict the depth channel from RGB in the Minecraft environment, but did not use depth for training the agent.
1611.03673#13
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
13
A well-trained optimizer must learn to condition on ot−1 in order to either generate initial queries or generate queries based on past observations. Another key point to consider is that the batch nature of the optimizer can result in the ordering of queries being permuted: i.e. although xt is proposed before xt+1 it is entirely plausible that xt+1 is evaluated first. In order to account for this at training time and not allow the optimizer to rely on a specific ordering, we simulate a runtime ∆t ∼ Uniform(1 − σ, 1 + σ) associated with the t-th query. Observations are then made based on the order in which they complete. It is worth noting that the sequential setting is a special case of this parallel policy where N = 1 and every observation is made with ot−1 = 1. Note also that we have kept the number of workers fixed for simplicity of explanation only. The architecture allows for the number of workers to vary.
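A sketch of the runtime simulation described above: each proposed query is given a Uniform(1 − σ, 1 + σ) runtime and observations return in completion order, so no fixed query/result ordering can be relied upon. The function and parameter names (e.g. `propose_query` standing in for the RNN optimizer) are illustrative assumptions.

```python
import heapq
import random

def simulate_parallel_evaluations(f, propose_query, n_workers, n_queries, sigma=0.25):
    """Simulate N asynchronous workers with Uniform(1-sigma, 1+sigma) runtimes."""
    clock, in_flight, observations = 0.0, [], []
    for t in range(n_queries):
        if len(in_flight) == n_workers:              # all workers busy: wait for one
            clock, _, x_done = heapq.heappop(in_flight)
            observations.append((x_done, f(x_done)))  # observed in completion order
        x = propose_query(observations)               # stand-in for the RNN proposal
        finish = clock + random.uniform(1 - sigma, 1 + sigma)
        heapq.heappush(in_flight, (finish, t, x))
    while in_flight:                                  # drain the remaining workers
        clock, _, x_done = heapq.heappop(in_flight)
        observations.append((x_done, f(x_done)))
    return observations

# Example with a toy 1-D objective and a random proposal rule.
obs = simulate_parallel_evaluations(
    f=lambda x: (x - 0.3) ** 2,
    propose_query=lambda history: random.random(),
    n_workers=4, n_queries=20)
```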
1611.03824#13
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
14
Auxiliary tasks have often been used to facilitate representation learning (Suddarth & Kergosien, 1990). Recently, the incorporation of additional objectives, designed to augment representation learning through auxiliary reconstructive decoding pathways (Zhang et al., 2016; Rasmus et al., 2015; Zhao et al., 2015; Mirowski et al., 2010), has yielded benefits in large scale classification tasks. In deep RL settings, however, only two previous papers have examined the benefit of auxiliary tasks. Specifically, Li et al. (2016) consider a supervised loss for fitting a recurrent model on the hidden representations to predict the next observed state, in the context of imitation learning of sequences provided by experts, and Lample & Chaplot (2016) show that the performance of a DQN agent in a first-person shooter game in the VizDoom environment can be substantially enhanced by the addition of a supervised auxiliary task, whereby the convolutional network was trained on an enemy-detection task, with information about the presence of enemies, weapons, etc., provided by the game engine. In contrast, our contribution addresses fundamental questions of how to learn an intrinsic repre- sentation of space, geometry, and movement while simultaneously maximising rewards through reinforcement learning. Our method is validated in challenging maze domains with random start and goal locations.
1611.03673#14
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
14
Figure 2. Graph depicting a single iteration of the parallel algorithm with N workers. Here we explicitly illustrate the fact that the query xt is being assigned to the i-th worker for evaluation, and that the next observation pair (˜xt, ˜yt) is the output of the j-th worker, which has completed its function evaluation. While training with a GP prior grants us the convenience of assessing the efficacy of our training algorithm by comparing head-to-head with GP-based methods, it is worth noting that our model can be trained with any distribution that permits efficient sampling and function differentiation. This flexibility could become useful when considering problems with specific prior knowledge and/or side information.
1611.03824#14
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
15
# 4 EXPERIMENTS We consider a set of first-person 3D mazes from the DeepMind Lab environment (Beattie et al., 2016) (see Fig. 1) that are visually rich, with additional observations available to the agent such as inertial [Figure panel labels: (a) Static maze (small); (b) Static maze (large); (c) Random Goal I-maze; (d) Random Goal maze (small); (e) Random Goal maze (large); (f) Random Goal maze (large): different formulation of depth prediction]
1611.03673#15
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
15
It is instructive to contrast this strategy with what is done in parallel Bayesian optimization (Desautels et al., 2014; Snoek et al., 2012). There, care must be taken to ensure that a diverse set of queries is utilized: in the absence of additional data the standard sequential strategy would propose N queries at the same point. Exactly computing the optimal N-step query is typically intractable, and as a result hand-engineered heuristics are employed. Often this involves synthetically reducing the uncertainty associated with outstanding queries in order to simulate later observations. In contrast, our RNN optimizer can store in its hidden state any relevant information about outstanding observations. Decisions about what to store are learned during training and as a result should be more directly related to later losses. # 2.3. Parallel Function Evaluation The use of parallel function evaluation is a common technique in Bayesian optimization, often used for costly but easy-to-simulate functions. For example, as illustrated in the experiments, when searching for hyper-parameters of deep networks, it is convenient to train several deep networks in parallel. Suppose we have N workers, and that the process of proposing candidates for function evaluation is much faster than evaluating the functions. We augment our RNN optimizer's input with a binary variable ot as follows:
1611.03824#15
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
16
[Figure 3 plot residue: per-task reward curves for the FF A3C, LSTM A3C, Nav A3C variants and a human expert; legend and axis tick text not recoverable.] Figure 3: Rewards achieved by the agents on 5 different tasks: two static mazes (small and large) with fixed goals, two static mazes with comparable layout but with dynamic goals, and the I-maze. Results are averaged over the top 5 random hyperparameters for each agent-task configuration. Star in the label indicates the use of reward clipping. Please see text for more details.
1611.03673#16
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
16
ht, xt = RNNθ(ht−1, ot−1, ˜xt−1, ˜yt−1). For the first t ≤ N steps, we set ot−1 = 0, arbitrarily set the inputs to dummy values ˜xt−1 = 0 and ˜yt−1 = 0, and generate N parallel queries x1:N. As soon as a worker finishes evaluating a query, the query and its evaluation are fed back to the network by setting ot−1 = 1, resulting in a new query xt. Figure 2 displays a single iteration of this algorithm. # 3. Experiments We present several experiments that show the breadth of generalization that is achieved by our learned algorithms. We train our algorithms to optimize very simple functions, namely samples from a GP with a fixed length scale, and show that the learned algorithms are able to generalize from these simple objective functions to a wide variety of other test functions that were not seen during training. We experimented with two different RNN architectures: LSTMs and DNCs. However, we found the DNCs to perform slightly (but not significantly) better. For clarity, we only show plots for DNCs in most of the figures.
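A runnable sketch of the query loop implied by the update ht, xt = RNNθ(ht−1, ot−1, ˜xt−1, ˜yt−1): the first N steps use ot−1 = 0 with dummy inputs, and later steps feed back a completed (x, y) pair with ot−1 = 1. The RNN step here is a toy stand-in and, for simplicity, each evaluation is assumed to complete N proposals later; both are assumptions, not the paper's trained model.

```python
import numpy as np

def rnn_step(h, o_prev, x_prev, y_prev, rng):
    """Stand-in for RNN_theta: returns a new hidden state and the next query."""
    inp = np.concatenate([[o_prev], x_prev, [y_prev]])
    h = np.tanh(0.5 * h + 0.1 * inp.sum())          # toy recurrent update
    x_next = rng.uniform(0.0, 1.0, size=x_prev.shape)  # placeholder proposal
    return h, x_next

def propose_queries(f, dim, n_workers, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    h = np.zeros(8)
    queries, results = [], []
    for t in range(n_steps):
        if t < n_workers:
            # First N steps: no observation yet, feed dummy inputs with o = 0.
            h, x = rnn_step(h, 0.0, np.zeros(dim), 0.0, rng)
        else:
            # A worker finished: feed back its (x, y) pair with o = 1.
            x_done, y_done = queries[t - n_workers], results[t - n_workers]
            h, x = rnn_step(h, 1.0, x_done, y_done, rng)
        queries.append(x)
        results.append(f(x))   # in reality evaluated asynchronously by a worker
    return queries, results
```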
1611.03824#16
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
17
information and local depth information.3 The action space is discrete, yet allows fine-grained control, comprising 8 actions: the agent can rotate in small increments, accelerate forward, backward or sideways, or induce rotational acceleration while moving. Reward is achieved in these environments by reaching a goal from a random start location and orientation. If the goal is reached, the agent is respawned to a new start location and must return to the goal. The episode terminates when a fixed amount of time expires, affording the agent enough time to find the goal several times. There are sparse ‘fruit’ rewards which serve to encourage exploration. Apples are worth 1 point, strawberries 2 points and goals are 10 points. Videos of the agent solving the maze are linked in Appendix A.
1611.03673#17
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
18
In the static variant of the maze, the goal and fruit locations are fixed and only the agent’s start location changes. In the dynamic (Random Goal) variant, the goal and fruits are randomly placed on every episode. Within an episode, the goal and apple locations stay fixed until the episode ends. This encourages an explore-exploit strategy, where the agent should initially explore the maze, then retain the goal location and quickly refind it after each respawn. For both variants (static and random goal) we consider a small and large map. The small mazes are 5 × 10 and episodes last for 3600 timesteps, and the large mazes are 9 × 15 with 10800 steps (see Figure 1). The RGB observation is 84 × 84. The I-Maze environment (see Figure 1, right) is inspired by the classic T-maze used to investigate navigation in rodents (Olton et al., 1979): the layout remains fixed throughout, the agent spawns in the central corridor where there are apple rewards and has to locate the goal which is placed in the alcove of one of the four arms. Because the goal is hidden in the alcove, the optimal agent behaviour must rely on memory of the goal location in order to return to the goal using the most direct route. Goal location is constant within an episode but varies randomly across episodes.
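A compact summary of the environment parameters given above (maze sizes, episode lengths, observation size and reward values), expressed as an illustrative configuration; the field names are hypothetical and are not the DeepMind Lab API.

```python
# Illustrative configuration implied by the text; names are hypothetical.
MAZE_CONFIGS = {
    "static_small":      {"size": (5, 10), "episode_steps": 3600,  "random_goal": False},
    "static_large":      {"size": (9, 15), "episode_steps": 10800, "random_goal": False},
    "random_goal_small": {"size": (5, 10), "episode_steps": 3600,  "random_goal": True},
    "random_goal_large": {"size": (9, 15), "episode_steps": 10800, "random_goal": True},
}
OBSERVATION_SHAPE = (84, 84, 3)                    # RGB observation
REWARDS = {"apple": 1, "strawberry": 2, "goal": 10}  # sparse fruit and goal rewards
```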
1611.03673#18
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
18
[Figure 3 plot residue: panels titled "GP samples, dim=1", "GP samples, dim=3", "GP samples, dim=6" and "Compare DNC vs LSTM, dim=6"; curves for Spearmint, Spearmint with fixed hyper-parameters, TPE, SMAC, DNC sum, DNC OI, DNC EI and the corresponding LSTM variants; y-axis: min function value; x-axis: search steps (0–100).]
1611.03824#18
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
19
The different agent architectures described in Section 2 are evaluated by training on the five mazes. Figure 3 shows learning curves (averaged over the 5 top performing agents). The agents are a feedforward model (FF A3C), a recurrent model (LSTM A3C), the stacked LSTM version with velocity, previous action and reward as input (Nav A3C), and Nav A3C with depth prediction from the convolution layer (Nav A3C+D1), Nav A3C with depth prediction from the last LSTM layer (Nav A3C+D2), Nav A3C with loop closure prediction (Nav A3C+L), as well as the Nav A3C with [Footnote 3: The environments used in this paper are publicly available at https://github.com/deepmind/lab.]
1611.03673#19
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
19
Figure 3. Average minimum observed function value, with 95% confidence intervals, as a function of search steps on functions sampled from the training GP distribution. Left four figures: Comparing DNC with different reward functions against Spearmint with fixed and estimated GP hyper-parameters, TPE and SMAC. Right bottom: Comparing different DNCs and LSTMs. As the dimension of the search space increases, the DNC's performance improves relative to the baselines. [Figure plot residue: heatmap panels titled "DNC sum", "DNC OI", "DNC EI" and "Spearmint Fixed Hypers" with axes from 0.0 to 1.0 and step counts up to 50; remaining tick text not recoverable.]
1611.03824#19
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
20
Figure 4: left: Example of depth predictions (pairs of ground truth and predicted depths), sampled every 40 steps. right: Example of loop closure prediction. The agent starts at the gray square and the trajectory is plotted in gray. Blue dots correspond to true positive outputs of the loop closure detector; red crosses correspond to false positives and green crosses to false negatives. Note the false positives that occur when the agent is a few squares away from an actual loop closure. all auxiliary losses considered together (Nav A3C+D1D2L). In each case we ran 64 experiments with randomly sampled hyper-parameters (for ranges and details please see the appendix). The mean over the top 5 runs as well as the top 5 curves are plotted. Expert human scores, established by a professional game player, are compared to these results. The Nav A3C+D2 agents reach human-level performance on Static 1 and 2, and attain about 91% and 59% of human scores on Random Goal 1 and 2.
1611.03673#20
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
20
Figure 4. How different methods trade off exploration and exploitation in a one-dimensional example. Blue: Unknown function being optimized. Green crosses: Function values at query points. Red trajectory: Query points over 50 steps. parameters for the RNN optimization algorithm (such as learning rate, number of hidden units, and memory size for the DNC models) are found through grid search during training. When ready to be used as an optimizer, the RNN requires neither tuning of hyper-parameters nor hand-engineering. It is fully automatic. For test functions with integer inputs, we treat them as piece-wise constant functions and round the network output to the closest values. We evaluate the performance at a given search step t ≤ T = 100, according to the minimum observed function value up to step t, min_{i≤t} f(x_i). In the following experiments, DNC sum refers to the DNC network trained using the summed loss Lsum, DNC OI to the network trained using the loss LOI, and DNC EI to the network trained with the loss LEI.
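A small helper for the evaluation metric described above, the running minimum min_{i≤t} f(x_i) of observed values up to each search step, averaged over runs as in the reported curves. This is an illustrative utility, not the paper's evaluation code.

```python
import numpy as np

def best_observed_curve(observed_values):
    """Return min_{i<=t} f(x_i) for every step t of one optimization run."""
    return np.minimum.accumulate(np.asarray(observed_values, dtype=float))

# Example: average the running-minimum curve over several runs.
runs = [[0.8, 0.3, 0.5, 0.1], [0.9, 0.7, 0.2, 0.2]]
mean_curve = np.mean([best_observed_curve(r) for r in runs], axis=0)
```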
1611.03824#20
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
21
In Mnih et al. (2015) reward clipping is used to stabilize learning, a technique which we employed in this work as well. Unfortunately, for these particular tasks, this yields slightly suboptimal policies because the agent does not distinguish apples (1 point) from goals (10 points). Removing the reward clipping results in unstable behaviour for the base A3C agent (see Appendix C). However, it seems that the auxiliary signal from depth prediction mitigates this problem to some extent, resulting in stable learning dynamics (e.g. Figure 3f, Nav A3C+D1 vs Nav A3C*+D1). We clearly indicate whether reward clipping is used by adding an asterisk to the agent name. Figure 3f also explores the difference between the two formulations of depth prediction, as a regression task or a classification task. We can see that the regression agent (Nav A3C*+D1[MSE]) performs worse than one that does classification (Nav A3C*+D1). This result extends to other maps, and we therefore only use the classification formulation in all our other results.4 Also, we see that predicting depth from the last LSTM layer (hence providing structure to the recurrent layer, not just the convolutional ones) performs better.
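A sketch contrasting the two depth-prediction formulations mentioned above: regression of depth values (MSE) versus classification over quantized depth bins (cross-entropy). The uniform binning, bin count and all names are assumptions made for illustration; the paper's exact quantization and network heads may differ.

```python
import numpy as np

def depth_regression_loss(pred_depth, true_depth):
    """Regression formulation: predict depth values directly (MSE)."""
    return float(np.mean((pred_depth - true_depth) ** 2))

def depth_classification_loss(pred_logits, true_depth, n_bins=8, max_depth=1.0):
    """Classification formulation: quantize depth into bins, use cross-entropy.

    pred_logits: (n_pixels, n_bins); true_depth: (n_pixels,). Uniform bins
    are assumed here purely for illustration.
    """
    bins = np.clip((true_depth / max_depth * n_bins).astype(int), 0, n_bins - 1)
    m = pred_logits.max(axis=1, keepdims=True)
    log_probs = pred_logits - (m + np.log(np.exp(pred_logits - m).sum(axis=1, keepdims=True)))
    return float(-np.mean(log_probs[np.arange(len(bins)), bins]))
```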
1611.03673#21
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
21
We compare our learning to learn approach with popular state-of-the-art Bayesian optimization packages, including Spearmint with automatic inference of the GP hyper-parameters and input warping to deal with non-stationarity (Snoek et al., 2014), Hyperopt (TPE) (Bergstra et al., 2011), and SMAC (Hutter et al., 2011b). For test functions # 3.1. Performance on Functions Sampled from the Training Distribution We first evaluate performance on functions sampled from the training distribution. Notice, however, that these functions are never observed during training. Figure 3 shows the best observed function values as a function of search step t, averaged over 10,000 sampled functions for RNN models and 100 sampled functions for other models (we can afford to do more for RNNs because they are very fast optimizers). For Spearmint, we consider both the default
1611.03824#21
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
22
We note some particular results from these learning curves. In Figure 3 (a and b), consider the feedforward A3C model (red curve) versus the LSTM version (pink curve). Even though navigation seems to intrinsically require memory, as single observations could often be ambiguous, the feedforward model achieves competitive performance on static mazes. This suggests that there might be good strategies that do not involve temporal memory and still give good results, namely a reactive policy held by the weights of the encoder, or learning a wall-following strategy. This motivates the dynamic environments that encourage the use of memory and more general navigation strategies.
1611.03673#22
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
22
[Figure 5 plot residue: panels titled "Branin dim=2", "GoldsteinPrice dim=2", "Hartmann3 dim=3" and "Hartmann6 dim=6" (shown against both search steps and run-time); curves for Spearmint, TPE, SMAC, DNC sum, DNC OI and DNC EI; y-axis: min function value.]
1611.03824#22
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
23
Figure 3 also shows the advantage of adding velocity, reward and action as an input, as well as the impact of using a two-layer LSTM (orange curve vs red and pink). Though this agent (Nav A3C) is better than the simple architectures, it is still relatively slow to train on all of the mazes. We believe that this is mainly due to the slower, data-inefficient learning that is generally seen in pure RL approaches. Supporting this, we see that adding the auxiliary prediction targets of depth and loop closure (Nav A3C+D1D2L, black curve) speeds up learning dramatically on most of the mazes (see Table 1: AUC metric). It has the strongest effect on the static mazes because of the accelerated learning, but also gives a substantial and lasting performance increase on the random goal mazes. Although we place more value on the task performance than on the auxiliary losses, we report the results from the loop closure prediction task. Over 100 test episodes of 2250 steps each, within a large maze (random goal 2), the Nav A3C*+D1L agent demonstrated very successful loop detection, reaching an F-1 score of 0.83. A sample trajectory can be seen in Figure 4 (right).
1611.03673#23
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
24
Figure 5. Left: Average minimum observed function value, with 95% confidence intervals, as a function of search steps on 4 benchmark functions: Branin, Goldstein price, 3d-Hartmann and 6d-Hartmann. Again we see that as the dimension of the search space increases, the learned DNC optimizers are more effective than the Spearmint, TPE and SMAC packages within the training horizon. Right: Average minimum observed function value in terms of the optimizer's run-time (seconds), illustrating the superiority in speed of the DNC optimizers over existing black-box optimization methods. setting with a prior distribution that estimates the GP hyper-parameters by Monte Carlo and a setting with the same hyper-parameters as those used in training. For the second setting, Spearmint knows the ground truth and thus provides a very competitive baseline. As expected, Spearmint with a fixed prior proves to be one of the best models under most settings. When the input dimension is 6 or higher, however, neural network models start to outperform Spearmint. We suspect this is because in higher dimensional spaces, the RNN optimizer learns to be more exploitative given the fixed number of iterations. Among all RNNs, those trained with expected/observed improvement perform better than those trained with direct function observations.
1611.03824#24
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
25
Table 1 (excerpt). Mean over top 5 agents: AUC, Score, % Human. Highest reward agent: Goals, Position Acc, Latency 1:>1.

Maze      Agent           AUC     Score   % Human   Goals     Position Acc   Latency 1:>1
I-Maze    FF A3C*         75.5    98      -         94/100    42.2           9.3s:9.0s
I-Maze    LSTM A3C*       112.4   244     -         100/100   87.8           15.3s:3.2s
I-Maze    Nav A3C*+D1L    169.7   266     -         100/100   68.5           10.7s:2.7s
I-Maze    Nav A3C+D2      203.5   268     -         100/100   62.3           8.8s:2.5s
I-Maze    Nav A3C+D1D2L   199.9   258     -         100/100   61.0           9.9s:2.5s
Static 1  FF A3C*         41.3    79      83        100/100   64.3           8.8s:8.7s
Static 1  LSTM A3C*       44.3    98      103       100/100   88.6           6.1s:5.9s
Static 1  Nav A3C+D2      104.3   119     125       100/100   95.4           5.9s:5.4s
Static 1  Nav A3C+D1D2L   102.3   116     122       100/100   94.5           5.9s:5.4s
Static 2  FF A3C* LSTM A3C* Nav
1611.03673#25
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
25
ranging from 2 to 6. To obtain a more robust evaluation of the performance of each model, we generate multiple instances for each benchmark function by applying a random translation (−0.1–0.1), scaling (0.9–1.1), flipping, and dimension permutation in the input domain. The left-hand side of Figure 5 shows the minimum observed function values achieved by the learned DNC optimizers, and contrasts these against the ones attained by Spearmint, TPE and SMAC. All methods appear to have similar performance, with Spearmint doing slightly better in low dimensions. As the dimension increases, we see that the DNC optimizers converge at a much faster rate within the horizon of T = 100 steps. Figure 4 shows the query trajectories xt, t = 1, . . . , 100, for different black-box optimizers in a one-dimensional example. All of the optimizers explore initially, and later settle in one mode and search more locally. The DNCs trained with EI behave most similarly to Spearmint. DNC with direct function observations (DNC sum) tends to explore less than the other optimizers and often misses the global optimum, while the DNCs trained with the observed improvement (OI) keep exploring even in later stages.
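A sketch of the instance-generation transformations described above (random translation, scaling, flipping and dimension permutation of the input domain). The exact composition and the interpretation of "flipping" as a sign flip are assumptions; the paper's implementation may differ.

```python
import numpy as np

def make_benchmark_instance(f, dim, rng):
    """Wrap benchmark function f with a random input-domain transformation."""
    shift = rng.uniform(-0.1, 0.1, size=dim)   # translation in [-0.1, 0.1]
    scale = rng.uniform(0.9, 1.1, size=dim)    # scaling in [0.9, 1.1]
    flip = rng.choice([-1.0, 1.0], size=dim)   # flipping (assumed sign flip)
    perm = rng.permutation(dim)                # dimension permutation

    def instance(x):
        x = np.asarray(x, dtype=float)[perm]
        return f(flip * scale * (x + shift))
    return instance

# Example: three randomized instances of a toy 2-D quadratic.
rng = np.random.default_rng(0)
base = lambda x: float(np.sum(x ** 2))
instances = [make_benchmark_instance(base, 2, rng) for _ in range(3)]
```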
1611.03824#25
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
26
5.9s:5.4s 5.9s:5.4s

Maze           Agent           AUC     Score   % Human   Goals     Position Acc   Latency 1:>1
Static 2       FF A3C*         35.8    81      47        100/100   55.6           24.2s:22.9s
Static 2       LSTM A3C*       46.0    153     91        100/100   80.4           15.5s:14.9s
Static 2       Nav A3C+D2      157.6   200     116       100/100   94.0           10.9s:11.0s
Static 2       Nav A3C+D1D2L   156.1   192     112       100/100   92.6           11.1s:12.0s
Random Goal 1  FF A3C*         37.5    61      57.5      88/100    51.8           11.0s:9.9s
Random Goal 1  LSTM A3C*       46.6    65      61.3      85/100    51.1           11.1s:9.2s
Random Goal 1  Nav A3C+D2      71.1    96      91        100/100   85.5           14.0s:7.1s
Random Goal 1  Nav A3C+D1D2L   64.2    81      76        81/100    83.7           11.5s:7.2s
Random Goal 2  FF A3C*         50.0
Random Goal 2  LSTM A3C*       37.5
Random Goal 2  Nav A3C*+D1L    62.5
Random Goal 2  Nav A3C+D2      82.1
Random Goal 2  Nav A3C+D1D2L   ...
1611.03673#26
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
26
# 3.2. Transfer to Global Optimization Benchmarks We compare the algorithms on four standard benchmark functions for black-box optimization with dimensions We also observe that DNC OI and DNC EI both outperform DNC with direct observations of the loss (DNC sum). It is encouraging that the curves for DNC OI and DNC EI are so close. While DNC EI is distilling a popular acquisition function from the EI literature, the DNC OI variant is much easier to train as it never requires the GP computations necessary to construct the EI acquisition function. The right-hand side of Figure 5 shows that the neural network optimizers run about 10^4 times faster than Spearmint and 10^2 times faster than TPE and SMAC with the DNC architecture. There is an additional 5 times speedup when using the LSTM architecture, as shown in Table 1. The negligible runtime of our optimizers suggests new areas of application for global optimization methods that require both high sample efficiency and real-time performance. Table 1. Run-time (seconds) for 100 iterations excluding the black-box function evaluation time.
1611.03824#26
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
28
Table 1: Comparison of four agent architectures over five maze configurations, including random and static goals. AUC (Area under learning curve), Score, and % Human are averaged over the best 5 hyperparameters. Evaluation of a single best performing agent is done through analysis on 100 test episodes. Goals gives the number of episodes where the goal was reached one or more times. Position Accuracy is the classification accuracy of the position decoder. Latency 1:>1 is the average time to the first goal acquisition vs. the average time to all subsequent goal acquisitions. Score is the mean score over the 100 test episodes. # 5 ANALYSIS 5.1 POSITION DECODING
1611.03673#28
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
28
# 3.3. Transfer to a Simple Control Problem We also consider an application to a simple reinforcement learning task described by (Hoffman et al., 2009). In this problem we simulate a physical system consisting of a number of repellers which affect the fall of particles through a 2D-space. The goal is to direct the path of the particles through high reward regions of the state space and maximize the accumulated discounted reward. The four-dimensional state-space in this problem consists of a particle's position and velocity. The path of the particles can be controlled by the placement of repellers which push the particles directly away with a force inversely proportional to their distance from the particle. At each time step the particle's position and velocity are updated using simple deterministic physical forward simulation. The control policy for this problem consists of 3 learned parameters for each repeller: 2d location and the strength of the repeller. [Figure 6 residue: top panel, a sample particle trajectory with repellers and contours of immediate reward; bottom panel, min function value over 0-100 evaluations for Spearmint, TPE, SMAC, DNC sum, DNC OI and DNC EI.]
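To make the control setup above concrete, here is a minimal sketch of the repeller simulation and the black-box objective it induces. The time step, discount factor, reward field, initial conditions, and all names (repeller_force, negative_return, etc.) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def repeller_force(pos, repellers):
    """Sum of forces pushing the particle away from each repeller.

    `repellers` is a flat vector of [x, y, strength] per repeller, i.e. the
    3 learned parameters each. Force magnitude falls off inversely with
    distance, as described in the text.
    """
    force = np.zeros(2)
    for x, y, strength in repellers.reshape(-1, 3):
        diff = pos - np.array([x, y])
        dist = np.linalg.norm(diff) + 1e-6
        force += strength * diff / dist**2   # points away from the repeller, magnitude ~ 1/dist
    return force

def reward(pos):
    # Illustrative reward field: a Gaussian bump the particle should pass through.
    return np.exp(-np.sum((pos - np.array([0.5, -1.0]))**2))

def negative_return(repeller_params, T=100, dt=0.05, gamma=0.99):
    """Black-box objective: minus the discounted reward of one deterministic rollout."""
    pos = np.array([0.0, 2.0])      # assumed initial position
    vel = np.array([0.0, 0.0])      # assumed initial velocity
    gravity = np.array([0.0, -1.0])
    total = 0.0
    for t in range(T):
        acc = gravity + repeller_force(pos, repeller_params)
        vel = vel + dt * acc         # simple deterministic forward simulation
        pos = pos + dt * vel
        total += gamma**t * reward(pos)
    return -total                    # any optimizer can minimise this as a black box

# Two repellers -> 6 parameters, matching the setup described in the text.
print(negative_return(np.array([0.2, 0.5, 1.0, -0.3, 1.5, 0.8])))
```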
1611.03824#28
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
29
# 5 ANALYSIS 5.1 POSITION DECODING In order to evaluate the internal representation of location within the agent (either in the hidden units h_t of the last LSTM, or, in the case of the FF A3C agent, in the features f_t on the last layer of the conv-net), we train a position decoder that takes that representation as input, consisting of a linear classifier with multinomial probability distribution over the discretized maze locations. Small mazes (5 × 10) have 50 locations, large mazes (9 × 15) have 135 locations, and the I-maze has 77 locations. Note that we do not backpropagate the gradients from the position decoder through the rest of the network. The position decoder can only see the representation exposed by the model, not change it.
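As a rough illustration of the decoding protocol described above, the sketch below fits a multinomial (softmax) linear classifier on recorded hidden states without backpropagating into the agent. The use of scikit-learn's LogisticRegression, the array shapes, and the random stand-in data are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hidden states h_t recorded from the trained (frozen) agent, with the discretized
# maze cell the agent occupied at each step. Shapes are assumed: a 5 x 10 maze
# gives 50 possible location labels.
hidden_states = np.random.randn(20000, 256)        # stand-in for recorded h_t
locations = np.random.randint(0, 50, size=20000)   # stand-in for ground-truth cells

# Multinomial logistic regression = linear classifier with a softmax output.
# No gradients flow back into the agent; the decoder only reads the representation.
decoder = LogisticRegression(max_iter=200)
decoder.fit(hidden_states[:15000], locations[:15000])

accuracy = decoder.score(hidden_states[15000:], locations[15000:])
print(f"position decoding accuracy: {accuracy:.3f}")

# Per-step position distribution, whose entropy can be tracked over an episode.
probs = decoder.predict_proba(hidden_states[15000:15010])
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
```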
1611.03673#29
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
29
In our experiments we consider a problem with 2 repellers, i.e. 6 parameters. An example trajectory along with the reward structure (contours) and repeller positions (circles) is displayed in Figure 6. We apply the same perturbation as in the previous subsection to study the average performance. The loss (minimal negative reward) of all models is also plotted in Figure 6. Neural network models outperform all the other competitors in this problem. Figure 6. Top: An example trajectory of a falling particle in red, where solid circles show the position and strength of the two repellers and contour lines show the reward function. The aim is to position and choose the strength of the repellers so that the particle spends more time in regions of high reward. Bottom: The results of each method on optimizing the controller by direct policy search. Here, the learned DNC OI optimizer appears to have an edge over the other techniques. # 3.4. Transfer to ML Hyper-parameter Tuning
1611.03824#29
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
30
An example of position decoding by the Nav A3C+D2 agent is shown in Figure 6, where the initial uncertainty in position is improved to near perfect position prediction as more observations are acquired by the agent. We observe that position entropy spikes after a respawn, then decreases once the agent acquires certainty about its location. Additionally, videos of the agent’s position decoding are linked in Appendix A. In these complex mazes, where localization is important for the purpose of reaching the goal, it seems that position accuracy and final score are correlated, as shown in Table 1. A pure feed-forward architecture still achieves 64.3% accuracy in a static maze with static goal, suggesting that the encoder memorizes the position in the weights and that this small maze is solvable by all the agents, with sufficient training time. In Random Goal 1, it is Nav A3C+D2 that achieves the best position decoding performance (85.5% accuracy), whereas the FF A3C and the LSTM A3C architectures are at approximately 50%.
1611.03673#30
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
30
# 3.4. Transfer to ML Hyper-parameter Tuning Lastly, we consider hyper-parameter tuning for machine learning problems. We include the three standard benchmarks in the HPOLib package (Eggensperger et al., 2013): SVM, online LDA, and logistic regression with 3, 3, and 4 hyper-parameters respectively. We also consider the problem of training a 6-hyper-parameter residual network for classification on the CIFAR-100 dataset. For the first three problems, the objective functions have already been pre-computed on a grid of hyper-parameter values, and therefore evaluation with different random seeds (100 for Spearmint, 1000 for TPE and SMAC) is cheap. For the last experiment, however, it takes at least 16 GPU hours to evaluate one hyper-parameter setting. For this reason, we test the parallel proposal idea introduced in Section 2.3, with 5 parallel proposal mechanisms. This approach is about five times more efficient. For the first three tasks, our model is run once because the setup is deterministic. For the residual network task, there is some random variation so we consider three runs per method.
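One simple way to expose such a pre-computed grid as a cheap black-box objective is a nearest-grid-point lookup, sketched below. The file names, array shapes, and the nearest-neighbour mechanics are assumptions for illustration and do not describe the benchmark's actual interface.

```python
import numpy as np
from scipy.spatial import cKDTree

# Pre-computed benchmark: hyper-parameter settings and their measured losses
# (file names and shapes are assumed for illustration).
grid_points = np.load("svm_grid_points.npy")   # shape (n_grid, 3): hyper-parameter values
grid_losses = np.load("svm_grid_losses.npy")   # shape (n_grid,): pre-computed test losses
tree = cKDTree(grid_points)

def objective(x):
    """Cheap black-box objective: loss of the nearest pre-computed grid point."""
    _, idx = tree.query(np.asarray(x))
    return float(grid_losses[idx])
```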
1611.03824#30
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
31
In the I-maze, the opposite branches of the maze are nearly identical, with the exception of very sparse visual cues. We observe that once the goal is first found, the Nav A3C*+D1L agent is capable of directly returning to the correct branch in order to achieve the maximal score. However, the linear position decoder for this agent is only 68.5% accurate, whereas it is 87.8% in the plain LSTM A3C agent. We hypothesize that the symmetry of the I-maze will induce a symmetric policy that need not be sensitive to the exact position of the agent (see analysis below). [Figure 5 residue: panels of the value function plotted against time step in episode, for several episodes.]
1611.03673#31
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
31
The results are shown in Figure 7. The plots report the negative accuracy against the number of function evaluations up to a horizon of T = 100. The neural network models, especially when trained with observed improvement, show competitive performance against the engineered solutions. In the ResNet experiment, we also compare our sequential DNC optimizers with the parallel versions with 5 workers. In this experiment we find that the learned and engineered parallel optimizers perform as well if not slightly [Figure 7 residue: panels for SVM (dim=3), Logistic Regression (dim=4), Online LDA (dim=3), and ResNet on CIFAR-100 (dim=6); y-axis: min function value; methods: Spearmint, Spearmint Parallel, TPE, SMAC, DNC sum, DNC OI, DNC OI Parallel, DNC EI, DNC EI Parallel.]
1611.03824#31
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
32
Figure 5: Trajectories of the Nav A3C*+D1L agent in the I-maze (left) and of the Nav A3C+D2 agent in random goal maze 1 (right) over the course of one episode. At the beginning of the episode (gray curve on the map), the agent explores the environment until it finds the goal at some unknown location (red box). During subsequent respawns (blue path), the agent consistently returns to the goal. The value function, plotted for each episode, rises as the agent approaches the goal. Goals are plotted as vertical red lines.
1611.03673#32
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
32
Figure 7. Average test loss, with 95% confidence intervals, for the SVM, online LDA, and logistic regression hyper-parameter tuning benchmarks. The bottom-right plot shows the performance of all methods on the problem of tuning a residual network, demonstrating that the learned DNC optimizers are close in performance to the engineered optimizers, and that the faster parallel versions work comparably well. better than the sequential ones. These minor differences arise from random variation. The parallel version of the algorithm also performed well when tuning the hyper-parameters of an expensive-to-train residual network. # 4. Conclusions and Future Work The experiments have shown that up to the training horizon the learned RNN optimizers are able to match the performance of heavily engineered Bayesian optimization solutions, including Spearmint, SMAC and TPE. The trained RNNs rely on neither heuristics nor hyper-parameters when being deployed as black-box optimizers. The optimizers trained on synthetic functions were able to transfer successfully to a very wide class of black-box functions, associated with GP bandits, control, global optimization benchmarks, and hyper-parameter tuning.
1611.03824#32
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
33
Figure 6: Trajectory of the Nav A3C+D2 agent in the random goal maze 1, overlaid with the position probability predictions predicted by a decoder trained on LSTM hidden activations, taken at 4 steps during an episode. Initial uncertainty gives way to accurate position prediction as the agent navigates. A desired property of navigation agents in our Random Goal tasks is to be able to first find the goal, and reliably return to the goal via an efficient route after subsequent re-spawns. The latency column in Table 1 shows that the Nav A3C+D2 agents achieve the lowest latency to goal once the goal has been discovered (the first number shows the time in seconds to find the goal the first time, and the second number is the average time for subsequent finds). Figure 5 shows clearly how the agent finds the goal, and directly returns to that goal for the rest of the episode. For Random Goal 2, none of the agents achieve lower latency after initial goal acquisition; this is presumably due to the larger, more challenging environment. 5.2 STACKED LSTM GOAL ANALYSIS
1611.03673#33
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
33
However, the current RNN optimizers also have some shortcomings. Training for very long horizons is difficult. This issue was also documented recently in (Duan et al., 2016). We believe curriculum learning should be investigated as a way of overcoming this difficulty. In addition, a new model has to be trained for every input dimension with the current network architecture. While training optimizers for every dimension is not prohibitive in low dimensions, future work should extend the RNN structure to allow a variable input dimension. A promising solution is to serialize the input vectors along the search steps. The experiments have also shown that the RNNs are massively faster than other Bayesian optimization methods. Hence, for applications involving a known horizon and where speed is crucial, we recommend the use of the RNN optimizers. # References M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, 2016.
1611.03824#33
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
34
5.2 STACKED LSTM GOAL ANALYSIS Figure 7(a) shows the trajectories traversed by an agent for each of the four goal locations. After an initial exploratory phase to find the goal, the agent consistently returns to the goal location. We visualize the agent's policy by applying tSNE dimension reduction (Maaten & Hinton, 2008) to the cell activations at each step of the agent for each of the four goal locations. Whilst clusters corresponding to each of the four goal locations are clearly distinct in the LSTM A3C agent, there are 2 main clusters in the Nav A3C agent – with trajectories to diagonally opposite arms of the maze represented similarly. Given that the action sequence to opposite arms is equivalent (e.g. straight, turn left twice for top left and bottom right goal locations), this suggests that the Nav A3C policy-dictating LSTM maintains an efficient representation of 2 sub-policies (i.e. rather than 4 independent policies) – with critical information about the currently relevant goal provided by the additional LSTM. INVESTIGATING DIFFERENT COMBINATIONS OF AUXILIARY TASKS
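The visualisation described above can be reproduced in spirit with a sketch like the following; the activation shapes, file names, and the choice of scikit-learn's TSNE implementation are assumptions, not the authors' exact pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Cell activations of the policy-dictating LSTM recorded at every step over many
# episodes, together with the goal location (0..3) active in each episode.
activations = np.load("lstm_cell_states.npy")   # assumed shape: (n_steps, n_units)
goal_ids = np.load("goal_ids.npy")              # assumed shape: (n_steps,)

# Reduce to 2 dimensions with tSNE (Maaten & Hinton, 2008) and colour each point
# by the goal location of its episode.
embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(activations)

plt.figure(figsize=(5, 5))
plt.scatter(embedding[:, 0], embedding[:, 1], c=goal_ids, cmap="tab10", s=2)
plt.title("tSNE of LSTM cell activations, coloured by goal location")
plt.savefig("tsne_goals.png", dpi=150)
```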
1611.03673#34
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
34
Y. Bengio, S. Bengio, J. Cloutier, and J. Gecsei. On the optimization of a synaptic learning rule. In Conference on Optimality in Biological and Artificial Networks, 1992. Data Mining and Knowledge Discovery, 18(1):140–181, 2009. S. Li and J. Malik. Learning to optimize. In International Conference on Learning Representations, 2017. J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546–2554, 2011. In Systems Modeling and Optimization, volume 38, pages 473–481. Springer, 1982. E. Brochu, V. M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. Technical Report UBC TR-2009-23 and arXiv:1012.2599v1, Dept. of Computer Science, University of British Columbia, 2009.
1611.03824#34
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
35
INVESTIGATING DIFFERENT COMBINATIONS OF AUXILIARY TASKS Our results suggest that depth prediction from the policy LSTM yields optimal results. However, several other auxiliary tasks have been concurrently introduced in (Jaderberg et al., 2017), and thus we provide a comparison of reward prediction against depth prediction. Following that paper, we implemented two additional agent architectures, one performing reward prediction from the convnet using a replay buffer, called Nav A3C*+R, and one combining reward prediction from the convnet and depth prediction from the LSTM (Nav A3C+RD2). Table 2 suggests that reward prediction (Nav A3C*+R) improves upon the plain stacked LSTM architecture (Nav A3C*) but not as much as depth prediction from the policy LSTM (Nav A3C+D2). Combining reward prediction and depth prediction (Nav A3C+RD2) yields comparable results to depth prediction alone (Nav A3C+D2); normalised average AUC values are respectively 0.995 vs. 0.981. Future work will explore other auxiliary tasks.
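A minimal sketch of how such auxiliary objectives are typically folded into the RL objective is shown below. The loss weights beta_d and beta_r, the treatment of depth as classification over quantised bins, and the function signature are illustrative assumptions rather than the authors' exact formulation.

```python
import torch.nn.functional as F

def total_loss(a3c_loss, depth_logits, depth_labels, reward_logits, reward_labels,
               beta_d=0.33, beta_r=0.1):
    """Combine the A3C objective with two auxiliary prediction objectives.

    Depth prediction is treated here as per-pixel classification over quantised
    depth bins (an assumption), predicted from the policy LSTM; reward prediction
    is a classification of the upcoming reward, predicted from the convnet.
    beta_d and beta_r are assumed weights, not values taken from the paper.
    """
    depth_loss = F.cross_entropy(depth_logits, depth_labels)
    reward_loss = F.cross_entropy(reward_logits, reward_labels)
    return a3c_loss + beta_d * depth_loss + beta_r * reward_loss
```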
1611.03673#35
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
35
D. K. Naik and R. Mammone. Meta-neural networks that learn by learning. In International Joint Conference on Neural Networks, volume 1, pages 437–442, 1992. C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006. S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In International Conference on Algorithmic Learning Theory, 2009. S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, 2017. T. Desautels, A. Krause, and J. Burdick. Parallelizing exploration-exploitation tradeoffs with Gaussian process bandit optimization. Journal of Machine Learning Research, 2014. A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, 2016.
1611.03824#35
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
36
[Figure 7 panels: (a) Agent trajectories for episodes with different goal locations; (b) LSTM activations from the LSTM A3C agent; (c) LSTM activations from the Nav A3C*+D1L agent.] Figure 7: LSTM cell activations of LSTM A3C and Nav A3C*+D1L agents from the I-Maze collected over multiple episodes and reduced to 2 dimensions using tSNE, then coloured to represent the goal location. Policy-dictating LSTM of Nav A3C agent shown. AUC per maze and navigation agent architecture:
Maze | Nav A3C* | Nav A3C+D1 | Nav A3C+D1D2 | Nav A3C+D2 | Nav A3C*+R | Nav A3C+RD2
I-Maze | 143.3 | 196.7 | 203.5 | 197.2 | 128.2 | 191.8
Static 1 | 60.1 | 103.2 | 104.3 | 100.3 | 86.9 | 105.1
Static 2 | 59.9 | 153.1 | 157.6 | 151.6 | 100.6 | 155.5
Random Goal 1 | 45.5 | 57.6 | 71.1 | 63.2 | 54.4 | 72.3
Random Goal 2 | 37.0 | 66.0 | 82.1 | 75.1 | 68.3 | 80.1
1611.03673#36
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
36
Y. Duan, J. Schulman, X. Chen, P. Bartlett, I. Sutskever, and P. Abbeel. RL^2: Fast reinforcement learning via slow reinforcement learning. Technical report, UC Berkeley and OpenAI, 2016. J. Schmidhuber. Evolutionary Principles in Self-Referential Learning. On Learning how to Learn: The Meta-Meta-Meta...-Hook. PhD thesis, Institut f. Informatik, Tech. Univ. Munich, 1987. K. Eggensperger, M. Feurer, F. Hutter, J. Bergstra, J. Snoek, H. Hoos, and K. Leyton-Brown. Towards an empirical foundation for assessing Bayesian optimization of hyperparameters. In NIPS workshop on Bayesian Optimization in Theory and Practice, 2013. V. Gabillon, M. Ghavamzadeh, and A. Lazaric. Best arm identification: A unified approach to fixed budget and fixed confidence. In Advances in Neural Information Processing Systems, pages 3212–3220, 2012.
1611.03824#36
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
37
Table 2: Comparison of navigation agent architectures over five maze configurations with random and static goals, including agents performing reward prediction Nav A3C*+R and Nav A3C+RD2, where reward prediction is implemented following (Jaderberg et al., 2017). We report the AUC (Area under learning curve), averaged over the best 5 hyperparameters. # 6 CONCLUSION We proposed a deep RL method, augmented with memory and auxiliary learning targets, for training agents to navigate within large and visually rich environments that include frequently changing start and goal locations. Our results and analysis highlight the utility of un/self-supervised auxiliary objectives, namely depth prediction and loop closure, in providing richer training signals that bootstrap learning and enhance data efficiency. Further, we examine the behavior of trained agents, their ability to localise, and their network activity dynamics, in order to analyse their navigational abilities.
1611.03673#37
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
37
A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwińska, S. G. Colmenarejo, E. Grefenstette, T. Ramalho, J. Agapiou, A. A. P. Badia, K. M. Hermann, Y. Zwols, G. Ostrovski, A. Cain, H. King, C. Summerfield, P. Blunsom, K. Kavukcuoglu, and D. Hassabis. Hybrid computing using a neural network with dynamic external memory. Nature, 2016. H. F. Harlow. The formation of learning sets. Psychological Review, 56(1):51, 1949. S. L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639–658, 2010.
1611.03824#37
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
38
Our approach of augmenting deep RL with auxiliary objectives allows end-to-end learning and may encourage the development of more general navigation strategies. Notably, our work with auxiliary losses is related to (Jaderberg et al., 2017) which independently looks at data efficiency when exploiting auxiliary losses. One difference between the two works is that our auxiliary losses are online (for the current frame) and do not rely on any form of replay. Also the explored losses are very different in nature. Finally our focus is on the navigation domain and understanding if navigation emerges as a by-product of solving an RL problem, while Jaderberg et al. (2017) is concerned with data efficiency for any RL-task.
1611.03673#38
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
38
B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016. J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012. J. Snoek, K. Swersky, R. S. Zemel, and R. P. Adams. Input warping for Bayesian optimization of non-stationary functions. In International Conference on Machine Learning, 2014. E. S. Spelke and K. D. Kinzler. Core knowledge. Developmental Science, 10(1):89–96, 2007. N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In International Conference on Machine Learning, pages 1015–1022, 2010. S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
1611.03824#38
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
39
Whilst our best performing agents are relatively successful at navigation, their abilities would be stretched if larger demands were placed on rapid memory (e.g. in procedurally generated mazes), due to the limited capacity of the stacked LSTM in this regard. It will be important in the future to combine visually complex environments with architectures that make use of external memory (Graves et al., 2016; Weston et al., 2014; Olton et al., 1979) to enhance the navigational abilities of agents. Further, whilst this work has focused on investigating the benefits of auxiliary tasks for developing the ability to navigate through end-to-end deep reinforcement learning, it would be interesting for future work to compare these techniques with SLAM-based approaches. ACKNOWLEDGEMENTS We would like to thank Alexander Pritzel, Thomas Degris and Joseph Modayil for useful discussions, Charles Beattie, Julian Schrittwieser, Marcus Wainwright, and Stig Petersen for environment design and development, and Amir Sadik and Sarah York for expert human game testing. # REFERENCES Trevor Barron, Matthew Whitehead, and Alan Yeung. Deep reinforcement learning in a 3-d block-world environment. In Deep Reinforcement Learning: Frontiers and Challenges, IJCAI, 2016.
1611.03673#39
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
39
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. S. Thrun and L. Pratt. Learning to learn. Springer Science & Business Media, 1998. S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001. M. W. Hoffman, H. Kueck, N. de Freitas, and A. Doucet. New inference strategies for solving Markov decision processes using reversible jump MCMC. In Uncertainty in Artificial Intelligence, pages 223–231, 2009. F. Hutter, H. H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In LION, pages 507–523, 2011a. F. Hutter, H. H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In International Conference on Learning and Intelligent Optimization, pages 507–523. Springer, 2011b.
1611.03824#39
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
40
Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Victor Valdes, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. DeepMind Lab. In arXiv, 2016. URL https://arxiv.org/abs/1612.03801. MWM Gamini Dissanayake, Paul Newman, Steve Clark, Hugh F. Durrant-Whyte, and Michael Csorba. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17(3):229–241, 2001. David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In Proc. of Neural Information Processing Systems, NIPS, 2014. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, ICASSP, 2013.
1611.03673#40
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03824
40
J. X. Wang, Z. Kurth-Nelson, D. Tirumala, H. Soyer, J. Z. Leibo, R. Munos, C. Blundell, D. Kumaran, and M. Botvinick. Learning to reinforcement learn. arXiv Report 1611.05763, 2016. Z. Wang, B. Shakibi, L. Jin, and N. de Freitas. Bayesian multi-scale optimistic optimization. In AI and Statistics, pages 1005–1014, 2014. L. B. Ward. Reminiscence and rote learning. Psychological Monographs, 49(4), 1937. R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992. B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017. E. J. Kehoe. A layered network model of associative learning: learning to learn and configuration. Psychological Review, 95(4):411, 1988. R. Kohavi, R. Longbotham, D. Sommerfield, and R. M. Henne. Controlled experiments on the web: survey and practical guide.
1611.03824#40
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
http://arxiv.org/pdf/1611.03824
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
stat.ML, cs.LG
Accepted by ICML 2017. Previous version "Learning to Learn for Global Optimization of Black Box Functions" was published in the Deep Reinforcement Learning Workshop, NIPS 2016
null
stat.ML
20161111
20170612
[]
1611.03673
41
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016. Matthew J. Hausknecht and Peter Stone. Deep recurrent Q-learning for partially observable MDPs. Proc. of Conf. on Artificial Intelligence, AAAI, 2015. Max Jaderberg, Volodymir Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In Submitted to Int'l Conference on Learning Representations, ICLR, 2017. Jan Koutnik, Giuseppe Cuccu, Jürgen Schmidhuber, and Faustino Gomez. Evolving large-scale neural networks for vision-based reinforcement learning. In Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, GECCO, 2013.
1611.03673#41
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
42
Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep successor reinforcement learning. CoRR, abs/1606.02396, 2016. URL http://arxiv.org/abs/1606.02396. Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. CoRR, 2016. URL http://arxiv.org/abs/1609.05521. Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, and Ji He. Recurrent reinforcement learning: A hybrid approach. In Proceedings of the International Conference on Learning Representations, ICLR, 2016. URL https://arxiv.org/abs/1509.03044. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008. Piotr Mirowski, Marc’Aurelio Ranzato, and Yann LeCun. Dynamic auto-encoders for semantic indexing. In NIPS Deep Learning and Unsupervised Learning Workshop, 2010.
1611.03673#42
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
43
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, et al. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proc. of Int’l Conf. on Machine Learning, ICML, 2016. Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, et al. Massively parallel methods for deep reinforcement learning. In Proceedings of the International Conference on Machine Learning Deep Learning Workshop, ICML, 2015. Karthik Narasimhan, Tejas D. Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. In Proc. of Empirical Methods in Natural Language Processing, EMNLP, 2015.
1611.03673#43
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
44
Junhyuk Oh, Valliappa Chockalingam, Satinder P. Singh, and Honglak Lee. Control of memory, active perception, and action in minecraft. In Proc. of International Conference on Machine Learning, ICML, 2016. David S Olton, James T Becker, and Gail E Handelmann. Hippocampus, space, and memory. Behavioral and Brain Sciences, 2(03):313–322, 1979. Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013. Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, NIPS, 2015. Steven C Suddarth and YL Kergosien. Rule-injection hints as a means of improving network performance and learning time. In Neural Networks, pp. 120–129. Springer, 1990. Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1):181–211, 1999.
1611.03673#44
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
45
Lei Tai and Ming Liu. Towards cognitive exploration through deep reinforcement learning for mobile robots. In arXiv, 2016. URL https://arxiv.org/abs/1610.01733. Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. CoRR, abs/1604.07255, 2016. URL http://arxiv.org/abs/1604.07255. Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 – rmsprop: Divide the gradient by a running average of its recent magnitude. In Coursera: Neural Networks for Machine Learning, volume 4, 2012. A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. 2016. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014. Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In Proc. of International Conference on Machine Learning, ICML, 2016.
1611.03673#45
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
46
Junbo Zhao, Michaël Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. Int’l Conf. on Learning Representations (Workshop), ICLR, 2015. URL http://arxiv.org/abs/1506.02351. Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. CoRR, abs/1609.05143, 2016. URL http://arxiv.org/abs/1609.05143.
# Supplementary Material
A VIDEOS OF TRAINED NAVIGATION AGENTS
1611.03673#46
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
47
# Supplementary Material
A VIDEOS OF TRAINED NAVIGATION AGENTS
We show the behaviour of the Nav A3C*+D1L agent in 5 videos, corresponding to the 5 navigation environments: the I-maze, the (small) static maze, the (large) static maze, the (small) random goal maze and the (large) random goal maze. Each video shows a high-resolution view (the actual inputs to the agent are down-sampled to 84×84 RGB images), the value function over time (with fruit reward and goal acquisitions), the layout of the mazes with consecutive trajectories of the agent marked in different colours, and the output of the trained position decoder overlaid on top of the maze layout.
B NETWORK ARCHITECTURE AND TRAINING
B.1 THE ONLINE MULTI-LEARNER ALGORITHM FOR MULTI-TASK LEARNING
1611.03673#47
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
48
B NETWORK ARCHITECTURE AND TRAINING B.1 THE ONLINE MULTI-LEARNER ALGORITHM FOR MULTI-TASK LEARNING We introduce a class of neural network-based agents that have modular structures and that are trained on multiple tasks, with inputs coming from different modalities (vision, depth, past rewards and past actions). Implementing our agent architecture is simplified by its modular nature. Essentially, we construct multiple networks, one per task, using shared building blocks, and optimise these networks jointly. Some modules, such as the conv-net used for perceiving visual inputs, or the LSTMs used for learning the navigation policy, are shared among multiple tasks, while other modules, such as depth predictor gd or loop closure predictor gl, are task-specific. The navigation network that outputs the policy and the value function is trained using reinforcement learning, while the depth prediction and loop closure prediction networks are trained using self-supervised learning.
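To make the modular construction concrete, here is a minimal PyTorch-style sketch (an illustration only: the layer sizes, the linear stand-ins for the conv-net and LSTM, and the use of RMSProp are assumptions drawn from the surrounding text, not the authors' code). Shared building blocks are instantiated once and reused, task-specific heads are added on top, and all parameters live in one jointly optimised collection.

```python
import torch
import torch.nn as nn

# Shared building blocks, reused by every task.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 256), nn.ReLU())  # stand-in for the conv-net
policy_core = nn.LSTMCell(256, 256)                                            # stand-in for the policy LSTM(s)

# Task-specific modules.
policy_head = nn.Linear(256, 8)       # navigation policy logits, trained with reinforcement learning
value_head = nn.Linear(256, 1)        # value function, trained with reinforcement learning
depth_head = nn.Linear(256, 64 * 8)   # depth predictor g_d, trained with self-supervised learning
loop_head = nn.Linear(256, 2)         # loop-closure predictor g_l, trained with self-supervised learning

# One shared parameter collection: the losses of all tasks contribute gradients to it.
modules = [encoder, policy_core, policy_head, value_head, depth_head, loop_head]
params = [p for m in modules for p in m.parameters()]
optimizer = torch.optim.RMSprop(params, lr=3e-4)
```

Optimising the sum of the RL loss and the auxiliary losses then updates the shared encoder and policy core through every head.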
1611.03673#48
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
49
Within each thread of the asynchronous training environment, the agent plays on its own episode of the game environment, and therefore sees observation and reward pairs {(s_t, r_t)} and takes actions that are different from those experienced by agents from the other, parallel threads. Within a thread, the multiple tasks (navigation, depth and loop closure prediction) can be trained at their own schedule, and they add gradients to the shared parameter vector as they arrive. Within each thread, we use a flag-based system to subordinate gradient updates to the A3C reinforcement learning procedure.
B.2 NETWORK AND TRAINING DETAILS
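As a rough, toy illustration of the per-thread scheduling described above (gradient computations are replaced by random stand-ins, and the flag logic is a deliberate simplification, not the authors' bookkeeping):

```python
import threading
import numpy as np

shared_params = np.zeros(1_000, dtype=np.float32)  # common parameter vector shared by all threads
LR = 1e-4

def apply_gradient(grad):
    # Gradients from any thread and any task are added to the shared parameters as they arrive.
    shared_params[:] -= LR * grad

def worker(seed, steps=100, aux_every=2):
    rng = np.random.default_rng(seed)
    rl_update_applied = False                                       # flag subordinating auxiliary updates to A3C
    for step in range(steps):
        apply_gradient(rng.normal(size=shared_params.shape))        # stand-in for the A3C gradient
        rl_update_applied = True
        if rl_update_applied and step % aux_every == 0:             # auxiliary tasks run on their own schedule
            apply_gradient(rng.normal(size=shared_params.shape))    # stand-in for depth / loop-closure gradients
            rl_update_applied = False

threads = [threading.Thread(target=worker, args=(i,)) for i in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```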
1611.03673#49
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
50
For all the experiments we use an encoder model with 2 convolutional layers followed by a fully connected layer, or recurrent layer(s), from which we predict the policy and value function. The architecture is similar to the one in (Mnih et al., 2016). The convolutional layers are as follows. The first convolutional layer has a kernel of size 8x8 and a stride of 4x4, and 16 feature maps. The second layer has a kernel of size 4x4 and a stride of 2x2, and 32 feature maps. The fully connected layer in the FF A3C architecture (Figure 2a) has 256 hidden units (and outputs visual features f_t). The LSTM in the LSTM A3C architecture has 256 hidden units (and outputs LSTM hidden activations h_t). The LSTMs in Figure 2b–d are fed extra inputs (past reward r_{t-1}, previous action a_{t-1} expressed as a one-hot vector of dimension 8, and agent-relative lateral and rotational velocity v_t encoded by a 6-dimensional vector), which are all concatenated to the vector f_t. The Nav A3C architectures (Figure 2c,d) have a first LSTM with 64 or 128 hiddens and
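A minimal PyTorch sketch of this convolutional encoder (kernel sizes, strides, feature-map counts and the 256-unit output are from the text; the ReLU activations and the absence of padding are assumptions):

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Two conv layers (8x8/4 with 16 maps, 4x4/2 with 32 maps) followed by a 256-unit FC layer."""
    def __init__(self, out_dim: int = 256):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=8, stride=4)   # 84x84x3 -> 20x20x16
        self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2)  # 20x20x16 -> 9x9x32
        self.fc = nn.Linear(9 * 9 * 32, out_dim)

    def forward(self, x):
        h = torch.relu(self.conv1(x))
        h = torch.relu(self.conv2(h))
        return torch.relu(self.fc(h.flatten(start_dim=1)))       # visual features f_t

# Usage sketch: encode a batch of 84x84 RGB observations.
f_t = ConvEncoder()(torch.zeros(4, 3, 84, 84))  # shape (4, 256)
```

In the recurrent variants, f_t would then be concatenated with the previous reward, the one-hot previous action and the velocity vector before being fed to the LSTM(s); that part is omitted here.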
1611.03673#50
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
51
all concatenated to the vector f_t. The Nav A3C architectures (Figure 2c,d) have a first LSTM with 64 or 128 hiddens and a second LSTM with 256 hiddens. The depth predictor modules g_d and g'_d, and the loop closure detection module g_l, are all single-layer MLPs with 128 hidden units. The depth MLPs are followed by 64 independent 8-dimensional softmax outputs (one per depth pixel). The loop closure MLP is followed by a 2-dimensional softmax output. We illustrate in Figure 8 the architecture of the Nav A3C+D+L+Dr agent.
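A hedged sketch of these auxiliary heads (single-layer MLPs with 128 hidden units; 64 independent 8-way softmaxes for depth and a 2-way softmax for loop closure, as described; the input dimensions and activations are assumptions):

```python
import torch
import torch.nn as nn

class DepthHead(nn.Module):
    """Single-layer MLP followed by 64 independent 8-dimensional softmax outputs (one per depth pixel)."""
    def __init__(self, in_dim: int, n_pixels: int = 64, n_bins: int = 8):
        super().__init__()
        self.n_pixels, self.n_bins = n_pixels, n_bins
        self.hidden = nn.Linear(in_dim, 128)
        self.logits = nn.Linear(128, n_pixels * n_bins)

    def forward(self, feats):
        h = torch.relu(self.hidden(feats))
        logits = self.logits(h).view(-1, self.n_pixels, self.n_bins)
        return torch.log_softmax(logits, dim=-1)   # per-pixel class log-probabilities

class LoopClosureHead(nn.Module):
    """Single-layer MLP followed by a 2-dimensional softmax output."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, hidden_state):
        return torch.log_softmax(self.net(hidden_state), dim=-1)

# g_d takes the conv features f_t; g'_d and g_l take the policy-LSTM hidden state h_t.
g_d, g_d_prime, g_l = DepthHead(256), DepthHead(256), LoopClosureHead(256)
```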
1611.03673#51
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
52
Depth is taken as the Z-buffer from the Labyrinth environment (with values between 0 and 255), divided by 255 and raised to the power 10 to spread the values in the interval [0, 1]. We empirically decided to use the following quantization: {0, 0.05, 0.175, 0.3, 0.425, 0.55, 0.675, 0.8, 1} to ensure a uniform binning across the 8 classes.
5 Video of the Nav A3C*+D1L agent on the I-maze: https://youtu.be/PS4iJ7Hk_BU
6 Video of the Nav A3C*+D1L agent on static maze 1: https://youtu.be/-HsjQoIou_c
7 Video of the Nav A3C*+D1L agent on static maze 2: https://youtu.be/kH1AvRAYkbI
8 Video of the Nav A3C*+D1L agent on random goal maze 1: https://youtu.be/5IBT2UADJY0
9 Video of the Nav A3C*+D1L agent on random goal maze 2: https://youtu.be/e10mXgBG9yo
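A small numpy sketch of this preprocessing and quantisation (the use of np.digitize over the listed bin edges is an implementation assumption):

```python
import numpy as np

# Bin edges quoted above; values lie in [0, 1] after the power transform.
DEPTH_EDGES = np.array([0.0, 0.05, 0.175, 0.3, 0.425, 0.55, 0.675, 0.8, 1.0])

def depth_targets(z_buffer: np.ndarray) -> np.ndarray:
    """Map a raw Z-buffer (values 0..255) to integer class labels in {0, ..., 7}."""
    d = (z_buffer.astype(np.float32) / 255.0) ** 10   # divide by 255, raise to the power 10
    return np.digitize(d, DEPTH_EDGES[1:-1])          # 8 classes between the 9 edges

labels = depth_targets(np.random.randint(0, 256, size=(4, 64)))  # e.g. 4 frames of 64 coarse depth pixels
```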
1611.03673#52
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
53
Figure 8: Details of the architecture of the Nav A3C+D+L+Dr agent, taking in RGB visual inputs x_t, past reward r_{t-1}, previous action a_{t-1} as well as agent-relative velocity v_t, and producing policy π, value function V, depth predictions g_d(f_t) and g'_d(h_t), as well as loop closure detection g_l(h_t).
binning across 8 classes. The previous version of the agent had a single depth prediction MLP g_d for regressing 8 × 16 = 128 depth pixels from the convnet outputs f_t. The parameters of each of the modules point to a subset of a common vector of parameters. We optimise these parameters using an asynchronous version of RMSProp (Tieleman & Hinton, 2012). (Nair et al., 2015) was a recent example of asynchronous and parallel gradient updates in deep reinforcement learning; in our case, we focus on the specific Asynchronous Advantage Actor-Critic (A3C) reinforcement learning procedure of (Mnih et al., 2016).
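The plain RMSProp update used here (no momentum and no centred variance, per the training details) can be written in a few lines; the decay and epsilon values below are illustrative assumptions:

```python
import numpy as np

def rmsprop_step(params, grad, msq, lr=1e-4, decay=0.99, eps=1e-5):
    """One RMSProp step: divide the gradient by a running average of its recent magnitude.
    `msq` is the running average of squared gradients (shared across workers in the asynchronous setting)."""
    msq[:] = decay * msq + (1.0 - decay) * grad ** 2
    params[:] -= lr * grad / (np.sqrt(msq) + eps)

params, msq = np.zeros(10), np.zeros(10)
rmsprop_step(params, np.random.randn(10), msq)
```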
1611.03673#53
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
54
Learning follows closely the paradigm described in (Mnih et al., 2016). We use 16 workers and the same RMSProp algorithm without momentum or centering of the variance. Gradients are computed over non-overlapping chunks of the episode. The score for each point of a training curve is the average over all the episodes the model gets to finish in 5e4 environment steps. The experiments are run for a maximum of 1e8 environment steps. The agent has an action repeat of 4 as in (Mnih et al., 2016), which means that for 4 consecutive steps the agent will use the same action picked at the beginning of the series. For this reason, throughout the paper we report results in terms of agent-perceived steps rather than environment steps. That is, the maximal number of agent-perceived steps for any particular run is 2.5e7. In our grid we sample hyper-parameters from categorical distributions:
• Learning rate was sampled from [10^-4, 5 · 10^-4].
• Strength of the entropy regularization was sampled from [10^-4, 10^-3].
• Rewards were not scaled and not clipped in the new set of experiments. In our previous set of experiments, rewards were scaled by a factor from {0.3, 0.5} and clipped to 1 prior to back-propagation in the Advantage Actor-Critic algorithm.
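A sketch of the hyper-parameter sampling just described (the intervals and discrete choices come from the text; drawing the continuous values log-uniformly is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hparams():
    return {
        "learning_rate": float(np.exp(rng.uniform(np.log(1e-4), np.log(5e-4)))),  # [1e-4, 5e-4]
        "entropy_cost": float(np.exp(rng.uniform(np.log(1e-4), np.log(1e-3)))),   # [1e-4, 1e-3]
        "unroll_length": int(rng.choice([50, 75])),   # gradient chunk length, in steps
        "action_repeat": 4,                           # fixed, as in (Mnih et al., 2016)
    }

replicas = [sample_hparams() for _ in range(64)]      # e.g. 64 replicas per experiment
```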
1611.03673#54
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
55
• Gradients are computed over non-overlapping chunks of 50 or 75 steps of the episode. In our previous set of experiments, we used chunks of 100 steps.
The auxiliary tasks, when used, have hyperparameters sampled from:
• Coefficient β_d of the depth prediction loss from convnet features, L_d, sampled from {3.33, 10, 33}.
• Coefficient β'_d of the depth prediction loss from LSTM hiddens, L'_d, sampled from {1, 3.33, 10}.
• Coefficient β_l of the loop closure prediction loss, L_l, sampled from {1, 3.33, 10}.
Loop closure uses the following thresholds: maximum distance for position similarity η1 = 1 square and minimum distance for removing trivial loop-closures η2 = 2 squares.
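One plausible reading of the loop-closure thresholds, as a sketch (the exact labelling rule is an assumption consistent with η1 and η2: a positive label when the agent comes back within η1 of an earlier position after having travelled farther than η2 from it):

```python
import numpy as np

def loop_closure_label(positions, eta1=1.0, eta2=2.0):
    """Binary loop-closure target for the current position given the position history.
    positions: array of shape (T, 2), agent (x, y) in maze-square units; the last row is current."""
    current, history = positions[-1], positions[:-1]
    for t, past in enumerate(history):
        close_again = np.linalg.norm(current - past) < eta1
        # Exclude trivial "loops" where the agent never left the neighbourhood of `past`.
        went_far = np.any(np.linalg.norm(positions[t:-1] - past, axis=1) > eta2)
        if close_again and went_far:
            return 1
    return 0

path = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [2, 0], [1, 0], [0.5, 0]])
print(loop_closure_label(path))  # 1: the agent returned near its start after travelling farther than eta2 away
```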
1611.03673#55
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
56
(a) Random Goal maze (small): comparison of reward clipping. Legend: FF A3C* (61), LSTM A3C* (65), Nav A3C* (73), FF A3C (50), LSTM A3C (51), Nav A3C (49), Human expert (106).
(b) Random Goal maze (small): comparison of depth prediction. Legend: RGBD Nav A3C* (68), Nav A3C*+D1 (MSE) (70), Nav A3C+D1 (80), Nav A3C+D2 (96), Human expert (106).
Figure 9: Results are averaged over the top 5 random hyperparameters for each agent-task configuration. Star in the label indicates the use of reward clipping. Please see text for more details.
# C ADDITIONAL RESULTS
1611.03673#56
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
57
# C ADDITIONAL RESULTS
C.1 REWARD CLIPPING
Figure 9 shows additional learning curves. In particular, in the left plot we show that the baselines (A3C FF and A3C LSTM), as well as the Nav A3C agent without auxiliary losses, perform worse without reward clipping than with reward clipping. It seems that removing reward clipping makes learning unstable in the absence of auxiliary tasks. For this particular reason we chose to show the baselines with reward clipping in our main results.
C.2 DEPTH PREDICTION AS REGRESSION OR CLASSIFICATION TASKS
The right subplot of Figure 9 compares having depth as an input versus as a target. Note that using RGBD inputs to the Nav A3C agent performs even worse than predicting depth as a regression task, and in general is worse than predicting depth as a classification task.
C.3 NON-NAVIGATION TASKS IN 3D MAZE ENVIRONMENTS
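To make the regression-versus-classification comparison concrete, a hedged sketch of the two auxiliary depth losses (tensor shapes and the reduction are assumptions): mean-squared error on the preprocessed depth value versus cross-entropy over the 8 quantisation classes.

```python
import torch
import torch.nn.functional as F

pred_depth = torch.rand(32, 64)               # predicted depth values in [0, 1] for 64 coarse pixels
target_depth = torch.rand(32, 64)             # preprocessed ground-truth depth in [0, 1]
regression_loss = F.mse_loss(pred_depth, target_depth)          # depth treated as a regression target

pred_logits = torch.randn(32, 64, 8)          # per-pixel logits over the 8 depth classes
target_classes = torch.randint(0, 8, (32, 64))
classification_loss = F.cross_entropy(pred_logits.reshape(-1, 8), target_classes.reshape(-1))
```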
1611.03673#57
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
58
We have evaluated the behaviour of the agents introduced in this paper, as well as agents with reward prediction, introduced in (Jaderberg et al., 2017) (Nav A3C*+R), and with a combination of reward prediction from the convnet and depth prediction from the policy LSTM (Nav A3C+RD2), on different 3D maze environments with non-navigation specific tasks. In the first environment, Seek-Avoid Arena, there are apples (yielding 1 point) and lemons (yielding -1 point) placed in an arena, and the agent needs to pick up all the apples before respawning; episodes last 20 seconds. The second environment, Stairway to Melon, is a thin square corridor; in one direction, there is a lemon followed by a stairway to a melon (10 points, resets the level), and in the other direction are 7 apples and a dead end, with the melon visible but not reachable. The agent spawns between the lemon and the apples with a random orientation. Both environments have been released in DeepMind Lab (Beattie et al., 2016). These environments do not require navigation skills such as shortest path planning, but a simple reward
1611.03673#58
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
59
have been released in DeepMind Lab (Beattie et al., 2016). These environments do not require navigation skills such as shortest path planning, but a simple reward identification (lemon vs. apple or melon) and persistent exploration. As Figure 10 shows, there is no major difference between auxiliary tasks related to depth prediction or reward prediction. Depth prediction boosts the performance of the agent beyond that of the stacked LSTM architecture, hinting at a more general applicability of depth prediction beyond navigation tasks.
1611.03673#59
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
60
# C.4 SENSITIVITY TOWARDS HYPER-PARAMETER SAMPLING
For each of the experiments in this paper, 64 replicas were run with hyperparameters (learning rate, entropy cost) sampled from the same interval. Figure 11 shows that the Nav architectures with
(a) Seek-Avoid (learning curves). Legend: FF A3C (29), LSTM A3C (36), Nav A3C (39), Nav A3C+D1 (40), Nav A3C+D2 (40), Nav A3C+D1D2 (39), Nav A3C+R (40), Nav A3C+RD2 (40).
(b) Stairway to Melon (learning curves). Legend: FF A3C (14), LSTM A3C (81), Nav A3C (170), Nav A3C+D1 (170), Nav A3C+D2 (168), Nav A3C+D1D2 (131), Nav A3C+R (170), Nav A3C+RD2 (169).
Map legend: +10 Melon, -1 Lemon, Agent, +1 Apple.
1611.03673#60
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
61
(c) Seek-Avoid (layout)
(d) Stairway to Melon (layout)
Figure 10: Comparison of agent architectures over non-navigation maze configurations, Seek-Avoid Arena and Stairway to Melon, described in detail in (Beattie et al., 2016). Image credits for (c) and (d): (Jaderberg et al., 2017).
(a) Static maze (small), (b) Random Goal maze (large), (c) Random Goal I-maze: plots of Reward AUC against experiment replicas, sorted by decreasing AUC, comparing the agent architectures.
1611.03673#61
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
62
Figure 11: Plot of the Area Under the Curve (AUC) of the rewards achieved by the agents, across different experiments and on 3 different tasks: large static maze with fixed goals, large static maze with comparable layout but with dynamic goals, and the I-maze. The reward AUC values are computed for each replica; 64 replicas were run per experiment and the reward AUC values are sorted by decreasing value.
auxiliary tasks achieve higher results for a comparatively larger number of replicas, hinting at the fact that auxiliary tasks make learning more robust to the choice of hyperparameters.
# C.5 ASYMPTOTIC PERFORMANCE OF THE AGENTS
Finally, we compared the asymptotic performance of the agents, both in terms of navigation (final rewards obtained at the end of the episode) and in terms of their representation in the policy LSTM. Rather than visualising the convolutional filters, we quantify the change in representation, with and
1611.03673#62
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03673
63
Frames    Performance          LSTM A3C*   Nav A3C+D2
120M      Score (mean top 5)   57          103
          Position Acc         33.4        72.4
240M      Score (mean top 5)   90          114
          Position Acc         64.1        80.6
Table 3: Asymptotic performance analysis of two agents in the Random Goal 2 maze, comparing training for 120M Labyrinth frames vs. 240M frames.
without auxiliary task, in terms of position decoding, following the approach explained in Section 5.1. Specifically, we compare the baseline agent (LSTM A3C*) to a navigation agent with one auxiliary task (depth prediction), which gets about twice as many gradient updates for the same number of frames seen in the environment: once for the RL task and once for the auxiliary depth prediction task. As Table 3 shows, the performance of the baseline agent as well as its position decoding accuracy do significantly increase after twice the number of training steps (going from 57 points to 90 points, and from 33.4% to 66.5%), but they do not reach the performance and position decoding accuracy of the Nav A3C+D2 agent after half the number of training frames. For this reason, we believe that the auxiliary tasks do more than simply accelerate training.
1611.03673#63
Learning to Navigate in Complex Environments
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.
http://arxiv.org/pdf/1611.03673
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
cs.AI, cs.CV, cs.LG, cs.RO
11 pages, 5 appendix pages, 11 figures, 3 tables, under review as a conference paper at ICLR 2017
null
cs.AI
20161111
20170113
[]
1611.03530
0
# UNDERSTANDING DEEP LEARNING REQUIRES RETHINKING GENERALIZATION
Chiyuan Zhang* (Massachusetts Institute of Technology) [email protected]
Samy Bengio (Google Brain) [email protected]
Moritz Hardt (Google Brain) [email protected]
Benjamin Recht† (University of California, Berkeley) [email protected]
Oriol Vinyals (Google DeepMind) [email protected]
# ABSTRACT
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training.
1611.03530#0
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
1
Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models. # 1 INTRODUCTION Deep artificial neural networks often have far more trainable model parameters than the number of samples they are trained on. Nonetheless, some of these models exhibit remarkably small generalization error, i.e., difference between “training error” and “test error”. At the same time, it is certainly easy to come up with natural model architectures that generalize poorly. What is it then that distinguishes neural networks that generalize well from those that don’t? A satisfying answer to this question would not only help to make neural networks more interpretable, but it might also lead to more principled and reliable model architecture design.
1611.03530#1
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
2
To answer such a question, statistical learning theory has proposed a number of different complexity measures that are capable of controlling generalization error. These include VC dimension (Vapnik, 1998), Rademacher complexity (Bartlett & Mendelson, 2003), and uniform stability (Mukherjee et al., 2002; Bousquet & Elisseeff, 2002; Poggio et al., 2004). Moreover, when the number of parameters is large, theory suggests that some form of regularization is needed to ensure small generalization error. Regularization may also be implicit as is the case with early stopping. 1.1 OUR CONTRIBUTIONS In this work, we problematize the traditional view of generalization by showing that it is incapable of distinguishing between different neural networks that have radically different generalization performance. *Work performed while interning at Google Brain. †Work performed at Google Brain. Randomization tests. At the heart of our methodology is a variant of the well-known randomization test from non-parametric statistics (Edgington & Onghena, 2007). In a first set of experiments, we train several standard architectures on a copy of the data where the true labels were replaced by random labels. Our central finding can be summarized as: Deep neural networks easily fit random labels.
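The randomization test amounts to resampling the training labels uniformly at random while keeping everything else fixed; a minimal sketch (the dataset, model and training loop are placeholders, not the authors' pipeline):

```python
import numpy as np

def randomize_labels(labels, num_classes, rng):
    """Replace the true labels by labels drawn uniformly at random."""
    return rng.integers(0, num_classes, size=len(labels))

rng = np.random.default_rng(0)
true_labels = np.array([3, 1, 4, 1, 5, 9, 2, 6])
random_labels = randomize_labels(true_labels, num_classes=10, rng=rng)
# Train the same architecture, with the same hyper-parameters, on (images, random_labels);
# the reported finding is that training error still reaches zero while test error stays at chance level.
```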
1611.03530#2
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
3
Deep neural networks easily fit random labels. More precisely, when trained on a completely random labeling of the true data, neural networks achieve 0 training error. The test error, of course, is no better than random chance as there is no correlation between the training labels and the test labels. In other words, by randomizing labels alone we can force the generalization error of a model to jump up considerably without changing the model, its size, hyperparameters, or the optimizer. We establish this fact for several different standard architectures trained on the CIFAR10 and ImageNet classification benchmarks. While simple to state, this observation has profound implications from a statistical learning perspective: 1. The effective capacity of neural networks is sufficient for memorizing the entire data set. 2. Even optimization on random labels remains easy. In fact, training time increases only by a small constant factor compared with training on the true labels. 3. Randomizing labels is solely a data transformation, leaving all other properties of the learning problem unchanged.
1611.03530#3
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
4
3. Randomizing labels is solely a data transformation, leaving all other properties of the learning problem unchanged. Extending on this first set of experiments, we also replace the true images by completely random pixels (e.g., Gaussian noise) and observe that convolutional neural networks continue to fit the data with zero training error. This shows that despite their structure, convolutional neural nets can fit random noise. We furthermore vary the amount of randomization, interpolating smoothly between the case of no noise and complete noise. This leads to a range of intermediate learning problems where there remains some level of signal in the labels. We observe a steady deterioration of the generalization error as we increase the noise level. This shows that neural networks are able to capture the remaining signal in the data, while at the same time fit the noisy part using brute-force. We discuss in further detail below how these observations rule out all of VC-dimension, Rademacher complexity, and uniform stability as possible explanations for the generalization performance of state-of-the-art neural networks. The role of explicit regularization. If the model architecture itself isn't a sufficient regularizer, it remains to see how much explicit regularization helps. We show that explicit forms of regularization, such as weight decay, dropout, and data augmentation, do not adequately explain the generalization error of neural networks. Put differently:
1611.03530#4
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
5
Explicit regularization may improve generalization performance, but is neither necessary nor by itself sufficient for controlling generalization error. In contrast with classical convex empirical risk minimization, where explicit regularization is necessary to rule out trivial solutions, we found that regularization plays a rather different role in deep learning. It appears to be more of a tuning parameter that often helps improve the final test error of a model, but the absence of all regularization does not necessarily imply poor generalization error. As reported by Krizhevsky et al. (2012), ℓ2-regularization (weight decay) sometimes even helps optimization, illustrating its poorly understood nature in deep learning. Finite sample expressivity. We complement our empirical observations with a theoretical construction showing that generically large neural networks can express any labeling of the training data. More formally, we exhibit a very simple two-layer ReLU network with p = 2n + d parameters that can express any labeling of any sample of size n in d dimensions. A previous construction due to Livni et al. (2014) achieved a similar result with far more parameters, namely, O(dn). While our depth 2 network inevitably has large width, we can also come up with a depth k network in which each layer has only O(n/k) parameters.
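To make the flavor of such a construction concrete, here is a minimal sketch of a width-n, depth-2 ReLU network with d + 2n parameters that interpolates any labeling. This is our own paraphrase of the idea stated above, not a quotation of the paper's proof, and the formal statement there may differ in details.

Pick $a \in \mathbb{R}^d$ so that the projections $z_i = \langle a, x_i \rangle$ are all distinct (a generic $a$ works), and relabel the points so that $z_1 < z_2 < \dots < z_n$. Choose thresholds $b_1 < z_1$ and $z_{j-1} < b_j < z_j$ for $j \ge 2$, and consider the network
\[
  c(x) \;=\; \sum_{j=1}^{n} w_j \,\max\bigl(\langle a, x \rangle - b_j,\; 0\bigr).
\]
The matrix $A \in \mathbb{R}^{n \times n}$ with $A_{ij} = \max(z_i - b_j, 0)$ is lower triangular with a strictly positive diagonal, hence invertible, so setting $w = A^{-1} y$ yields $c(x_i) = y_i$ for any targets $y_1, \dots, y_n$.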
1611.03530#5
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
6
While prior expressivity results focused on what functions neural nets can represent over the entire domain, we focus instead on the expressivity of neural nets with regards to a finite sample. In contrast to existing depth separations (Delalleau & Bengio, 2011; Eldan & Shamir, 2016; Telgarsky, 2016; Cohen & Shashua, 2016) in function space, our result shows that even depth-2 networks of linear size can already represent any labeling of the training data. The role of implicit regularization. While explicit regularizers like dropout and weight-decay may not be essential for generalization, it is certainly the case that not all models that fit the training data well generalize well. Indeed, in neural networks, we almost always choose our model as the output of running stochastic gradient descent. Appealing to linear models, we analyze how SGD acts as an implicit regularizer. For linear models, SGD always converges to a solution with small norm. Hence, the algorithm itself is implicitly regularizing the solution. Indeed, we show on small data sets that even Gaussian kernel methods can generalize well with no regularization. Though this doesn't explain why certain architectures generalize better than others, it does suggest that more investigation is needed to understand exactly which properties are inherited by models trained with SGD. 1.2 RELATED WORK
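As a toy illustration of this minimum-norm behavior for linear models, the following sketch (our own; the data, learning rate, and variable names are illustrative assumptions, not the paper's experiments) runs plain SGD from a zero initialization on an overparameterized least-squares problem and compares the result with the minimum-norm interpolating solution given by the pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100            # fewer samples than parameters: many exact interpolants exist
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Plain SGD on the squared loss, starting from w = 0.
w = np.zeros(d)
lr = 1e-2
for epoch in range(5000):
    for i in rng.permutation(n):
        grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x_i^T w - y_i)^2
        w -= lr * grad

# Minimum-norm solution that interpolates the data.
w_min_norm = np.linalg.pinv(X) @ y

print("train residual (SGD):     ", np.linalg.norm(X @ w - y))
print("distance to min-norm sol.:", np.linalg.norm(w - w_min_norm))
```

Because the iterate starts at zero and every update adds a multiple of some row of X, w stays in the row space of X; once the training residual vanishes, the only interpolant in that span is the minimum-norm one, which is what the comparison above illustrates.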
1611.03530#6
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
7
1.2 RELATED WORK Hardt et al. (2016) give an upper bound on the generalization error of a model trained with stochastic gradient descent in terms of the number of steps gradient descent took. Their analysis goes through the notion of uniform stability (Bousquet & Elisseeff, 2002). As we point out in this work, uniform stability of a learning algorithm is independent of the labeling of the training data. Hence, the concept is not strong enough to distinguish between the models trained on the true labels (small generalization error) and models trained on random labels (high generalization error). This also highlights why the analysis of Hardt et al. (2016) for non-convex optimization was rather pessimistic, allowing only a very few passes over the data. Our results show that, even empirically, training neural networks is not uniformly stable for many passes over the data. Consequently, a weaker stability notion is necessary to make further progress along this direction.
1611.03530#7
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
8
There has been much work on the representational power of neural networks, starting from universal approximation theorems for multi-layer perceptrons (Cybenko, 1989; Mhaskar, 1993; Delalleau & Bengio, 2011; Mhaskar & Poggio, 2016; Eldan & Shamir, 2016; Telgarsky, 2016; Cohen & Shashua, 2016). All of these results are at the population level characterizing which mathematical functions certain families of neural networks can express over the entire domain. We instead study the representational power of neural networks for a finite sample of size n. This leads to a very simple proof that even O(n)-sized two-layer perceptrons have universal finite-sample expressivity.
1611.03530#8
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
9
Bartlett (1998) proved bounds on the fat shattering dimension of multilayer perceptrons with sigmoid activations in terms of the ℓ1-norm of the weights at each node. This important result gives a generalization bound for neural nets that is independent of the network size. However, for ReLU networks the ℓ1-norm is no longer informative. This leads to the question of whether there is a different form of capacity control that bounds generalization error for large neural nets. This question was raised in a thought-provoking work by Neyshabur et al. (2014), who argued through experiments that network size is not the main form of capacity control for neural networks. An analogy to matrix factorization illustrated the importance of implicit regularization. # 2 EFFECTIVE CAPACITY OF NEURAL NETWORKS
1611.03530#9
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
10
# 2 EFFECTIVE CAPACITY OF NEURAL NETWORKS Our goal is to understand the effective model capacity of feed-forward neural networks. Toward this goal, we choose a methodology inspired by non-parametric randomization tests. Specifically, we take a candidate architecture and train it both on the true data and on a copy of the data in which the true labels were replaced by random labels. In the second case, there is no longer any relationship between the instances and the class labels. As a result, learning is impossible. Intuition suggests that this impossibility should manifest itself clearly during training, e.g., by training not converging or slowing down substantially. To our surprise, several properties of the training process for multiple standard architectures are largely unaffected by this transformation of the labels. This poses a conceptual challenge. Whatever justification we had for expecting a small generalization error to begin with must no longer apply to the case of random labels.
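A minimal sketch of this randomization test on synthetic data follows (our own illustration in PyTorch; the paper's actual experiments use Inception, AlexNet, and MLPs on CIFAR10/ImageNet, and the sizes and optimizer below are assumptions):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "dataset": random inputs with labels drawn uniformly at random,
# so there is nothing to learn -- yet an overparameterized net still fits it.
n, d, k = 512, 256, 10
X = torch.randn(n, d)
y_random = torch.randint(0, k, (n,))

model = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, k))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y_random)
    loss.backward()
    opt.step()

train_acc = (model(X).argmax(dim=1) == y_random).float().mean().item()
print(f"loss {loss.item():.4f}, training accuracy {train_acc:.2%}")  # approaches 100%
```

Running the same loop on the true labels of a real dataset, and again on a randomized copy, is exactly the comparison described above; only the label array changes between the two runs.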
1611.03530#10
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
11
[Figure 1: three panels of results on CIFAR10, with curves for Inception, AlexNet, and MLP 1x512 under true labels, random labels, shuffled pixels, random pixels, and Gaussian inputs: (a) learning curves (training loss vs. thousand steps), (b) convergence slowdown (relative convergence time vs. label corruption), (c) generalization error growth (test error vs. label corruption).] Figure 1: Fitting random labels and random pixels on CIFAR10. (a) shows the training loss of various experiment settings decaying with the training steps. (b) shows the relative convergence time with different label corruption ratio. (c) shows the test error (also the generalization error since training error is 0) under different label corruptions. To gain further insight into this phenomenon, we experiment with different levels of randomization exploring the continuum between no label noise and completely corrupted labels. We also try out different randomizations of the inputs (rather than labels), arriving at the same general conclusion.
1611.03530#11
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
12
The experiments are run on two image classification datasets, the CIFAR10 dataset (Krizhevsky & Hinton, 2009) and the ImageNet (Russakovsky et al., 2015) ILSVRC 2012 dataset. We test the Inception V3 (Szegedy et al., 2016) architecture on ImageNet and a smaller version of Inception, Alexnet (Krizhevsky et al., 2012), and MLPs on CIFAR10. Please see Section A in the appendix for more details of the experimental setup. 2.1 FITTING RANDOM LABELS AND PIXELS We run our experiments with the following modifications of the labels and input images: • True labels: the original dataset without modification. • Partially corrupted labels: independently with probability p, the label of each image is corrupted as a uniform random class. • Random labels: all the labels are replaced with random ones. • Shuffled pixels: a random permutation of the pixels is chosen and then the same permutation is applied to all the images in both training and test set. • Random pixels: a different random permutation is applied to each image independently. • Gaussian: A Gaussian distribution (with matching mean and variance to the original image dataset) is used to generate random pixels for each image.
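The transformations listed above are simple dataset manipulations; a sketch in NumPy follows (our own illustration, with array shapes, names, and the RNG seed as assumptions rather than the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_labels(y, p, num_classes):
    """Independently replace each label with a uniform random class with probability p."""
    y = y.copy()
    mask = rng.random(len(y)) < p
    y[mask] = rng.integers(0, num_classes, size=mask.sum())
    return y

def shuffle_pixels(X):
    """Apply one fixed pixel permutation to every image (train and test alike)."""
    n, num_pixels = X.shape[0], int(np.prod(X.shape[1:]))
    perm = rng.permutation(num_pixels)
    return X.reshape(n, -1)[:, perm].reshape(X.shape)

def random_pixels(X):
    """Apply an independent pixel permutation to each image."""
    flat = X.reshape(X.shape[0], -1).copy()
    for i in range(flat.shape[0]):
        flat[i] = flat[i, rng.permutation(flat.shape[1])]
    return flat.reshape(X.shape)

def gaussian_images(X):
    """Replace images by Gaussian noise matching the dataset's mean and standard deviation."""
    return rng.normal(X.mean(), X.std(), size=X.shape)
```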
1611.03530#12
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
13
• Gaussian: A Gaussian distribution (with matching mean and variance to the original image dataset) is used to generate random pixels for each image. Surprisingly, stochastic gradient descent with unchanged hyperparameter settings can optimize the weights to fit random labels perfectly, even though the random labels completely destroy the relationship between images and labels. We further break the structure of the images by shuffling the image pixels, and even completely re-sampling random pixels from a Gaussian distribution. But the networks we tested are still able to fit. Figure 1a shows the learning curves of the Inception model on the CIFAR10 dataset under various settings. We expect the objective function to take longer to start decreasing on random labels because initially the label assignments for every training sample are uncorrelated. Therefore, large prediction errors are back-propagated to make large gradients for parameter updates. However, since the random labels are fixed and consistent across epochs, the network starts fitting after going through the training set multiple times. We find the following observations for fitting random labels very interesting: a) we do not need to change the learning rate schedule; b) once the fitting starts, it converges quickly; c) it converges to (over)fit the training set perfectly. Also note that “random pixels” and “Gaussian” start converging faster than “random labels”. This might be because with
1611.03530#13
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
14
random pixels, the inputs are more separated from each other than natural images that originally belong to the same category, which makes it easier to build a network for arbitrary label assignments. On the CIFAR10 dataset, Alexnet and MLPs all converge to zero loss on the training set. The shaded rows in Table 1 show the exact numbers and experimental setup. We also tested random labels on the ImageNet dataset. As shown in the last three rows of Table 2 in the appendix, although it does not reach the perfect 100% top-1 accuracy, 95.20% accuracy is still very surprising for a million random labels from 1000 categories. Note that we did not do any hyperparameter tuning when switching from the true labels to random labels. It is likely that with some modification of the hyperparameters, perfect accuracy could be achieved on random labels. The network also manages to reach ~90% top-1 accuracy even with explicit regularizers turned on.
1611.03530#14
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
15
Partially corrupted labels We further inspect the behavior of neural network training with a varying level of label corruptions from 0 (no corruption) to 1 (complete random labels) on the CIFAR10 dataset. The networks fit the corrupted training set perfectly for all the cases. Figure 1b shows the slowdown of the convergence time with increasing level of label noises. Figure 1c depicts the test errors after convergence. Since the training errors are always zero, the test errors are the same as generalization errors. As the noise level approaches 1, the generalization errors converge to 90% — the performance of random guessing on CIFAR10. 2.2 IMPLICATIONS In light of our randomization experiments, we discuss how our findings pose a challenge for several traditional approaches for reasoning about generalization. Rademacher complexity and VC-dimension. Rademacher complexity is a commonly used and flexible complexity measure of a hypothesis class. The empirical Rademacher complexity of a hypothesis class $\mathcal{H}$ on a dataset $\{x_1, \ldots, x_n\}$ is defined as $\hat{\mathfrak{R}}_n(\mathcal{H}) = \mathbb{E}_\sigma\!\left[\sup_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i h(x_i)\right]$ (1)
1611.03530#15
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
16
$\hat{\mathfrak{R}}_n(\mathcal{H}) = \mathbb{E}_\sigma\!\left[\sup_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i h(x_i)\right]$ (1) where $\sigma_1, \ldots, \sigma_n \in \{\pm 1\}$ are i.i.d. uniform random variables. This definition closely resembles our randomization test. Specifically, $\hat{\mathfrak{R}}_n(\mathcal{H})$ measures the ability of $\mathcal{H}$ to fit random $\pm 1$ binary label assignments. While we consider multiclass problems, it is straightforward to consider related binary classification problems for which the same experimental observations hold. Since our randomization tests suggest that many neural networks fit the training set with random labels perfectly, we expect that $\hat{\mathfrak{R}}_n(\mathcal{H}) = 1$ for the corresponding model class $\mathcal{H}$. This is, of course, a trivial upper bound on the Rademacher complexity that does not lead to useful generalization bounds in realistic settings. A similar reasoning applies to VC-dimension and its continuous analog fat-shattering dimension, unless we further restrict the network. While Bartlett (1998) proves a bound on the fat-shattering dimension in terms of $\ell_1$ norm bounds on the weights of the network, this bound does not apply to the ReLU networks that we consider here. This result was generalized to other norms by Neyshabur et al. (2015), but even these do not seem to explain the generalization behavior that we observe.
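To see why this makes the usual guarantee vacuous, recall one standard form of the Rademacher generalization bound (stated here for a loss bounded in $[0,1]$; the exact constants vary across references and are our recollection rather than the paper's statement): with probability at least $1-\delta$ over the sample, every $h \in \mathcal{H}$ satisfies
\[
  \mathrm{err}_{\mathrm{test}}(h) \;\le\; \mathrm{err}_{\mathrm{train}}(h) \;+\; 2\,\hat{\mathfrak{R}}_n(\mathcal{H}) \;+\; 3\sqrt{\frac{\ln(2/\delta)}{2n}} ,
\]
so once $\hat{\mathfrak{R}}_n(\mathcal{H}) \approx 1$ the right-hand side exceeds the trivial value of 1 and the bound carries no information.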
1611.03530#16
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
17
Uniform stability. Stepping away from complexity measures of the hypothesis class, we can instead consider properties of the algorithm used for training. This is commonly done with some notion of stability, such as uniform stability (Bousquet & Elisseeff, 2002). Uniform stability of an algorithm A measures how sensitive the algorithm is to the replacement of a single example. However, it is solely a property of the algorithm, which does not take into account specifics of the data or the distribution of the labels. It is possible to define weaker notions of stability (Mukherjee et al., 2002; Poggio et al., 2004; Shalev-Shwartz et al., 2010). The weakest stability measure is directly equivalent to bounding generalization error and does take the data into account. However, it has been difficult to utilize this weaker stability notion effectively. # 3. THE ROLE OF REGULARIZATION Most of our randomization tests are performed with explicit regularization turned off. Regularizers are the standard tool in theory and practice to mitigate overfitting in the regime when there are more parameters than data points. Table 1: The training and test accuracy (in percentage) of various models on the CIFAR10 dataset. Performance with and without data augmentation and weight decay are compared. The results of fitting random labels are also included.
1611.03530#17
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
18
| model | #params | random crop | weight decay | train accuracy | test accuracy |
|---|---|---|---|---|---|
| Inception | 1,649,402 | yes | yes | 100.0 | 89.05 |
| | | yes | no | 100.0 | 89.31 |
| | | no | yes | 100.0 | 86.03 |
| | | no | no | 100.0 | 85.75 |
| (fitting random labels) | | no | no | 100.0 | 9.78 |
| Inception w/o BatchNorm | 1,649,402 | no | yes | 100.0 | 83.00 |
| | | no | no | 100.0 | 82.00 |
| (fitting random labels) | | no | no | 100.0 | 10.12 |
| Alexnet | 1,387,786 | yes | yes | 99.90 | 81.22 |
| | | yes | no | 99.82 | 79.66 |
| | | no | yes | 100.0 | 77.36 |
| | | no | no | 100.0 | 76.07 |
| (fitting random labels) | | no | no | 99.82 | 9.86 |
| MLP 3x512 | 1,735,178 | no | yes | 100.0 | 53.35 |
| | | no | no | 100.0 | 52.39 |
| (fitting random labels) | | no | no | 100.0 | 10.48 |
| MLP 1x512 | 1,209,866 | no | yes | 99.80 | 50.39 |
| | | no | no | 100.0 | 50.51 |
| (fitting random labels) | | no | no | 99.34 | 10.61 |
1611.03530#18
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
19
parameters than data points (Vapnik, 1998). The basic idea is that although the original hypothesis space is too large to generalize well, regularizers help confine learning to a subset of the hypothesis space with manageable complexity. By adding an explicit regularizer, say by penalizing the norm of the optimal solution, the effective Rademacher complexity of the possible solutions is dramatically reduced. As we will see, in deep learning, explicit regularization seems to play a rather different role. As the bottom rows of Table 2 in the appendix show, even with dropout and weight decay, InceptionV3 is still able to fit the random training set extremely well if not perfectly. Although not shown explicitly, on CIFAR10, both Inception and MLPs still fit perfectly the random training set with weight decay turned on. However, AlexNet with weight decay turned on fails to converge on random labels. To investigate the role of regularization in deep learning, we explicitly compare behavior of deep nets learning with and without regularizers. Instead of doing a full survey of all kinds of regularization techniques introduced for deep learning, we simply take several commonly used network architectures, and compare the behavior when turning off the equipped regularizers. The following regularizers are covered: • Data augmentation: augment the training set via domain-specific transformations. For image data, commonly used transformations include random cropping, random perturbation of brightness, saturation, hue and contrast.
1611.03530#19
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
20
• Weight decay: equivalent to an ℓ2 regularizer on the weights; also equivalent to a hard constraint of the weights to a Euclidean ball, with the radius determined by the amount of weight decay. • Dropout (Srivastava et al., 2014): mask out each element of a layer output randomly with a given dropout probability. Only the Inception V3 for ImageNet uses dropout in our experiments. Table 1 shows the results of Inception, Alexnet and MLPs on CIFAR10, toggling the use of data augmentation and weight decay. Both regularization techniques help to improve the generalization
1611.03530#20
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
21
[Figure 2: two panels of training and test accuracy vs. thousand training steps: (a) Inception on ImageNet, with train/test curves for combinations of data augmentation, weight decay, and dropout; (b) Inception on CIFAR10, with train/test curves for Inception and Inception w/o BatchNorm.] (a) Inception on ImageNet (b) Inception on CIFAR10
1611.03530#21
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
22
Figure 2: Effects of implicit regularizers on generalization performance. aug is data augmentation, wd is weight decay, BN is batch normalization. The shaded areas are the cumulative best test accuracy, as an indicator of potential performance gain of early stopping. (a) early stopping could potentially improve generalization when other regularizers are absent. (b) early stopping is not necessarily helpful on CIFAR10, but batch normalization stabilizes the training process and improves generalization. performance, but even with all of the regularizers turned off, all of the models still generalize very well. Table 2 in the appendix shows a similar experiment on the ImageNet dataset. An 18% top-1 accuracy drop is observed when we turn off all the regularizers. Specifically, the top-1 accuracy without regularization is 59.80%, while random guessing only achieves 0.1% top-1 accuracy on ImageNet. More strikingly, with data-augmentation on but other explicit regularizers off, Inception is able to achieve a top-1 accuracy of 72.95%. Indeed, it seems like the ability to augment the data using known symmetries is significantly more powerful than just tuning weight decay or preventing low training error.
1611.03530#22
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
23
Inception achieves 80.38% top-5 accuracy without regularization, while the reported number of the winner of ILSVRC 2012 (Krizhevsky et al., 2012) achieved 83.6%. So while regularization is important, bigger gains can be achieved by simply changing the model architecture. It is difficult to say that the regularizers count as a fundamental phase change in the generalization capability of deep nets. # 3.1 IMPLICIT REGULARIZATIONS Early stopping was shown to implicitly regularize on some convex learning problems (Yao et al., 2007; Lin et al., 2016). In Table 2 in the appendix, we show in parentheses the best test accuracy along the training process. It confirms that early stopping could potentially¹ improve the generalization performance. Figure 2a shows the training and testing accuracy on ImageNet. The shaded area indicates the cumulative best test accuracy, as a reference of the potential performance gain for early stopping. However, on the CIFAR10 dataset, we do not observe any potential benefit of early stopping.
1611.03530#23
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]
1611.03530
24
Batch normalization (Ioffe & Szegedy, 2015) is an operator that normalizes the layer responses within each mini-batch. It has been widely adopted in many modern neural network architectures such as Inception (Szegedy et al., 2016) and Residual Networks (He et al., 2016). Although not explicitly designed for regularization, batch normalization is usually found to improve the generalization performance. The Inception architecture uses a lot of batch normalization layers. To test the impact of batch normalization, we create an “Inception w/o BatchNorm” architecture that is exactly the same as Inception in Figure 3, except with all the batch normalization layers removed. Figure 2b compares the learning curves of the two variants of Inception on CIFAR10, with all the explicit regularizers turned off. The normalization operator helps stabilize the learning dynamics, but the impact on the generalization performance is only 3~4%. The exact accuracy is also listed in the section “Inception w/o BatchNorm” of Table 1. ¹We say “potentially” because to make this statement rigorous, we need to have another isolated test set and test the performance there when we choose the early stopping point on the first test set (acting like a validation set).
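For reference, the operator being ablated here is the standard one from Ioffe & Szegedy (2015): for the activations $x_1, \dots, x_m$ of a mini-batch (applied per channel), with learned parameters $\gamma, \beta$ and a small constant $\varepsilon$,
\[
  \mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
  \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
  \mathrm{BN}(x_i) = \gamma\,\frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}} + \beta .
\]
Removing these layers, as in the “Inception w/o BatchNorm” variant above, leaves the rest of the architecture unchanged.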
1611.03530#24
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
http://arxiv.org/pdf/1611.03530
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
cs.LG
Published in ICLR 2017
null
cs.LG
20161110
20170226
[]