Dataset schema:
- doi: string, length 10 to 10
- chunk-id: int64, values 0 to 936
- chunk: string, length 401 to 2.02k
- id: string, length 12 to 14
- title: string, length 8 to 162
- summary: string, length 228 to 1.92k
- source: string, length 31 to 31
- authors: string, length 7 to 6.97k
- categories: string, length 5 to 107
- comment: string, length 4 to 398
- journal_ref: string, length 8 to 194
- primary_category: string, length 5 to 17
- published: string, length 8 to 8
- updated: string, length 8 to 8
- references: list
1511.08099
5
point cards (1 point each). A game consists of a sequence of turns, and each game turn starts with the roll of a die that can make the players obtain resources (depending on the number rolled and resources on the board). The player in turn can trade resources with the bank or through dialogue with other players, and can make use of available resources to build roads, settlements or cities. This game is highly strategic because players often face decisions about when to trade, what resources to request, and what resources to give away—which are influenced by what they need to build. A player can extend build-ups on locations connected to existing pieces, i.e. road, settlement or city, and all settlements and cities must be separated by at least 2 roads. The first player to win 10 victory points wins and all others lose.1
1511.08099#5
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
5
• The same architecture can also learn long binary addition and a number of other algorithmic tasks, such as counting, copying sequences, reversing them, or duplicating them. # 1.1 RELATED WORK The learning of algorithms with neural networks has seen a lot of interest after the success of sequence-to-sequence neural networks on language processing tasks (Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014). An attempt has even been made to learn to evaluate simple python programs with a pure sequence-to-sequence model (Zaremba & Sutskever, 2015a), but more success was seen with more complex models. Neural Turing Machines (Graves et al., 2014) were shown to learn a number of basic sequence transformations and memory access patterns, and their reinforcement learning variant (Zaremba & Sutskever, 2015b) has reasonable performance on a number of tasks as well. Stack, Queue and DeQueue networks (Grefenstette et al., 2015) were also shown to learn basic sequence transformations such as bigram flipping or sequence reversal.
1511.08228#5
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
6
In this paper, we extend previous work on strategic conversation that has applied supervised or reinforcement learning in that we simultaneously learn the feature representation and dialogue policy by using Deep Reinforcement Learning (DRL). We compare our learnt policies against random, rule-based and supervised baselines, and show that the DRL-based agents perform significantly better than the baselines. # 2 Background
1511.08099#6
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
6
The Grid LSTM (Kalchbrenner et al., 2016) is another powerful architecture that can learn to multiply 15-digit decimal numbers. As we will see in the next section, the Grid-LSTM is quite similar to the Neural GPU; the main difference is that the Neural GPU is less recurrent and is explicitly constructed from the highly parallel convolution operator. In image processing, convolutional LSTMs, an architecture similar to the Neural GPU, have recently been used for weather prediction (Shi et al., 2015) and image compression (Toderici et al., 2016). We find this encouraging, as it hints that the Neural GPU might perform well in other contexts. Most comparable to this work are the prior experiments with stack-augmented RNNs (Joulin & Mikolov, 2015). These networks manage to learn and generalize to unseen lengths on a number of algorithmic tasks. But, as we show in Section 3.1, stack-augmented RNNs trained to add numbers up to 20 bits long generalize only to ∼100-bit numbers, never to 200-bit ones, and never without error. Still, their generalization is the best we were able to obtain without using the Neural GPU and far surpasses a baseline LSTM sequence-to-sequence model with attention.
1511.08228#6
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
7
# 2 Background A Reinforcement Learning (RL) agent learns its behaviour from interaction with an environment and the physical or virtual agents within it, where situations are mapped to actions by maximizing a long-term reward signal [29] [30]. An RL agent is typically characterized by: (i) a finite or infinite set of states S = {s_i}; (ii) a finite or infinite set of actions A = {a_j}; (iii) a stochastic state transition function T(s, a, s′) that specifies the next state s′ given the current state s and action a; (iv) a reward function R(s, a, s′) that specifies the reward given to the agent for choosing action a when the environment makes a transition from state s to state s′; and (v) a policy π : S → A that defines a mapping from states to actions. The goal of an RL agent is to select actions by maximising its cumulative discounted reward, defined as Q∗(s, a) = max_π E[r_t + γ r_{t+1} + γ² r_{t+2} + · · · | s_t = s, a_t = a, π], where the function Q∗ represents the maximum sum of rewards r_t discounted by factor γ at each time step. While the RL agent takes actions with probability Pr(a|s) during training, it takes the best actions max_a Pr(a|s) at test time.
1511.08099#7
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
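The RL background in chunk 1511.08099#7 above defines Q∗ as the maximum expected discounted return and notes that the agent acts greedily at test time. The sketch below is a minimal tabular Q-learning loop that illustrates that definition; it is not the paper's DRL agent, and the `env` object (with `reset()`, `actions(s)`, and `step(s, a)` returning `(next_state, reward, done)`) is an assumed toy interface.

```python
# Minimal tabular Q-learning sketch: one-step updates toward r + gamma * max_a' Q(s', a').
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)                      # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy during training: explore with probability epsilon
            if random.random() < epsilon:
                a = random.choice(env.actions(s))
            else:
                a = max(env.actions(s), key=lambda act: Q[(s, act)])
            s_next, r, done = env.step(s, a)
            # bootstrapped target: r + gamma * max over next actions (0 future value if terminal)
            target = r if done else r + gamma * max(Q[(s_next, act)] for act in env.actions(s_next))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q   # at test time the agent acts greedily: argmax_a Q[(s, a)]
```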
1511.08228
7
The quest for learning algorithms has been pursued much more widely with tools other than neural networks. It is known under names such as program synthesis, program induction, automatic programming, or inductive synthesis, and has a long history with many works that we do not cover here; see, e.g., Gulwani (2010) and Kitzelmann (2010) for a more general perspective. Since one of our results is the synthesis of an algorithm for long binary addition, let us recall how this problem has been addressed without neural networks. Importantly, there are two cases of this problem with different complexity. The easier case is when the two numbers that are to be added are aligned at input, i.e., if the first (lower-endian) bit of the first number is presented at the same time as the first bit of the second number, then come the second bits, and so on, as depicted below for x = 9 = 8 + 1 and y = 5 = 4 + 1 written in binary with least-significant bit left. [Figure: the bits of x and y aligned pairwise as Input (x and y aligned), with the Desired Output (x + y).]
1511.08228#7
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
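Chunk 1511.08228#7 above describes the aligned input case for x = 9 and y = 5, least-significant bit first. The small helper below reproduces that depiction; the width of 5 bits is an illustrative choice, not something fixed by the paper.

```python
# Aligned-input depiction: one (x_bit, y_bit) pair per position, lower-endian.
def lsb_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

x, y, width = 9, 5, 5
pairs = list(zip(lsb_bits(x, width), lsb_bits(y, width)))   # aligned bit pairs of x and y
print(pairs)                    # [(1, 1), (0, 0), (0, 1), (1, 0), (0, 0)]
print(lsb_bits(x + y, width))   # desired output 14: [0, 1, 1, 1, 0]
```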
1511.08099
8
To induce the Q function above we use Deep Reinforcement Learning as in [22], which approximates Q∗ using a multilayer convolutional neural network. The Q function of a DRL agent is parameterised as Q(s, a; θ_i), where θ_i are the parameters (weights) of the neural net at iteration i. More specifically, training a DRL agent requires a dataset of experiences D = {e_1, ..., e_N} (also referred to as ‘experience replay memory’), where every experience is described as a tuple e_t = (s_t, a_t, r_t, s_{t+1}). Inducing the Q function consists in applying Q-learning updates over minibatches of experience MB = {(s, a, r, s′) ∼ U(D)} drawn uniformly at random from the full dataset D. (Footnote 1: www.catan.com/service/game-rules)
1511.08099#8
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
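Chunk 1511.08099#8 above describes the experience replay memory D and uniform minibatch sampling. The buffer below is a minimal sketch in that spirit; the capacity and batch size are illustrative choices, not values from the paper.

```python
# Minimal experience-replay buffer: store e_t = (s_t, a_t, r_t, s_{t+1}),
# sample minibatches uniformly at random.
import random
from collections import deque

class ReplayMemory:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)   # oldest experiences fall off the end

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        # MB = {(s, a, r, s') ~ U(D)}: uniform sampling without replacement
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```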
1511.08228
8
[Figure: the bits of x = 9 and y = 5 aligned pairwise as Input, with the Desired Output (x + y) = 14.] In this representation the triples of bits from (x, y, x + y), e.g., (1, 1, 0) (0, 0, 1) (0, 1, 1) (1, 0, 1) as in the figure above, form a regular language. To learn binary addition in this representation it therefore suffices to find a regular expression or an automaton that accepts this language, which can be done with a variant of Angluin's algorithm (Angluin, 1987). But only a few interesting functions have regular representations; for example, long multiplication does not (Blumensath & Grädel, 2000). It is therefore desirable to learn long binary addition without alignment, for example when x and y are provided one after another. This is the representation we use in the present paper. [Figure: x followed by y as Input (x, y), with the Desired Output (x + y).] # 2 THE NEURAL GPU
1511.08228#8
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
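Chunk 1511.08228#8 above notes that the aligned triples (x_bit, y_bit, sum_bit) form a regular language. The sketch below checks membership with a two-state carry automaton; it is only an illustration of why the language is regular, not the Angluin-style learning procedure the text cites.

```python
# Aligned triples of (x, y, x + y), lower-endian, and a carry automaton that accepts them.
def triples(x, y, width):
    s = x + y
    return [((x >> i) & 1, (y >> i) & 1, (s >> i) & 1) for i in range(width)]

def carry_automaton_accepts(seq):
    carry = 0                        # the automaton's only state variable
    for xb, yb, sb in seq:
        total = xb + yb + carry
        if total % 2 != sb:          # emitted sum bit must match
            return False
        carry = total // 2
    return carry == 0                # accept only if no carry is left over

print(triples(9, 5, 4))                            # [(1, 1, 0), (0, 0, 1), (0, 1, 1), (1, 0, 1)]
print(carry_automaton_accepts(triples(9, 5, 5)))   # True: 9 + 5 = 14
```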
1511.08099
9
A Q-learning update at iteration i is thus defined as the loss function L_i(θ_i) = E_{(s,a,r,s′) ∼ U(D)}[(r + γ max_{a′} Q(s′, a′; θ̄_i) − Q(s, a; θ_i))²], where θ_i are the parameters of the neural net at iteration i, and θ̄_i are the target parameters of the neural net at iteration i. The latter are only updated every C steps. This process is implemented in the learning algorithm Deep Q-Learning with Experience Replay described in [22]. # 3 Policy Learning for Strategic Interaction Our approach for strategic interaction optimises two tasks jointly: learning to offer and learning to reply to offers. In addition, our approach learns from constrained search spaces rather than unconstrained ones, resulting in quicker learning and also in learning from only legal (allowed) decisions. # 3.1 Learning to offer and to reply
1511.08099#9
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
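Chunk 1511.08099#9 above gives the Q-learning loss with target parameters refreshed every C steps. The sketch below computes that loss over a sampled minibatch; `q_net` and `target_net` are assumed callables that return a list of per-action Q-values, and terminal-state handling is omitted to keep the sketch short.

```python
# Minibatch DQN loss: mean of (r + gamma * max_a' Q(s', a'; target) - Q(s, a; online))^2.
def dqn_loss(minibatch, q_net, target_net, gamma=0.99):
    total = 0.0
    for s, a, r, s_next in minibatch:
        target = r + gamma * max(target_net(s_next))   # bootstrapped target with frozen parameters
        total += (target - q_net(s)[a]) ** 2
    return total / len(minibatch)

# Every C steps the target parameters are refreshed from the online network,
# e.g. target_net.load_parameters(q_net.parameters()) in whatever framework is used.
```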
1511.08228
9
[Figure: x followed by y as Input (x, y), with the Desired Output (x + y).] # 2 THE NEURAL GPU Before we introduce the Neural GPU, let us recall the architecture of a Gated Recurrent Unit (GRU) (Cho et al., 2014). A GRU is similar to an LSTM, but its input and state are the same size, which makes it easier for us to generalize it later; a highway network could have also been used (Srivastava et al., 2015), but it lacks the reset gate. GRUs have shown performance similar to LSTMs on a number of tasks (Chung et al., 2014; Greff et al., 2015). A GRU takes an input vector x and a current state vector s, and outputs: GRU(x, s) = u ⊙ s + (1 − u) ⊙ tanh(W x + U (r ⊙ s) + B), where u = σ(W′x + U′s + B′) and r = σ(W′′x + U′′s + B′′).
1511.08228#9
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
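Chunk 1511.08228#9 above writes out the GRU equations. The NumPy code below is a direct transcription of those formulas; the size m and the random initialisation are purely illustrative, and input and state share the same size as the text notes.

```python
# GRU(x, s) = u * s + (1 - u) * tanh(W x + U (r * s) + B),
# u = sigmoid(W' x + U' s + B'), r = sigmoid(W'' x + U'' s + B'').
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(x, s, W, U, B, W1, U1, B1, W2, U2, B2):
    u = sigmoid(W1 @ x + U1 @ s + B1)            # update gate
    r = sigmoid(W2 @ x + U2 @ s + B2)            # reset gate
    candidate = np.tanh(W @ x + U @ (r * s) + B)
    return u * s + (1.0 - u) * candidate

m = 8
rng = np.random.default_rng(0)
W, U, W1, U1, W2, U2 = [rng.standard_normal((m, m)) * 0.1 for _ in range(6)]
B, B1, B2 = [np.zeros(m) for _ in range(3)]
out = gru_cell(rng.standard_normal(m), np.zeros(m), W, U, B, W1, U1, B1, W2, U2, B2)
print(out.shape)   # (8,): state-sized output
```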
1511.08099
10
# 3.1 Learning to offer and to reply A strategic agent has to offer a trade to its opponent agents (or players). In the case of the game of Settlers of Catan, an example trading offer is I will give anyone sheep for clay. Several things can be observed from this simple example. First, note that this offer may include multiple givable and receivable resources. Second, note that the offer is addressed to all opponents (as opposed to one opponent in particular, which could also be possible). Third, note that not all offers are allowed at a particular point in the game – they depend on the particular state of the game and resources available to the player for trading. The goal of the agent is to learn to make legal offers that will yield the largest pay-off in the long run. A strategic agent also has to reply to trading offers made by an opponent. In the case of the game of Settlers of Catan, the responses can be narrowed down to (a) accepting the offer, (b) rejecting it, or (c) replying with a counteroffer (e.g. I want two sheep for one clay). Note that this set of responses is available at any point in the game once there is an offer made by any agent (or player). Similarly to the task above, the goal of the agent is to learn to choose a response that will yield the largest pay-off in the long run.
1511.08099#10
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
10
In the equations above, W, W ′, W ′′, U, U ′, U ′′ are matrices and B, B′, B′′ are bias vectors; these are the parameters that will be learned. We write W x for a matrix-vector multiplication and r ⊙ s for elementwise vector multiplication. The vectors u and r are called gates since their elements are in [0, 1] — u is the update gate and r is the reset gate. In recurrent neural networks a unit like GRU is applied at every step and the result is both passed as new state and used to compute the output. In a Neural GPU we do not process a new input in every step. Instead, all inputs are written into the starting state s0. This state has 2-dimensional structure: it consists of w × h vectors of m numbers, i.e., it is a 3-dimensional tensor of shape [w, h, m]. This mental image evolves in time in a way defined by a convolutional gated recurrent unit: CGRU(s) = u ⊙ s + (1 − u) ⊙ tanh(U ∗ (r ⊙ s) + B), where u = σ(U ′ ∗ s + B′) and r = σ(U ′′ ∗ s + B′′).
1511.08228#10
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
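Chunk 1511.08228#10 above defines the CGRU as GRU-style gating over a [w, h, m] mental image, with matrix products replaced by a kernel-bank convolution. The NumPy sketch below follows those equations; the zero-padded convolution helper assumes odd kernel sizes, and all shapes and initialisations are illustrative assumptions rather than the paper's implementation.

```python
# CGRU(s) = u * s + (1 - u) * tanh(U conv (r * s) + B), with convolutional gates.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kernel_conv(U, s):
    # U: [kw, kh, m, m] kernel bank, s: [w, h, m] mental image; zero padding, stride 1
    kw, kh, m, _ = U.shape
    w, h, _ = s.shape
    padded = np.zeros((w + kw - 1, h + kh - 1, m))
    padded[kw // 2:kw // 2 + w, kh // 2:kh // 2 + h, :] = s
    out = np.zeros_like(s)
    for u in range(kw):
        for v in range(kh):
            out += padded[u:u + w, v:v + h, :] @ U[u, v]   # sum over kernel offsets and channels
    return out

def cgru(s, params):
    U, B, U1, B1, U2, B2 = params
    u = sigmoid(kernel_conv(U1, s) + B1)    # update gate
    r = sigmoid(kernel_conv(U2, s) + B2)    # reset gate
    return u * s + (1.0 - u) * np.tanh(kernel_conv(U, r * s) + B)

rng = np.random.default_rng(0)
w, h, m, k = 4, 9, 6, 3
params = [rng.standard_normal((k, k, m, m)) * 0.1 if i % 2 == 0 else np.zeros(m) for i in range(6)]
s = rng.standard_normal((w, h, m))
print(cgru(s, params).shape)   # (4, 9, 6): same shape as the input mental image
```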
1511.08099
11
While one can aim for optimising only one of the tasks above, a joint optimisation of these two tasks equips an automatic trading agent with more completeness. To do that, given an environment state space S = {s_i}, trading negotiations A_o, and responses A_r, the goal of a strategic learning agent consists of inducing an optimal policy so that action selection can be defined as π∗(s) = arg max_{a ∈ A_o ∪ A_r} Q∗(s, a), where the Q function is estimated as described in the previous section, A_o is the set of trading negotiations in turn, and A_r is the set of responses. # 3.2 Deep Learning from constrained action sets
1511.08099#11
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
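Chunk 1511.08099#11 above defines the joint policy as an argmax over the union of trade offers and responses. The sketch below shows that selection rule; the `q_values` dictionary stands in for the learned Q-function evaluated at the current state, and the action names are made up for illustration.

```python
# pi*(s) = argmax over a in (A_o union A_r) of Q*(s, a).
def select_action(q_values, offers, responses):
    candidates = list(offers) + list(responses)      # A_o union A_r
    return max(candidates, key=lambda a: q_values[a])

q_values = {"offer: give sheep, receive clay": 0.42, "accept": 0.10,
            "reject": 0.21, "counteroffer": 0.33}
print(select_action(q_values, ["offer: give sheep, receive clay"],
                    ["accept", "reject", "counteroffer"]))
```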
1511.08228
11
U ∗ s above denotes the convolution of a kernel bank U with the mental image s. A kernel bank is a 4-dimensional tensor of shape [kw, kh, m, m], i.e., it contains kw · kh · m² parameters, where kw and kh are kernel width and height. It is applied to a mental image s of shape [w, h, m], which results in another mental image U ∗ s of the same shape defined by: U ∗ s[x, y, i] = Σ_{u=⌊−kw/2⌋}^{⌊kw/2⌋} Σ_{v=⌊−kh/2⌋}^{⌊kh/2⌋} Σ_{c=1}^{m} s[x + u, y + v, c] · U[u, v, c, i]. In the equation above the index x + u might sometimes be negative or larger than the size of s, and in such cases we assume the value is 0. This corresponds to the standard convolution operator used in convolutional neural networks with zero padding on both sides and stride 1. Using the standard operator has the advantage that it is heavily optimized (see Section 4 for Neural GPU performance). New work on faster convolutions, e.g., Lavin & Gray (2015), can be directly used in a Neural GPU.
1511.08228#11
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
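Chunk 1511.08228#11 above spells out the kernel-bank convolution element by element. The function below is a literal, unoptimized transcription of that triple sum for a single output position, with zero padding for out-of-range indices; it assumes odd kernel sizes so that the index shift by ⌊kw/2⌋ matches the summation bounds.

```python
# One output element of U * s: sum over kernel offsets (u, v) and channels c.
import numpy as np

def conv_element(U, s, x, y, i):
    kw, kh, m, _ = U.shape
    w, h, _ = s.shape
    total = 0.0
    for u in range(-(kw // 2), kw // 2 + 1):
        for v in range(-(kh // 2), kh // 2 + 1):
            for c in range(m):
                if 0 <= x + u < w and 0 <= y + v < h:          # zero padding outside s
                    total += s[x + u, y + v, c] * U[u + kw // 2, v + kh // 2, c, i]
    return total

rng = np.random.default_rng(0)
U = rng.standard_normal((3, 3, 4, 4))   # kw = kh = 3, m = 4
s = rng.standard_normal((5, 5, 4))
print(conv_element(U, s, 2, 2, 0))
```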
1511.08099
12
# 3.2 Deep Learning from constrained action sets While the behaviour of a strategic agent can be trained as described above, using deep learning with large action sets can be prohibitively expensive in terms of computation time. Our solution to this limitation consists in learning from constrained action sets rather than whole and static action sets. We distinguish two action sets: an action set A_r which contains responses to trading negotiations and remains static, and an action set A_o which contains those trading negotiations that are valid at any given point in the game (i.e. which the player is able to make due to the resources that they hold). We refer to the latter action set as Ā_o, which contains a dynamic set |Ā_o| ≤ |A_o| of trading negotiations available according to the game state and available resources (e.g. the agent would not offer a particular resource if it does not have it). Thus, we reformulate the goal of a strategic learning agent as inducing an optimal policy so that action selection can be defined as π∗(s) = arg max_{a ∈ Ā_o ∪ A_r} Q∗(s, a), where the Q function is still estimated as described in Section 2, Ā_o is the constrained set of trading negotiations in turn (i.e. legal offers), and A_r is the set of responses. Note that the size of Ā_o will vary depending on the game state.
1511.08099#12
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
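Chunk 1511.08099#12 above constrains the offer set to legal offers before the argmax. The sketch below shows one way such a filter could look; the encoding of an offer as a mapping from givable resource to amount is an assumption for illustration, not the paper's representation.

```python
# Build the dynamic set A_o_bar (|A_o_bar| <= |A_o|) by keeping only offers
# whose givable resources the agent actually holds.
def constrain_offers(all_offers, resources):
    return [name for name, give in all_offers.items()
            if all(resources.get(res, 0) >= amount for res, amount in give.items())]

all_offers = {"give 1 sheep for 1 clay": {"sheep": 1},
              "give 2 wheat for 1 ore": {"wheat": 2}}
resources = {"sheep": 3, "wheat": 1}
print(constrain_offers(all_offers, resources))   # only the sheep offer is legal
```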
1511.08228
12
Knowing how a CGRU gate works, the definition of an l-layer Neural GPU is simple, as depicted in Figure 1. The given sequence i = (i_1, . . . , i_n) of n discrete symbols from {0, . . . , I} is first embedded into the mental image s_0 by concatenating the vectors obtained from an embedding lookup of the input symbols into its first column. More precisely, we create the starting mental image s_0 of shape [w, n, m] by using an embedding matrix E of shape [I, m] and setting s_0[0, k, :] = E[i_k] (in python notation) for all k = 1 . . . n (here i_1, . . . , i_n is the input). All other elements of s_0 are set to 0. Then, we apply l different CGRU gates in turn for n steps to produce the final mental image s_fin: s_{t+1} = CGRU_l(CGRU_{l−1}(. . . CGRU_1(s_t) . . .)) and s_fin = s_n.
1511.08228#12
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
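Chunk 1511.08228#12 above describes how the input is embedded into s_0 and how the l CGRU layers are applied for n steps. The shape-level sketch below mirrors that description; `cgru_layers` is an assumed list of callables mapping a [w, n, m] mental image to one of the same shape, and the identity "layers" in the toy check only exercise the shapes.

```python
# Neural GPU forward pass at the level of shapes: embed, then apply l CGRUs for n steps.
import numpy as np

def neural_gpu_forward(inputs, E, cgru_layers, w):
    n, m = len(inputs), E.shape[1]
    s = np.zeros((w, n, m))
    s[0] = E[np.asarray(inputs)]      # s_0[0, k, :] = E[i_k]; everything else stays 0
    for _ in range(n):                # n time steps ...
        for layer in cgru_layers:     # ... each applying the l CGRU gates in turn
            s = layer(s)
    return s                          # s_fin, read out from its first column

E = np.eye(4)                                                       # vocabulary of 4 symbols, m = 4
print(neural_gpu_forward([1, 0, 2], E, [lambda s: s], w=4).shape)   # (4, 3, 4)
```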
1511.08099
13
# 4 Experiments and Results In this section we apply the approach above to conversational agents that learn to offer and to reply in the game of Settlers of Catan. # 4.1 Experimental Setting [Figure 1: Integrated system of the Deep Reinforcement Learning (DRL) agent for strategic interaction. (left) GUI of the board game “Settlers of Catan” [33]. (right) Multilayer neural network of the DRL agent; see text for details.] # 4.1.1 Integrated learning environment
1511.08099#13
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
13
[Figure 1: Neural GPU with 2 layers and width w = 3 unfolded in time.] The result of a Neural GPU is produced by multiplying each item in the first column of s_fin by an output matrix O to obtain the logits l_k = O s_fin[0, k, :] and then selecting the maximal one: o_k = argmax(l_k). During training we use the standard loss function, i.e., we compute a softmax over the logits l_k and use the negative log probability of the target as the loss. Since all components of a Neural GPU are clearly differentiable, we can train using any stochastic gradient descent optimizer. For the results presented in this paper we used the Adam optimizer (Kingma & Ba, 2014) with ε = 10^−4 and the gradient norm clipped to 1. The number of layers was set to l = 2, the width of mental images was constant at w = 4, the number of maps in each mental image point was m = 24, and the convolution kernel width and height were always kw = kh = 3.
1511.08228#13
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
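Chunk 1511.08228#13 above describes the readout (logits from the first column of s_fin, then an argmax) and the softmax cross-entropy loss. The sketch below shows both in NumPy; the toy shapes and random values are assumptions for illustration.

```python
# Readout l_k = O s_fin[0, k, :], output o_k = argmax(l_k), and the softmax NLL loss.
import numpy as np

def read_out(s_fin, O):
    logits = s_fin[0] @ O.T                  # [n, output_vocab]: l_k for every position k
    return logits, logits.argmax(axis=-1)    # o_k = argmax(l_k)

def nll_loss(logits, targets):
    # negative log probability of the target symbol under a softmax over the logits
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

rng = np.random.default_rng(0)
s_fin, O = rng.standard_normal((4, 5, 24)), rng.standard_normal((4, 24))   # w=4, n=5, m=24, vocab=4
logits, outputs = read_out(s_fin, O)
print(outputs, nll_loss(logits, outputs))
```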
1511.08099
14
# 4.1.1 Integrated learning environment Figure 1(left) shows our integrated learning environment. On the left-hand side, the JSettlers benchmark framework [33] receives an action (trading offer or response) and outputs the next game state and numerical reward. On the right-hand side, a Deep Reinforcement Learning (DRL) agent receives the state and reward, updates its policy during learning, and outputs an action following its learnt policy. Our integrated system is based on a multi-threaded implementation, where each player makes use of a synchronised thread. In addition, this system runs under a client-server architecture, where the learning agent acts as the ‘server’ and the game acts as the ‘client’. They communicate by exchanging messages, where the server tells the client the action to execute, and the client tells the server the game state and reward observed. Our DRL agents are based on the ConvNetJS tool [15], which implements the algorithm ‘Deep Q-Learning with experience replay’ proposed by [22]. We extended this tool to support multi-threaded and client-server processing with constrained search spaces.2 # 4.1.2 Characterisation of the learning agent
1511.08099#14
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
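Chunk 1511.08099#14 above describes a client-server loop in which the agent (server) sends actions and the game (client) reports state and reward. The sketch below is an abstract rendering of that exchange, assuming a `connection` object with `send`/`receive` methods and an `agent` with `observe`/`select_action`; the real system uses JSettlers, ConvNetJS, and a multi-threaded socket setup that is not reproduced here.

```python
# Abstract agent-side message loop: receive (state, reward), reply with an action.
def agent_server_loop(connection, agent):
    while True:
        msg = connection.receive()          # e.g. {"state": ..., "reward": ..., "done": ...}
        agent.observe(msg["state"], msg["reward"])
        if msg["done"]:
            break
        connection.send({"action": agent.select_action(msg["state"])})
```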
1511.08228
14
Computational power of Neural GPUs. While the above definition is simple, it might not be immediately obvious what kind of functions a Neural GPU can compute. Why can we expect it to be able to perform long multiplication? To answer such questions it is useful to draw an analogy between a Neural GPU and a discrete 2-dimensional cellular automaton. Except for being discrete and the lack of a gating mechanism, such automata are quite similar to Neural GPUs. Of course, these are large exceptions. Dense representations have often more capacity than purely discrete states and the gating mechanism is crucial to avoid vanishing gradients during training. But the computational power of cellular automata is much better understood. In particular, it is well known that a cellular automaton can exploit its parallelism to multiply two n-bit numbers in O(n) steps using Atrubin’s algorithm. We recommend the online book (Vivien, 2003) to get an understanding of this algorithm and the computational power of cellular automata. # 3 EXPERIMENTS In this section, we present experiments showing that a Neural GPU can successfully learn a number of algorithmic tasks and generalize well beyond the lengths that it was trained on. We start with the two tasks we focused on, long binary addition and long binary multiplication. Then, to demonstrate the generality of the model, we show that Neural GPUs perform well on several other tasks as well.
1511.08228#14
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
15
# 4.1.2 Characterisation of the learning agent The state space S = {s_i} of our learning agent includes 160 non-binary features that describe the game board and the available resources. Table 1 describes the state variables that represent the input nodes, which we normalise to the range [0..1]. These features represent a high-dimensional state space—only approachable via reinforcement learning with function approximation. (Footnote 2: The code of this substantial extension, with an illustrative dialogue system, is available at https://github.com/cuayahuitl/SimpleDS)
1511.08099#15
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
15
# 3.1 ADDITION AND MULTIPLICATION The two core tasks on which we study the performance of Neural GPUs are long binary addition and long binary multiplication. We chose them because they are fundamental tasks and because there is no known linear-time algorithm for long multiplication. As described in Section 2, we input a sequence of discrete symbols into the network and we read out a sequence of symbols again. For binary addition, we use a set of 4 symbols: {0, 1, +, PAD} and for multiplication we use {0, 1, ·, PAD}. The PAD symbol is only used for padding so we depict it as empty space below. Long binary addition (badd) is the task of adding two numbers represented lower-endian in binary notation. We always add numbers of the same length, but we allow them to have 0s at start, so numbers of differing lengths can be padded to equal size. Given two d-bit numbers the full sequence length is n = 2d + 1, as seen in the example below, representing (1 + 4) + (2 + 4 + 8) = 5 + 14 = 19 = (16 + 2 + 1).
1511.08228#15
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
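Chunk 1511.08228#15 above describes the badd task: two d-bit numbers, lower-endian, with total input length n = 2d + 1. The generator below produces one such training example; padding the target to length n is an assumption about the layout, made only so the sketch is concrete.

```python
# One badd training example: lower-endian bits of x, a '+' separator, lower-endian bits of y.
import random

def badd_example(d):
    x = random.randrange(2 ** d)
    y = random.randrange(2 ** d)
    lsb = lambda v, width: [(v >> i) & 1 for i in range(width)]
    inputs = lsb(x, d) + ["+"] + lsb(y, d)     # length n = 2d + 1
    target = lsb(x + y, 2 * d + 1)             # x + y always fits in n bits
    return inputs, target

print(badd_example(4))
```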
1511.08099
16
(Footnote 2: The code of this substantial extension, with an illustrative dialogue system, is available at https://github.com/cuayahuitl/SimpleDS)

Table 1: Feature set (size=160) of the DRL agent for trading in the game of Settlers of Catan

| Num. | Feature | Domain | Description |
|---|---|---|---|
| 1 | hasClay | {0...10} | Number of clay units available |
| 1 | hasOre | {0...10} | Number of ore units available |
| 1 | hasSheep | {0...10} | Number of sheep units available |
| 1 | hasWheat | {0...10} | Number of wheat units available |
| 1 | hasWood | {0...10} | Number of wood units available |
| 19 | hexes | {0...5} | Type of resource: 0=desert, 1=clay, 2=ore, 3=sheep, 4=wheat, 5=wood |
| 54 | nodes | {0...4} | Where builds are located: 0=no settlement or city, 1 and 2=opponent builds, 3 and 4=agent builds |
| 80 | edges | {0...2} | Where roads are located: 0=no road in given edge, 1=opponent road, 2=agent road |
| 1 | robber | {0...5} | On type of resource: 0=desert, 1=clay, 2=ore, 3=sheep, 4=wheat, 5=wood |
| 1 | turns | {0..100} | Number of turns of the game so far |
1511.08099#16
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
16
| Task@Bits | Neural GPU | LSTM+A | stackRNN |
|---|---|---|---|
| badd@20 | 100% | 100% | 100% |
| badd@25 | 100% | 73% | 100% |
| badd@100 | 100% | 0% | 88% |
| badd@200 | 100% | 0% | 0% |
| badd@2000 | 100% | 0% | 0% |
| bmul@20 | 100% | 0% | N/A |
| bmul@25 | 100% | 0% | N/A |
| bmul@200 | 100% | 0% | N/A |
| bmul@2000 | 100% | 0% | N/A |

Table 1: Neural GPU, stackRNN, and LSTM+A results on addition and multiplication. The table shows the fraction of test cases for which every single bit of the model’s output is correct.

Input:  1 0 1 0 0 + 0 1 1 1 0
Output: 1 1 0 0 1 0

Long binary multiplication (bmul) is the task of multiplying two binary numbers, represented lower-endian. Again, we always multiply numbers of the same length, but we allow them to have 0s at the start, so numbers of differing lengths can be padded to equal size. Given two d-bit numbers, the full sequence length is again n = 2d+1, as seen in the example below, representing (2+4)·(2+8) = 6 · 10 = 60 = 32 + 16 + 8 + 4.
1511.08228#16
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
17
Table 1: Feature set (size=160) of the DRL agent for trading in the game of Settlers of Catan The action space A = {ai} of our learning agents includes 70 actions for offering trading negotiations3 and 3 actions4 for replying to offers from opponents. Notice that our offer actions only make use of up to two givable resources and only one receivable resource is considered. The state transition function of our agents is based on the game itself using the JSettlers framework [33]. In addition, our strategic interactions were carried out at the semantic level rather than at the word level, for example: S4C is a higher-level representation of “I will give you sheep for clay”. Furthermore, our trained agents were active only during the selection of trading offers and replies to offers; the functionality of the rest of the game was based on the JSettlers framework. The reward function of our agent is based on the game points provided by the JSettlers framework, but we make a distinction between reply actions and offer actions. This is due to the fact that we consider reply actions as high-level actions, and offer actions as lower-level ones. Our reward function is defined as:

r = GainedPoints × w_gp   if GainedPoints > 0
r = TotalPoints × w_tp    otherwise,
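A minimal Python sketch of the reward function defined above, using the weights w_gp and w_tp reported in the continuation of this section (function and argument names are illustrative):

```python
# Sketch of the reward function above; reply and offer actions use different weights.

def reward(gained_points, total_points, is_reply_action):
    if is_reply_action:
        w_gp, w_tp = 1.0, 0.1    # weights for (high-level) reply actions
    else:
        w_gp, w_tp = 0.1, 0.01   # weights for (lower-level) offer actions
    if gained_points > 0:
        return gained_points * w_gp
    return total_points * w_tp
```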
1511.08099#17
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
17
Input:  0 1 1 0 · 0 1 0 1
Output: 0 0 1 1 1 1 0 0

Models. We compare three different models on the above tasks. In addition to the Neural GPU we include a baseline LSTM recurrent neural network with an attention mechanism. We call this model LSTM+A as it is exactly the same as described in (Vinyals & Kaiser et al., 2015). It is a 3-layer model with 64 units in each LSTM cell in each layer, which results in about 200k parameters (the Neural GPU uses m = 24 and has about 30k parameters). Both the Neural GPU and the LSTM+A baseline were trained using all the techniques described below, including curriculum training and gradient noise. Finally, on binary addition, we also include the stack-RNN model from (Joulin & Mikolov, 2015). This model was not trained using our training regime, but in exactly the way as provided in its source code, only with nmax = 41. To match our training procedure, we ran it 729 times (cf. Section 3.3) with different random seeds and we report the best obtained result.
1511.08228#17
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
18
r = GainedPoints × w_gp   if GainedPoints > 0
r = TotalPoints × w_tp    otherwise,

where GainedPoints = points at time t minus the points at time t − 1, and TotalPoints refers to the accumulated number of points of the trained agent during the game. We used the following weights for reply actions: {w_gp = 1, w_tp = 0.1}, and the following for offer actions: {w_gp = 0.1, w_tp = 0.01}. The model architecture consists of a fully-connected multilayer neural network with 160 nodes in the input layer (see Table 1), 50 nodes in the first hidden layer, 50 nodes in the second hidden layer, and 73 nodes (action set) in the output layer. The hidden layers use ReLU (Rectified Linear Unit) activation functions, see [23] for details. Finally, the learning parameters are as follows: experience replay size=30K, discount factor=0.7, minimum epsilon=0.05, learning rate=0.001, and batch size=64. A comprehensive analysis comparing multiple state representations, action sets, reward functions and learning parameters is left for future work. # 4.2 Experimental Results
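For concreteness, the 160-50-50-73 fully-connected Q-network described above can be written as the following illustrative PyTorch sketch (this is not the authors' implementation):

```python
# Illustrative sketch of the Q-network architecture described in Section 4.1.
import torch.nn as nn

q_network = nn.Sequential(
    nn.Linear(160, 50),  # 160 input features (Table 1) -> first hidden layer
    nn.ReLU(),
    nn.Linear(50, 50),   # second hidden layer
    nn.ReLU(),
    nn.Linear(50, 73),   # one Q-value per action (70 offers + 3 replies)
)
```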
1511.08099#18
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
18
Results. We also measure the rate of fully correct output sequences and report the results in Table 1. For both tasks, we show first the error at the maximum length seen during training, i.e., for 20-bit numbers. Note that LSTM+A is not able to learn long binary multiplication at this length; it does not even fit the training data. Then we report numbers for sizes not seen during training. As you can see, a Neural GPU can learn a multiplication algorithm that generalizes perfectly, at least as far as we were able to test (technical limits of our implementation prevented us from testing much above 2000 bits). Even for the simpler task of binary addition, stack-RNNs work only up to length 100. This is still much better than the LSTM+A baseline, which only generalizes to length 25. 3.2 OTHER ALGORITHMIC TASKS In addition to the two main tasks above, we tested Neural GPUs on the following simpler algorithmic tasks. The same architecture as used above was able to solve all of the tasks described below, i.e., after being trained on sequences of length up to 41 we were not able to find any error on sequences of any length we tested (up to 4001). Copying sequences is very easy for a Neural GPU; in fact all models converge quickly and generalize perfectly.
1511.08228#18
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08228
19
Copying sequences is very easy for a Neural GPU; in fact all models converge quickly and generalize perfectly. Reversing sequences is the task of reversing a sequence of bits, where n is the length of the sequence. Duplicating sequences is the task of emitting the input bit sequence twice on the output, as in the example below. We use the padding symbol on the input to make it match the output length. We trained on inputs of up to 20 bits, so outputs were up to 40 bits long, and tested on inputs up to 2000 bits long.

Input:  1 1 0 0
Output: 1 1 0 0 1 1 0 0

Counting by sorting bits is the task of sorting the input bit sequence on the output. Since there are only 2 symbols to sort, this is a counting task – the network must count how many 0s are in the input and produce the output accordingly, as in the example below.

Input:  1 0 1 1 0 0 1 0
Output: 0 0 0 0 1 1 1 1

3.3 TRAINING TECHNIQUES Here we describe the training methods that we used to improve our results. Note that we applied these methods to the LSTM+A baseline as well, to keep the above comparison fair. We focus on the most important elements of our training regime; all less relevant details can be found in the code, which is released as open-source.1
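For reference, the target functions of these simpler tasks can be written in a few lines of Python (a sketch of the task definitions only, not training code):

```python
# Reference definitions of the simpler algorithmic tasks (target outputs only).

def copy_task(bits):
    return list(bits)

def reverse_task(bits):
    return list(reversed(bits))

def duplicate_task(bits):
    # the input is padded to the output length; the target is the sequence twice
    return list(bits) + list(bits)

def count_by_sorting_task(bits):
    # equivalent to counting the 0s: all 0s first, then all 1s
    return sorted(bits)
```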
1511.08228#19
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
20
3Trading negotiation actions, where C=clay, O=ore, S=sheep, W =wheat, and D = wood: C4D, C4O, C4S, C4W, CC4D, CC4O, CC4S, CC4W, CD4O, CD4S, CD4W, CO4D, CO4S, CO4W, CS4D, CS4O, CS4W, CW4D, CW4O, CW4S, D4C, D4O, D4S, D4W, DD4C, DD4O, DD4S, DD4W, O4C, O4D, O4S, O4W, OD4C, OD4S, OD4W, OO4C, OO4D, OO4S, OO4W, OS4C, OS4D, OS4W, OW4C, OW4D, OW4S, S4C, S4D, S4O, S4W, SD4C, SD4O, SD4W, SS4C, SS4D, SS4O, SS4W, SW4C, SW4D, SW4O, W4C, W4D, W4O, W4S, WD4C, WD4O, WD4S,
1511.08099#20
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
20
Grid search. Each result we report is obtained by running a grid search over 3^6 = 729 instances. We consider 3 settings of the learning rate, initial parameter scale, and 4 other hyperparameters discussed below: the relaxation pull factor, curriculum progress threshold, gradient noise scale, and dropout. An important effect of running this grid search is also that we train 729 models with different random seeds every time. Usually only a few of these models generalize to 2000-bit numbers, but a significant fraction works well on 200-bit numbers, as discussed below. Curriculum learning. We use a curriculum learning approach inspired by Zaremba & Sutskever (2015a). This means that we train, e.g., on 7-digit numbers only after crossing a curriculum progress threshold (e.g., over 90% fully correct outputs) on 6-digit numbers. However, with 20% probability we pick a minibatch of d-digit numbers with d chosen uniformly at random between 1 and 20.
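A minimal sketch of the curriculum schedule described above, using the stated threshold and mixing probability (function names are illustrative):

```python
import random

# Sketch of the curriculum schedule: mostly train at the current curriculum length,
# but with 20% probability pick a uniformly random length in [1, 20].

def sample_training_length(curriculum_len, max_len=20, random_prob=0.2):
    if random.random() < random_prob:
        return random.randint(1, max_len)
    return curriculum_len

def maybe_advance(curriculum_len, fraction_fully_correct, threshold=0.9, max_len=20):
    """Move to longer numbers once accuracy at the current length crosses the
    curriculum progress threshold (e.g., over 90% fully correct outputs)."""
    if fraction_fully_correct > threshold and curriculum_len < max_len:
        return curriculum_len + 1
    return curriculum_len
```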
1511.08228#20
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08228
21
Gradient noise. To improve training speed and stability we add noise to gradients in each training step. Inspired by the schedule from Welling & Teh (2011), we add to gradients a noise drawn from the normal distribution with mean 0 and variance inversely proportional to the square root of the step number (i.e., with standard deviation inversely proportional to the 4-th root of the step number). We multiply this noise by the gradient noise scale and, to avoid noise in converged models, we also multiply it by the fraction of non-fully-correct outputs (which is 0 for a perfect model).

Gate cutoff. In Section 2 we defined the gates in a CGRU using the sigmoid function, e.g., we wrote u = σ(U′ ∗ s + B′). Usually the standard sigmoid function is used, σ(x) = 1/(1 + e^−x). We found that adding a hard threshold on the top and bottom helps slightly in our setting, so we use 1.2σ(x) − 0.1 cut to the interval [0, 1], i.e., σ′(x) = max(0, min(1, 1.2σ(x) − 0.1)).
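A small Python sketch of these two tricks under the stated schedule (shapes and framework integration omitted; names are illustrative):

```python
import numpy as np

def gradient_noise(step, grad, noise_scale, frac_incorrect):
    """Gaussian noise with std inversely proportional to the 4th root of the step
    number, scaled by the gradient noise scale and by the fraction of non-fully-
    correct outputs (so it vanishes for a converged model)."""
    std = noise_scale / max(step, 1) ** 0.25
    return grad + frac_incorrect * np.random.normal(0.0, std, size=grad.shape)

def hard_sigmoid(x):
    """Cut-off gate nonlinearity: 1.2*sigmoid(x) - 0.1, clipped to [0, 1]."""
    s = 1.0 / (1.0 + np.exp(-x))
    return np.clip(1.2 * s - 0.1, 0.0, 1.0)
```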
1511.08228#21
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08228
22
# 3.3.1 DROPOUT ON RECURRENT CONNECTIONS Dropout is a widely applied technique for regularizing neural networks. But when applying it to recurrent networks, it has been counter-productive to apply it on recurrent connections – it only worked when applied to the non-recurrent ones, as reported by Pham et al. (2014). Since a Neural GPU does not have non-recurrent connections it might seem that dropout will not be useful for this architecture. Surprisingly, we found the contrary – it is useful and improves generalization. The key to using dropout effectively in this setting is to set a small dropout rate. When we run a grid search for dropout rates we vary them between 6%, 9%, and 13.5%, meaning that over 85% of the values are always preserved. It turns out that even this small dropout has a large effect since we apply it to the whole mental image si in each step i. Presumably the network now learns to include some redundancy in its internal representation and generalization benefits from it.

1The code is at https://github.com/tensorflow/models/tree/master/neural_gpu.
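A minimal sketch of this per-step dropout on the whole mental image (the inverted-dropout scaling here is an assumption, not stated in the paper):

```python
import numpy as np

# Sketch of the small per-step dropout applied to the whole mental image s_i
# (rates of roughly 6-13.5%, as described above).

def dropout_mental_image(s, rate, training=True):
    if not training or rate == 0.0:
        return s
    keep = 1.0 - rate
    # inverted dropout (scaling by 1/keep) is an assumed implementation detail
    mask = (np.random.rand(*s.shape) < keep).astype(s.dtype) / keep
    return s * mask
```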
1511.08228#22
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
23
| Comparison between Agents | Winning Rate (%) | Victory Points | Offers Made | Successful Offers |
|---|---|---|---|---|
| 1 Ran vs. 3 Heu | 00.01 | 2.58 | 133.72 | 122.69 |
| 1 Ran vs. 3 Sup | 00.01 | 2.74 | 143.19 | 131.28 |
| 1 Heu vs. 3 Ran | 98.46 | 10.15 | 41.63 | 18.19 |
| 1 Heu vs. 3 Heu | 25.24 | 6.46 | 149.74 | 140.17 |
| 1 Sup vs. 3 Ran | 97.30 | 10.13 | 45.61 | 19.89 |
| 1 Sup vs. 3 Heu | 27.36 | 6.48 | 144.53 | 134.72 |
| 1 DRLran vs. 3 Ran | 98.31 | 10.16 | 38.52 | 17.08 |
| 1 DRLran vs. 3 Heu | 49.49 | 8.06 | 144.82 | 137.13 |
| 1 DRLran vs. 3 Sup | 39.64 | 7.51 | 154.00 | 146.18 |
| 1 DRLheu vs. 3 Ran | 98.23 | 10.17 | 38.54 | 16.98 |
| 1 DRLheu vs. 3 Heu | 53.36 | 8.22 | 146.84 | 139.12 |
| 1 DRLheu vs. 3 Sup | 41.97 | 7.65 | 157.28 | 149.26 |
| 1 DRLsup vs. 3 Ran | 98.52 | 10.15 | 38.26 | 16.80 |
| 1 DRLsup vs. 3 Heu | 50.29 | 8.14 | 150.62 | 142.66 |
| 1 DRLsup vs. 3 Sup | 41.58 | 7.64 | 156.37 | 147.90 |
1511.08099#23
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
23
effect since we apply it to the whole mental image si in each step i. Presumably the network now learns to include some redundancy in its internal representation and generalization benefits from it. Without dropout we usually see only a few models from a 729 grid search generalize reasonably, while with dropout it is a much larger fraction and they generalize to higher lengths. In particular, dropout was necessary to train models for multiplication that generalize to 2000 bits. 3.3.2 PARAMETER SHARING RELAXATION. To improve optimization of our deep network we use a relaxation technique for shared parameters which works as follows. Instead of training with parameters shared across time-steps we use r identical sets of non-shared parameters (we often use r = 6; larger numbers work better but use more memory). At time-step t of the Neural GPU we use the i-th set if t mod r = i.
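A rough sketch of the relaxation scheme, combining the per-step parameter-set selection above with the relaxation-pull penalty described later in this section (the squared-distance form of the penalty is an assumption):

```python
import numpy as np

# Sketch of parameter sharing relaxation: r parameter sets used round-robin over
# time-steps, pulled towards their mean by a penalty weighted by `relaxation_pull`.

def select_param_set(t, r):
    """At time-step t of the Neural GPU use the (t mod r)-th parameter set."""
    return t % r

def relaxation_penalty(param_sets, relaxation_pull):
    """param_sets: list of r arrays holding the same parameter in each set."""
    mean = np.mean(param_sets, axis=0)
    return relaxation_pull * sum(np.sum((p - mean) ** 2) for p in param_sets)
```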
1511.08228#23
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
24
| Comparison between Agents | Successful Offers | Total Trades | Pieces Built | Cards Bought | Turns p/Game |
|---|---|---|---|---|---|
| 1 Ran vs. 3 Heu | 122.69 | 140.58 | 1.76 | 0.73 | 56.35 |
| 1 Ran vs. 3 Sup | 131.28 | 150.97 | 2.31 | 0.73 | 57.98 |
| 1 Heu vs. 3 Ran | 18.19 | 167.12 | 13.81 | 0.24 | 45.59 |
| 1 Heu vs. 3 Heu | 140.17 | 282.33 | 8.48 | 0.29 | 62.25 |
| 1 Sup vs. 3 Ran | 19.89 | 175.38 | 13.84 | 0.24 | 47.97 |
| 1 Sup vs. 3 Heu | 134.72 | 269.64 | 8.44 | 0.30 | 62.26 |
| 1 DRLran vs. 3 Ran | 17.08 | 203.19 | 13.98 | 0.22 | 45.34 |
| 1 DRLran vs. 3 Heu | 137.13 | 353.98 | 11.04 | 0.27 | 62.72 |
| 1 DRLran vs. 3 Sup | 146.18 | 364.64 | 10.36 | 0.29 | 62.62 |
| 1 DRLheu vs. 3 Ran | 16.98 | 194.68 | 13.85 | 0.23 | 44.35 |
| 1 DRLheu vs. 3 Heu | 139.12 | 343.29 | 11.37 | 0.27 | 61.46 |
| 1 DRLheu vs. 3 Sup | 149.26 | 355.88 | 10.59 | 0.30 | 62.04 |
| 1 DRLsup vs. 3 Ran | 16.80 | 193.31 | 13.81 | 0.23 | 43.88 |
| 1 DRLsup vs. 3 Heu | 142.66 | 348.14 | 11.31 | 0.28 | 62.59 |
| 1 DRLsup vs. 3 Sup | 147.90 | 356.27 | 10.70 | 0.30 | 62.73 |
1511.08099#24
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
24
The procedure described above relaxes the network, as it can now perform different operations in different time-steps. Training becomes easier, but we now have r parameter sets instead of the single shared set we want. To unify them we add a term to the cost function representing the distance of each parameter from the average of this parameter in all the r sets. This term in the final cost function is multiplied by a scalar which we call the relaxation pull. If the relaxation pull is 0, the network behaves as if the r parameter sets were separate, but when it is large, the cost forces the network to unify the parameters across the different sets. During training, we gradually increase the relaxation pull. We start with a small value and every time the curriculum makes progress, e.g., when the model performs well on 6-digit numbers, we multiply the relaxation pull by a relaxation pull factor. When the curriculum reaches the maximal length we average the parameters from all sets and continue to train with a single shared parameter set. This method is crucial for learning multiplication. Without it, a Neural GPU with m = 24 has trouble even fitting the training set, and the few models that manage to do it do not generalize. With relaxation almost all models in our 729 runs manage to fit the training data. # 4 DISCUSSION
1511.08228#24
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
25
Table 2: Evaluation results comparing Deep Reinforcement Learners (DRL) vs. 3 baseline traders (random, heuristic, supervised). Columns 2-7 show average results—of the left-most player—over 10K test games. Notation: DRLran=DRL agent trained vs. random behaviour, DRLheu=DRL agent trained vs. heuristic opponents, and DRLsup=DRL agent trained vs. supervised opponents. • Ran: This agent chooses trading negotiation offers randomly, and replies to offers from opponents also in a random fashion. Although this is a weak baseline, we use it to analyse the impact of policies trained (and tested) against random behaviour. • Heu: This agent chooses trading negotiation offers and replies to offers from opponents as dictated by the heuristic bots included in the JSettlers framework5, see [34, 13] for details.
1511.08099#25
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
25
# 4 DISCUSSION We prepared a video of the Neural GPU trained to solve the tasks mentioned above.2 It shows the state in each step with values of −1 drawn in white, 1 in black, and other values in gray. This gives an intuition of how the Neural GPU solves the discussed problems, e.g., it is quite clear that for the duplication task the Neural GPU learned to move a part of the embedding downwards in each step. What did not work well? For one, using decimal inputs degrades performance. All tasks above can easily be formulated with decimal inputs instead of binary ones. One could hope that a Neural GPU will work well in this case too, maybe with a larger m. We experimented with this formulation and our results were worse than when the representation was binary: we did not manage to learn long decimal multiplication. Increasing m to 128 allows learning all other tasks in the decimal setting.
1511.08228#25
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
26
• Sup: This agent chooses trading negotiation offers using a random forest classifier [3, 14], and replies to offers from opponents using the heuristic behaviour above. This agent was trained from 32 games played between 56 different human players—labelled by multiple annotators. We compute the probability distribution of a human-like trade as P(givable|evidence) = (1/Z) Σ_{b∈B} P_b(givable|evidence), where givable refers to the class prediction (in our case, the givable resource), evidence refers to observed features6, P_b(.|.) is the posterior distribution of the b-th tree, B is the set of trees, and Z is a normalisation constant [4]. This classifier used 100 decision trees. Assuming that Y is a set of givables at a particular point in time in the game, extracting the most human-like trading offer (givable y*) given the collected evidence (context of the game) is defined as y* = arg max_{y∈Y} Pr(y|evidence). The classification accuracy of this statistical classifier was 65.7%—according to a 10-fold cross-validation evaluation [5, 6].
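A minimal sketch of this forest-averaging decision rule (illustrative only; each `tree` here is any callable returning a posterior over the givables, not the paper's actual classifier code):

```python
import numpy as np

# Sketch of the supervised trade-offer selection: average the per-tree posteriors
# P_b(givable | evidence) and pick the most probable givable.

def most_human_like_givable(trees, evidence, givables):
    # averaging over the |B| trees corresponds to (1/Z) * sum_b P_b(. | evidence)
    posterior = np.mean([tree(evidence) for tree in trees], axis=0)
    return givables[int(np.argmax(posterior))]
```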
1511.08099#26
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
26
Another problem is that often only a few models in a 729-instance grid search generalize to very long unseen instances. Among those 729 models, there usually are many that generalize to 40 or even 200 bits, but only a few work without error for 2000-bit numbers. Using dropout and gradient noise improves the reliability of training and generalization, but maybe another technique could help even more. How could we make more models achieve good generalization? One idea that looks natural is to try to reduce the number of parameters by decreasing m. Surprisingly, this does not seem to have any influence. In addition to the m = 24 results presented above we ran experiments with m = 32, 64, 128 and the results were similar. In fact, using m = 128 we got the most models to generalize. Additionally, we observed that ensembling a few models, just by averaging their outputs, helps to generalize: ensembles of 5 models almost always generalize perfectly on binary tasks.
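A minimal sketch of output-averaging ensembling (the `predict_proba` method name is a hypothetical stand-in for whatever per-position symbol distribution a trained model exposes):

```python
import numpy as np

# Sketch of ensembling by averaging outputs: average the per-position symbol
# probabilities of a few independently trained models, then take the argmax.

def ensemble_decode(models, inputs):
    probs = np.mean([m.predict_proba(inputs) for m in models], axis=0)  # (length, vocab)
    return probs.argmax(axis=-1)
```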
1511.08228#26
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
27
We trained three DRL agents against random, heuristic and supervised opponents—see Figure 2—each trained with 500K experiences (around 2000 games per learning curve). We evaluate the learnt policies according to a cross-evaluation using the following metrics in terms of averages per game (using 10 thousand test games per comparison): win-rate, victory points, (successful) offers, total trades, pieces built, cards bought, and number of turns. Our observations of the cross-evaluation, reported in Table 2, are as follows: 1. The DRL agents acquire very competitive strategic behaviour in comparison to the other types of agents—they simply win substantially more than their opponents. While random behaviour is easy to beat with over 98% win-rate, the DRL agents achieve over 50% win-rate against heuristic opponents and over 40% against supervised opponents. These results substantially outperform the heuristic and supervised agents, which achieve less than 30% win-rate (at p < 0.05 according to a two-tailed Wilcoxon signed-rank test). 5The baseline trading agent referred to as ‘heuristic’ included the following parameters, see [13]: TRY N BEST BUILD PLANS:0, FAVOUR DEV CARDS:-5. 6Evidence: Number of resources available, number of builds (roads, settlements and cities), and the resource received.
1511.08099#27
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
27
Why use width? The Neural GPU is defined using two-dimensional convolutions and in our experiments one of the dimensions is always set to 4. Doing so is not necessary since a one-dimensional Neural GPU that uses a four times larger m can represent every function representable by the original one. In fact we trained a model for long binary multiplication that generalized to 2000-bit numbers using a Neural GPU with width 1 and m = 64. However, the width of the Neural GPU increases the amount of information carried in its hidden state without increasing the number of its parameters. Thus it can be thought of as a factorization and might be useful for other tasks. Speed and data efficiency. Neural GPUs use the standard, heavily optimized convolution operation and are fast. We experimented with a 2-layer Neural GPU for n = 32 and m = 64. After unfolding in time it has 128 layers of CGRUs, each operating on 32 mental images, each of size 4 × 64 × 64. The joint forward-backward step time for this network was about 0.6s on an NVIDIA GTX 970 GPU.

2The video is available at https://www.youtube.com/watch?v=LzC8NkTZAF4
1511.08228#27
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
28
6Evidence: Number of resources available, number of builds (roads, settlements and cities), and the resource received.

[Figure 2: three panels—DRL Agent vs. Heuristic Opponents, DRL Agent vs. Supervised Opponents, and DRL Agent vs. Random Behaviour—each plotting Average Reward against Learning Steps (no. experiences) from 0 to 500000.]

Figure 2: Learning curves of Deep Reinforcement Learners (DRLs) against random, heuristic and supervised opponents. It can be observed that DRL agents can learn from different types of opponents—even from randomly behaving ones.
1511.08099#28
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
28
We were also surprised by how data-efficient a Neural GPU can be. The experiments presented above were all performed using 10k random training data examples for each training length. Since we train on up to 20-bit numbers this adds up to about 200k training examples. We tried to train using only 100 examples per length, so about 2000 total training instances. We were surprised to see that it actually worked well for binary addition: there were models that generalized well to 200-bit numbers and to all lengths below, despite such a small training set. But we never managed to train a good model for binary multiplication with that little training data. # 5 CONCLUSIONS AND FUTURE WORK The results presented in Table 1 show clearly that there is a qualitative difference between what can be achieved with a Neural GPU and what was possible with previous architectures. In particular, for the first time, we show a neural network that learns a non-trivial superlinear-time algorithm in a way that generalizes to much longer inputs without errors.
1511.08228#28
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
29
2. The DRL agents outperform the baselines not just in win-rates but also in other metrics such as average victory points, pieces built and total trades. The latter is most prominent: for example, while the heuristic and supervised agents achieve between 270 and 280 trades per game, the DRL agents compared against heuristic and supervised agents achieve between 340 and 360 trades. This means that the DRL agents tend to trade more than their opponents, i.e. they accept more offered trading negotiations. These differences suggest that knowing when to accept, reject or counter-offer a trading negotiation is crucial for winning. 3. Training a DRL agent in the environment where it will be tested is better than training and testing across environments. For example, DRLheu versus heuristic behaviour is better (53.4% win-rate) than DRLsup versus heuristic behaviour (50.3% win-rate). However, our results report that DRL agents trained using randomly behaving opponents are almost as good as those trained with stronger opponents. This suggests that DRL agents for strategic interaction can also be trained without highly skilled opponents, presumably by tracking their rewards over time. 4. The DRL agents find the supervised agent harder to beat. This is because the supervised agent is the strongest baseline, which achieves the best winning rate of the baseline agents. It can be noted that the DRL agents versus supervised behaviour make more offers and trade more than
1511.08099#29
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
29
This opens the way to use neural networks in domains that were previously only addressed by discrete methods, such as program synthesis. With the surprising data efficiency of Neural GPUs it could even be possible to replicate previous program synthesis results, e.g., Kaiser (2012), but in a more scalable way. It is also interesting that a Neural GPU can learn symbolic algorithms without using any discrete state at all, and adding dropout and noise only improves its performance. Another promising direction for future work is to apply Neural GPUs to language processing tasks. Good results have already been obtained on translation with a convolutional architecture over words (Kalchbrenner & Blunsom, 2013), and adding gating and recursion, as in a Neural GPU, should allow training much deeper models without overfitting. Finally, the parameter sharing relaxation technique can be applied to any deep recurrent network and has the potential to improve RNN training in general. # REFERENCES Angluin, Dana. Learning regular sets from queries and counterexamples. Information and Computation, 75: 87–106, 1987. Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473.
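The parameter sharing relaxation mentioned above is only named in this excerpt, so the sketch below is a hedged reconstruction rather than the paper's recipe: keep several copies of the recurrent parameters, use them cyclically across the unrolled time steps, and add an annealed penalty pulling every copy toward their mean until they effectively become one shared parameter set. The number of copies and the annealing schedule are assumptions.

```python
import numpy as np

def copy_for_step(param_copies, t):
    # Time step t of the unrolled network uses parameter copy t mod r,
    # instead of a single set of weights shared by every step.
    return param_copies[t % len(param_copies)]

def relaxation_penalty(param_copies, pull_strength):
    # Extra loss term pulling every copy toward the mean of all copies.
    # pull_strength would be increased during training until the copies
    # effectively collapse into shared parameters; the schedule and the
    # number of copies here are assumptions, not taken from this excerpt.
    mean = np.mean(param_copies, axis=0)
    return pull_strength * sum(float(np.sum((p - mean) ** 2)) for p in param_copies)

# Example with r = 3 copies of a small weight matrix.
copies = [np.random.randn(4, 4) for _ in range(3)]
print(copy_for_step(copies, t=7).shape, relaxation_penalty(copies, 0.1))
```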
1511.08228#29
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
30
the DRL agents versus heuristic behaviour. We can infer from this result that knowing when to offer and when to trade seems crucial for better winning rates. 5. The fact that the agent with random behaviour hardly wins any games suggests that sequential decision-making in this strategic game is far from trivial. In summary, strategic dialogue agents trained with deep reinforcement learning have the potential to acquire highly competitive behaviour, not just from training against strong opponents but even from opponents with random behaviour. This result may help to reduce the resources (heuristics or labelled data) required for training future strategic agents. # 5 Related Work
1511.08099#30
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
30
Blumensath, Achim and Grädel, Erich. Automatic Structures. In Proceedings of LICS 2000, pp. 51–62, 2000. URL http://www.logic.rwth-aachen.de/pub/graedel/BlGr-lics00.ps. Chan, William, Jaitly, Navdeep, Le, Quoc V., and Vinyals, Oriol. Listen, attend and spell. In International Conference on Acoustics, Speech and Signal Processing, ICASSP’16, 2016. Cho, Kyunghyun, van Merrienboer, Bart, Gulcehre, Caglar, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014. URL http://arxiv.org/abs/1406.1078. Chung, Junyoung, Gülçehre, Çağlar, Cho, Kyunghyun, and Bengio, Yoshua. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014. URL http://arxiv.org/abs/1412.3555.
1511.08228#30
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
31
Reinforcement learning applied to strategic interaction includes the following. [32] proposes reinforcement learning with multilayer neural networks for training an agent to play the game of Backgammon. He finds that agents trained with such an approach are able to match and even beat human performance. [26] proposes hierarchical reinforcement learning for automatic decision making on object-placing and trading actions in the game of Settlers of Catan. He incorporates built-in knowledge for learning the behaviours of the game quicker, and finds that the combination of learned and built-in knowledge is able to beat human players. [11] used reinforcement learning in non-cooperative dialogue, and focus on a small 2-player trading problem with 3 resource types, but without using any real human dialogue data. This work showed that explicit manipulation moves (e.g. “I really need sheep”) can be used to win when playing against adversaries who are gullible (i.e. they believe such statements) but also against adversaries who can detect manipulation and can punish the player for being manipulative [10]. More recently, [16] designed an MDP model for selecting trade
1511.08099#31
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
31
Dahl, George E., Yu, Dong, Deng, Li, and Acero, Alex. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech & Language Processing, 20(1):30–42, 2012. Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. CoRR, abs/1410.5401, 2014. URL http://arxiv.org/abs/1410.5401. Grefenstette, Edward, Hermann, Karl Moritz, Suleyman, Mustafa, and Blunsom, Phil. Learning to transduce with unbounded memory. CoRR, abs/1506.02516, 2015. URL http://arxiv.org/abs/1506.02516. Greff, Klaus, Srivastava, Rupesh Kumar, Koutník, Jan, Steunebrink, Bas R., and Schmidhuber, Jürgen. LSTM: A search space odyssey. CoRR, abs/1503.04069, 2015. URL http://arxiv.org/abs/1503.04069.
1511.08228#31
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
32
who can detect manipulation and can punish the player for being manipulative [10]. More recently, [16] designed an MDP model for selecting trade offers, trained and evaluated within the full jSettlers environment (4 players, 5 resource types). In comparison to the DRL model, it had a much more restricted state-action space, leading to significant but more modest improvements over supervised learning and hand-coded baselines.
1511.08099#32
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
32
Gulwani, Sumit. Dimensions in program synthesis. In Proceedings of PPDP 2010, PPDP ’10, pp. 13–24, 2010. Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. CoRR, abs/1503.01007, 2015. URL http://arxiv.org/abs/1503.01007. Kaiser, Łukasz. Learning games from videos guided by descriptive complexity. In Proceedings of the AAAI-12, pp. 963–970. AAAI Press, 2012. URL http://goo.gl/mRbfV5. Kalchbrenner, Nal and Blunsom, Phil. Recurrent continuous translation models. In Proceedings EMNLP 2013, pp. 1700–1709, 2013. URL http://nal.co/papers/KalchbrennerBlunsom_EMNLP13.
1511.08228#32
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
33
Other related work has been carried out in the context of automated non-cooperative dialogue systems, where an agent may act to satisfy its own goals rather than those of other participants [12]. The game-theoretic underpinnings of non-cooperative behaviour have also been investigated [1]. Such automated agents are of interest when trying to persuade, argue, or debate, or in the area of believable characters in video games and educational simulations [12, 28]. Another arena in which strategic conversational behaviour has been investigated is negotiation [35], where hiding information (and even outright lying) can be advantageous.
1511.08099#33
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
33
Kalchbrenner, Nal, Danihelka, Ivo, and Graves, Alex. Grid long short-term memory. In International Conference on Learning Representations, 2016. URL http://arxiv.org/abs/1507.01526. Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980. Kitzelmann, Emanuel. Inductive programming: A survey of program synthesis techniques. In Approaches and Applications of Inductive Programming, AAIP 2009, volume 5812 of LNCS, pp. 50–73, 2010. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012. Lavin, Andrew and Gray, Scott. Fast algorithms for convolutional neural networks. CoRR, abs/1509.09308, 2015. URL http://arxiv.org/abs/1509.09308.
1511.08228#33
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
34
Recent work on deep learning applied to games includes the following. [19] train a deep convolutional network for the game of Go, but it is trained in a supervised fashion rather than trained to maximise a long-term reward as in this work. A closely related work to ours is a DRL agent for text-based games [24]. Their states are based on words, their policies are induced using game-based rewards, and their actions are based on directions such as ‘go east/west/south/north’. Another closely related work to ours is DRL agents trained to play ATARI games [21]. Their states are based on pixels from down-sampled images, their policies make use of game-based rewards, and their actions are based on joystick movements. In contrast to these previous works, which are based on navigation commands, our agents use trading dialogue moves (e.g. ‘I will give you ore and sheep for clay’, or ‘I accept/decline your offer’), which are essential behaviours for strategic interaction.
1511.08099#34
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
34
Pham, Vu, Bluche, Théodore, Kermorvant, Christopher, and Louradour, Jérôme. Dropout improves recurrent neural networks for handwriting recognition. In International Conference on Frontiers in Handwriting Recognition (ICFHR), pp. 285–290. IEEE, 2014. URL http://arxiv.org/pdf/1312.4569.pdf. Shi, Xingjian, Chen, Zhourong, Wang, Hao, Yeung, Dit-Yan, Wong, Wai-kin, and Woo, Wang-chun. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems, 2015. URL http://arxiv.org/abs/1506.04214. Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. CoRR, abs/1505.00387, 2015. URL http://arxiv.org/abs/1505.00387.
1511.08228#34
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
35
This paper extends the recent work above on training strategic agents using reinforcement learning, which has either used small state-action spaces or focused on navigation commands rather than negotiation dialogue. The learning agents described in this paper use a high-dimensional state representation (160 non-binary features) and a fairly large action space (73 actions) for learning strategic non-cooperative dialogue behaviour. To our knowledge, our results are the highest winning rates reported to date in the game of Settlers of Catan, see [13, 16, 9]. The comprehensive evaluation reported in the previous section is evidence to argue that deep reinforcement learning is a promising framework for training strategic interactive agents. # 6 Concluding Remarks
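As a concrete illustration of the state-action dimensions quoted above (160 non-binary features, 73 actions), here is a toy fully connected Q-network sketch. The hidden size, depth, activation and initialisation are assumptions; the actual network used in the paper is not specified in this excerpt.

```python
import numpy as np

def init_qnet(n_in=160, n_hidden=64, n_out=73, seed=0):
    # A toy Q-network matching only the input/output sizes given in the text.
    rng = np.random.default_rng(seed)
    return {
        'W1': rng.normal(0, 0.1, (n_in, n_hidden)), 'b1': np.zeros(n_hidden),
        'W2': rng.normal(0, 0.1, (n_hidden, n_out)), 'b2': np.zeros(n_out),
    }

def q_values(net, state):
    # state: vector of 160 (normalised) game features -> 73 Q-values.
    h = np.maximum(0.0, state @ net['W1'] + net['b1'])   # ReLU hidden layer
    return h @ net['W2'] + net['b2']

net = init_qnet()
state = np.random.rand(160)
print(q_values(net, state).shape)   # (73,)
```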
1511.08099#35
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
35
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014. URL http://arxiv.org/abs/1409.3215. Toderici, George, O’Malley, Sean M., Hwang, Sung Jin, Vincent, Damien, Minnen, David, Baluja, Shumeet, Covell, Michele, and Sukthankar, Rahul. Variable rate image compression with recurrent neural networks. URL http://arxiv.org/abs/1511.06085. Vinyals & Kaiser, Koo, Petrov, Sutskever, and Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems, 2015. URL http://arxiv.org/abs/1412.7449. Vinyals, Oriol, Toshev, Alexander, Bengio, Samy, and Erhan, Dumitru. Show and tell: A neural image caption generator. CoRR, abs/1411.4555, 2014. URL http://arxiv.org/abs/1411.4555.
1511.08228#35
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
36
# 6 Concluding Remarks The contribution of this paper is the first application of Deep Reinforcement Learning (DRL) to optimising the behaviour of strategic conversational agents. Our learning agents are able to: (i) discover what trading negotiations to offer; (ii) discover when to accept, reject, or counter-offer; (iii) discover strategic behaviours based on constrained action sets—i.e. action selection from legal actions rather than from all of them; and (iv) learn highly competitive behaviour against different types of opponents. All of this is supported by a comprehensive evaluation of three DRL agents trained against three baselines (random, heuristic and supervised), which are analysed from a cross-evaluation perspective. Our experimental results report that all DRL agents substantially outperform all the baseline agents. Our results are evidence to argue that DRL is a promising framework for training the behaviour of complex strategic interactive agents. Future work can, for example, carry out similar evaluations in other strategic environments, and can also extend the abilities of the agents with other strategic features [18] and forms of learning [7, 27]. In addition, a comparison of different model architectures, training parameters and reward functions can be explored in future work. Last but not least, given that our learning agents trade at the semantic level, they can be extended with language understanding/generation abilities to communicate verbally [17, 8]. # Acknowledgments
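Point (iii) above, action selection from legal actions rather than from the full action set, can be illustrated with a short hedged sketch: mask the Q-values of illegal actions before taking the greedy (or epsilon-greedy) choice. The action-set size comes from the surrounding text; the epsilon value and the masking mechanism are common-practice assumptions, not details taken from the paper.

```python
import random
import numpy as np

NUM_ACTIONS = 73  # size of the dialogue action set reported in the text

def select_action(q_values, legal_actions, epsilon=0.1):
    # Epsilon-greedy choice restricted to the legal actions of the current
    # game state, instead of the full 73-way action set. q_values is the
    # Q-network output for the current state; the network itself (e.g. over
    # the 160 game features mentioned earlier) is not shown here.
    if random.random() < epsilon:
        return random.choice(legal_actions)
    masked = np.full(NUM_ACTIONS, -np.inf)
    masked[legal_actions] = q_values[legal_actions]
    return int(np.argmax(masked))

# Example: 73 random Q-values, only three actions currently legal.
example_q = np.random.randn(NUM_ACTIONS)
print(select_action(example_q, legal_actions=[0, 12, 41]))
```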
1511.08099#36
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08228
36
URL http://www.liafa.univ-paris-diderot.fr/~yunes/ca/archives/bookvivien.pdf. Welling, Max and Teh, Yee Whye. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of ICML 2011, pp. 681–688, 2011. Zaremba, Wojciech and Sutskever, Ilya. Learning to execute. CoRR, abs/1410.4615, 2015a. URL http://arxiv.org/abs/1410.4615. Zaremba, Wojciech and Sutskever, Ilya. Reinforcement learning neural Turing machines. CoRR, abs/1505.00521, 2015b. URL http://arxiv.org/abs/1505.00521.
1511.08228#36
Neural GPUs Learn Algorithms
Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
http://arxiv.org/pdf/1511.08228
Łukasz Kaiser, Ilya Sutskever
cs.LG, cs.NE
null
null
cs.LG
20151125
20160315
[]
1511.08099
37
# Acknowledgments Funding from the European Research Council (ERC) project “STAC: Strategic Conversation” no. 269427 is gratefully acknowledged, see http://www.irit.fr/STAC/. Funding from the EPSRC, project EP/M01553X/1 “BABBLE”, is gratefully acknowledged, see https://sites.google.com/site/hwinteractionlab/babble. # References [1] N. Asher and A. Lascarides. Commitments, beliefs and intentions in dialogue. In Proc. of SemDial, 2008. [2] N. Asher and A. Lascarides. Strategic conversation. Semantics and Pragmatics, 6(2):1–62, 2013. [3] L. Breiman. Random forests. Machine Learning, 45(1), 2001. [4] A. Criminisi, J. Shotton, and E. Konukoglu. Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Foundations and Trends in Computer Graphics and Vision, 7(2-3), 2012.
1511.08099#37
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08099
38
[5] H. Cuayáhuitl, S. Keizer, and O. Lemon. Learning to trade in strategic board games. In IJCAI Workshop on Computer Games (IJCAI-CGW), 2015. [6] H. Cuayáhuitl, S. Keizer, and O. Lemon. Learning trading negotiations using manually and automatically labelled data. In International Conference on Tools with Artificial Intelligence (ICTAI), 2015. [7] H. Cuayáhuitl, M. van Otterlo, N. Dethlefs, and L. Frommberger. Machine learning for interactive systems and robots: A brief introduction. In Proceedings of the 2nd Workshop on Machine Learning for Interactive Systems: Bridging the Gap Between Perception, Action and Communication, MLIS ’13, New York, NY, USA, 2013. ACM. [8] N. Dethlefs and H. Cuayáhuitl. Hierarchical reinforcement learning for situated natural language generation. Natural Language Engineering, 21, 5 2015. [9] M. S. Dobre and A. Lascarides. Online learning and mining human play in complex games. In IEEE Conference on Computational Intelligence and Games, CIG, 2015.
1511.08099#38
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08099
39
[10] I. Efstathiou and O. Lemon. Learning to manage risk in non-cooperative dialogues. In SemDial, 2014. [11] I. Efstathiou and O. Lemon. Learning non-cooperative dialogue behaviours. In SIGDIAL, 2014. [12] K. Georgila and D. Traum. Reinforcement learning of argumentation dialogue policies in negotiation. In Proc. of INTERSPEECH, 2011. [13] M. Guhe and A. Lascarides. Game strategies for The Settlers of Catan. In 2014 IEEE Conference on Computational Intelligence and Games, CIG, 2014. [14] T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning: data mining, inference and prediction. Springer, 2 edition, 2009. [15] A. Karpathy. ConvNetJS: Javascript library for deep learning. http://cs.stanford.edu/people/karpathy/convnetjs/, 2015. [16] S. Keizer, H. Cuayáhuitl, and O. Lemon. Learning Trade Negotiation Policies in Strategic Conversation. In Workshop on the Semantics and Pragmatics of Dialogue: goDIAL, 2015. [17] O. Lemon. Adaptive Natural Language Generation in Dialogue using Reinforcement Learning. In Proc. SEMDIAL, 2008.
1511.08099#39
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08099
40
[17] O. Lemon. Adaptive Natural Language Generation in Dialogue using Reinforcement Learning. In Proc. SEMDIAL, 2008. [18] R. Lin and S. Kraus. Can automated agents proficiently negotiate with humans? Commun. ACM, 53(1), Jan. 2010. [19] C. J. Maddison, A. Huang, I. Sutskever, and D. Silver. Move evaluation in go using deep convolutional neural networks. CoRR, abs/1412.6564, 2014. [20] M. McFarlin. 10 great board games for traders. Futures Magazine, http://www.futuresmag.com/2013/10/02/10-great-board-games-for-traders. 2013. [21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. In NIPS Deep Learning Workshop. 2013.
1511.08099#40
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08099
41
[22] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 02 2015. [23] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML), pages 807–814, 2010. [24] K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, September 2015. [25] A. Papangelis and K. Georgila. Reinforcement Learning of Multi-Issue Negotiation Dialogue Policies. In Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGdial), 2015.
1511.08099#41
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08099
42
[26] M. Pfeiffer. Reinforcement learning of strategies for Settlers of Catan. In International Conference on Computer Games: Artificial Intelligence, Design and Education, 2004. [27] O. Pietquin and M. Lopez. Machine learning for interactive systems: Challenges and future trends. In Proceedings of the Workshop Affect, Compagnon Artificiel (WACAI), 2014. [28] J. Shim and R. Arkin. A Taxonomy of Robot Deception and its Benefits in HRI. In Proc. IEEE Systems, Man, and Cybernetics Conference, 2013. [29] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. [30] C. Szepesvári. Algorithms for Reinforcement Learning. Morgan and Claypool Publishers, 2010. [31] I. Szita, G. Chaslot, and P. Spronck. Monte-Carlo tree search in Settlers of Catan. In Proceedings of the 12th International Conference on Advances in Computer Games, ACG’09, 2010. [32] G. Tesauro. Temporal difference learning and TD-Gammon. Commun. ACM, 38(3), 1995.
1511.08099#42
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.08099
43
[32] G. Tesauro. Temporal difference learning and TD-Gammon. Commun. ACM, 38(3), 1995. [33] R. Thomas and K. J. Hammond. Java Settlers: a research environment for studying multi-agent negotiation. In Intelligent User Interfaces (IUI), pages 240–240, 2002. [34] R. S. Thomas. Real-time decision making for adversarial environments using a plan-based heuristic. PhD thesis, Northwestern University, 2003. [35] D. Traum. Extended abstract: Computational models of non-cooperative dialogue. In Proc. of SIGdial Workshop on Discourse and Dialogue, 2008.
1511.08099#43
Strategic Dialogue Management via Deep Reinforcement Learning
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan---where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (`bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
http://arxiv.org/pdf/1511.08099
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
cs.AI, cs.LG
NIPS'15 Workshop on Deep Reinforcement Learning
null
cs.AI
20151125
20151125
[]
1511.07289
1
We introduce the “exponential linear unit” (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only
1511.07289#1
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
2
code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
1511.07289#2
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
3
# INTRODUCTION Currently the most popular activation function for neural networks is the rectified linear unit (ReLU), which was first proposed for restricted Boltzmann machines (Nair & Hinton, 2010) and then successfully used for neural networks (Glorot et al., 2011). The ReLU activation function is the identity for positive arguments and zero otherwise. Besides producing sparse codes, the main advantage of ReLUs is that they alleviate the vanishing gradient problem (Hochreiter, 1998; Hochreiter et al., 2001) since the derivative of 1 for positive values is not contractive (Glorot et al., 2011). However ReLUs are non-negative and, therefore, have a mean activation larger than zero. Units that have a non-zero mean activation act as bias for the next layer. If such units do not cancel each other out, learning causes a bias shift for units in the next layer. The more the units are correlated, the higher their bias shift. We will see that Fisher optimal learning, i.e., the natural gradient (Amari, 1998), would correct for the bias shift by adjusting the weight updates. Thus, less bias shift brings the standard gradient closer to the natural gradient and speeds up learning. We aim at activation functions that push activation means closer to zero to decrease the bias shift effect.
1511.07289#3
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
4
Centering the activations at zero has been proposed in order to keep the off-diagonal entries of the Fisher information matrix small (Raiko et al., 2012). For neural networks it is known that centering the activations speeds up learning (LeCun et al., 1991; 1998; Schraudolph, 1998). “Batch normalization” also centers activations with the goal to counter the internal covariate shift (Ioffe & Szegedy, 2015). Also the Projected Natural Gradient Descent algorithm (PRONG) centers the activations by implicitly whitening them (Desjardins et al., 2015).
1511.07289#4
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
5
An alternative to centering is to push the mean activation toward zero by an appropriate activation function. Therefore tanh has been preferred over logistic functions (LeCun et al., 1991; 1998). Recently “Leaky ReLUs” (LReLUs) that replace the negative part of the ReLU with a linear function have been shown to be superior to ReLUs (Maas et al., 2013). Parametric Rectified Linear Units (PReLUs) generalize LReLUs by learning the slope of the negative part which yielded improved learning behavior on large image benchmark data sets (He et al., 2015). Another variant are Randomized Leaky Rectified Linear Units (RReLUs) which randomly sample the slope of the negative part which raised the performance on image benchmark datasets and convolutional networks (Xu et al., 2015).
1511.07289#5
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
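The chunk 1511.07289#5 above surveys several ReLU variants. As a rough illustrative sketch (not taken from the paper; the slope value for the LReLU and the RReLU sampling range are assumed defaults), they can be written in NumPy as:

```python
import numpy as np

def relu(x):
    # Identity for positive arguments, zero otherwise.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.1):
    # LReLU: the negative part of the ReLU is replaced by a linear function with slope alpha.
    return np.where(x > 0, x, alpha * x)

def prelu(x, alpha):
    # PReLU: same functional form as LReLU, but alpha is a learned parameter.
    return np.where(x > 0, x, alpha * x)

def rrelu(x, lower=0.125, upper=0.333, training=True, rng=None):
    # RReLU: the negative slope is sampled uniformly during training and fixed
    # to its expectation at test time (the range values here are assumptions).
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.uniform(lower, upper, size=x.shape) if training else (lower + upper) / 2.0
    return np.where(x > 0, x, alpha * x)
```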
1511.07289
6
In contrast to ReLUs, activation functions like LReLUs, PReLUs, and RReLUs do not ensure a noise-robust deactivation state. We propose an activation function that has negative values to allow for mean activations close to zero, but which saturates to a negative value with smaller arguments. The saturation decreases the variation of the units if deactivated, so the precise deactivation argument is less relevant. Such an activation function can code the degree of presence of particular phenomena in the input, but does not quantitatively model the degree of their absence. Therefore, such an activation function is more robust to noise. Consequently, dependencies between coding units are much easier to model and much easier to interpret since only activated code units carry much information. Furthermore, distinct concepts are much less likely to interfere with such activation functions since the deactivation state is non-informative, i.e. variance decreasing. # 2 BIAS SHIFT CORRECTION SPEEDS UP LEARNING
1511.07289#6
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
7
# 2 BIAS SHIFT CORRECTION SPEEDS UP LEARNING To derive and analyze the bias shift effect mentioned in the introduction, we utilize the natural gradient. The natural gradient corrects the gradient direction with the inverse Fisher information matrix and, thereby, enables Fisher optimal learning, which ensures the steepest descent in the Riemannian parameter manifold and Fisher efficiency for online learning (Amari, 1998). The recently introduced Hessian-Free Optimization technique (Martens, 2010) and the Krylov Subspace Descent methods (Vinyals & Povey, 2012) use an extended Gauss-Newton approximation of the Hessian, therefore they can be interpreted as versions of natural gradient descent (Pascanu & Bengio, 2014).
1511.07289#7
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
8
Since for neural networks the Fisher information matrix is typically too expensive to compute, different approximations of the natural gradient have been proposed. Topmoumoute Online natural Gradient Algorithm (TONGA) (LeRoux et al., 2008) uses a low-rank approximation of natural gradient descent. FActorized Natural Gradient (FANG) (Grosse & Salakhudinov, 2015) estimates the natural gradient via an approximation of the Fisher information matrix by a Gaussian graphical model. The Fisher information matrix can be approximated by a block-diagonal matrix, where unit or quasi-diagonal natural gradients are used (Olivier, 2013). Unit natural gradients or “Unitwise Fisher’s scoring” (Kurita, 1993) are based on natural gradients for perceptrons (Amari, 1998; Yang & Amari, 1998). We will base our analysis on the unit natural gradient.
1511.07289#8
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
9
We assume a parameterized probabilistic model p(x; w) with parameter vector w and data x. The training data are X = (x_1, ..., x_N) ∈ R^((d+1)×N) with x_n = (z_n^T, y_n)^T ∈ R^(d+1), where z_n is the input for example n and y_n is its label. L(p(.; w), x) is the loss of example x = (z^T, y)^T using model p(.; w). The average loss on the training data X is the empirical risk R_emp(p(.; w), X). Gradient descent updates the weight vector w by w^new = w^old − η ∇_w R_emp, where η is the learning rate. The natural gradient is the inverse Fisher information matrix F^{-1} multiplied by the gradient of the empirical risk: ∇_w^nat R_emp = F^{-1} ∇_w R_emp. For a multi-layer perceptron a_i is the unit activation vector and a_0 = 1 is the bias unit activation. We consider the ingoing weights to unit i, therefore we drop the index i: w_j = w_{ij} for the weight from unit j to unit i, a = a_i for the activation, and w_0 for the bias weight of unit i. The activation function f maps the net input net = Σ_j w_j a_j of unit i to its activation a = f(net).
1511.07289#9
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
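To make the update rules in chunk 1511.07289#9 above concrete, here is a small NumPy sketch contrasting a plain gradient step with a natural-gradient step; the squared-error loss, the synthetic data, and the empirical-Fisher approximation are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 5))                      # inputs z_n
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = Z @ w_true + 0.1 * rng.normal(size=200)        # labels y_n
w = np.zeros(5)                                    # parameter vector w
eta = 0.1                                          # learning rate eta

# Per-example gradients of a squared-error loss (illustrative choice of loss L).
residual = Z @ w - y
per_example_grads = residual[:, None] * Z          # shape (N, d)
g = per_example_grads.mean(axis=0)                 # gradient of the empirical risk R_emp

# Standard gradient descent: w_new = w_old - eta * g
w_gd = w - eta * g

# Natural gradient: premultiply by the inverse (empirical) Fisher matrix F.
F = per_example_grads.T @ per_example_grads / len(Z)
w_nat = w - eta * np.linalg.solve(F + 1e-6 * np.eye(5), g)

print(w_gd, w_nat)
```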
1511.07289
11
which can be computed via backpropagation, but using the log-output probability instead of the conventional loss function. The derivative is ∂ ln p(z; w)/∂w_j = δ a_j. We restrict the Fisher information matrix to weights leading to unit i, which is the unit Fisher information matrix F. F captures only the interactions of weights to unit i. Consequently, the unit natural gradient only corrects the interactions of weights to unit i, i.e. considers the Riemannian parameter manifold only in a subspace. The unit Fisher information matrix is [F(w)]_kj = E_p(z)( (∂ ln p(z; w)/∂w_k) (∂ ln p(z; w)/∂w_j) ) = E_p(z)(δ² a_k a_j). (1) Weighting the activations by δ² is equivalent to adjusting the probability of drawing inputs z. Inputs z with large δ² are drawn with higher probability. Since 0 ≤ δ² = δ²(z), we can define a distribution q(z): q(z) = δ²(z) p(z) (∫ δ²(z) p(z) dz)^{-1} = δ²(z) p(z) E_p(z)^{-1}(δ²). (2) Using q(z), the entries of F can be expressed as second moments:
1511.07289#11
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
12
Using q(z), the entries of F can be expressed as second moments: [F(w)]_kj = E_p(z)(δ² a_k a_j) = ∫ δ² a_k a_j p(z) dz = E_p(z)(δ²) E_q(z)(a_k a_j). (3) If the bias unit is a_0 = 1 with weight w_0, then the weight vector can be divided into a bias part w_0 and the rest w: (w^T, w_0)^T. For the row b = [F(w)]_0 that corresponds to the bias weight, we have: b = E_p(z)(δ² a) = E_p(z)(δ²) E_q(z)(a) = Cov_p(z)(δ², a) + E_p(z)(a) E_p(z)(δ²). (4) The next Theorem 1 gives the correction of the standard gradient by the unit natural gradient where the bias weight is treated separately (see also Yang & Amari (1998)). Theorem 1. The unit natural gradient corrects the weight update (Δw^T, Δw_0)^T to a unit i by the following affine transformation of the gradient ∇_(w^T, w_0)^T R_emp = (g^T, g_0)^T: Δw = A^{-1}(g − Δw_0 b)
1511.07289#12
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
13
(Δw^T, Δw_0)^T: Δw = A^{-1}(g − Δw_0 b), Δw_0 = s (g_0 − b^T A^{-1} g), (5) where A = [F(w)]_{¬0,¬0} = E_p(z)(δ²) E_q(z)(a a^T) is the unit Fisher information matrix without row 0 and column 0 corresponding to the bias weight. The vector b = [F(w)]_0 is the zeroth column of F corresponding to the bias weight, and the positive scalar s is s = E_p(z)^{-1}(δ²) (1 + E_q(z)^T(a) Var_q(z)^{-1}(a) E_q(z)(a)), (6) where a is the vector of activations of units with weights to unit i and q(z) = δ²(z) p(z) E_p(z)^{-1}(δ²). Proof. Multiplying the inverse Fisher matrix F^{-1} with the separated gradient ∇_(w^T, w_0)^T R_emp(p(.; w), X) = (g^T, g_0)^T gives the weight update (Δw^T, Δw_0)^T: Δw = A^{-1} g + s^{-1} (u^T g) u + g_0 u, Δw_0 = u^T g + s g_0, (7) where
1511.07289#13
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
14
Δw = A^{-1} g + s^{-1} (u^T g) u + g_0 u, Δw_0 = u^T g + s g_0, where b = [F(w)]_0, c = [F(w)]_00, u = −s A^{-1} b, s = (c − b^T A^{-1} b)^{-1}. (8) The previous formula is derived in Lemma 1 in the appendix. Using Δw_0 in the update gives Δw = A^{-1} g + s^{-1} u Δw_0 = A^{-1}(g − Δw_0 b), Δw_0 = u^T g + s g_0 = s (g_0 − b^T A^{-1} g). (9) The right hand side is obtained by inserting u = −s A^{-1} b in the left hand side update. Since c = F_00 = E_p(z)(δ²), b = E_p(z)(δ²) E_q(z)(a), and A = E_p(z)(δ²) E_q(z)(a a^T), we obtain
1511.07289#14
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
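The block quantities u and s used in the proof (chunk 1511.07289#14) come from the standard block-matrix inverse. A quick NumPy check with an arbitrary symmetric positive definite matrix (my own construction, purely illustrative) confirms that the update assembled from A, b, c, u and s agrees with applying F^{-1} directly:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(5, 5))
F = M @ M.T + np.eye(5)                        # stand-in for a unit Fisher matrix (SPD)
A, b, c = F[:-1, :-1], F[:-1, -1], F[-1, -1]   # bias row/column stored last here

s = 1.0 / (c - b @ np.linalg.solve(A, b))      # s = (c - b^T A^{-1} b)^{-1}
u = -s * np.linalg.solve(A, b)                 # u = -s A^{-1} b

g_full = rng.normal(size=5)
g, g0 = g_full[:-1], g_full[-1]

# Update assembled from the block formulas of Eqs. (7)/(8) ...
dw = np.linalg.solve(A, g) + (u @ g) * u / s + g0 * u
dw0 = u @ g + s * g0

# ... matches the direct product with F^{-1}.
assert np.allclose(np.concatenate([dw, [dw0]]), np.linalg.solve(F, g_full))
```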
1511.07289
16
The bias shift (mean shift) of unit i is the change of unit i’s mean value due to the weight update. Bias shifts of unit i lead to oscillations and impede learning. See Section 4.4 in LeCun et al. (1998) for demonstrating this effect at the inputs and in LeCun et al. (1991) for explaining this effect using the input covariance matrix. Such bias shifts are mitigated or even prevented by the unit natural gradient. The bias shift correction of the unit natural gradient is the effect on the bias shift due to b which captures the interaction between the bias unit and the incoming units. Without bias shift correction, i.e., b = 0 and s = c^{-1}, the weight updates are Δw = A^{-1} g and Δw_0 = c^{-1} g_0. As only the activations depend on the input, the bias shift can be computed by multiplying the weight update by the mean of the activation vector a. Thus we obtain the bias shift (E_p(z)^T(a), 1)(Δw^T, Δw_0)^T = E_p(z)^T(a) A^{-1} g + c^{-1} g_0. The bias shift strongly depends on the correlation of the incoming units which is captured by A^{-1}.
1511.07289#16
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
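The bias shift described in chunk 1511.07289#16 is simply the inner product of the mean incoming activations (plus the bias unit) with the weight update. A toy NumPy example (all numbers invented) shows how a non-zero activation mean turns a weight update into a mean shift of the unit:

```python
import numpy as np

rng = np.random.default_rng(1)
acts = rng.normal(loc=0.8, scale=0.3, size=(1000, 4))  # incoming activations a with non-zero mean
mean_a = acts.mean(axis=0)                             # E_p(z)(a)

delta_w = rng.normal(scale=0.01, size=4)               # weight update Delta w
delta_w0 = 0.005                                       # bias-weight update Delta w_0

# Bias shift of unit i: (E(a)^T, 1) (Delta w^T, Delta w_0)^T
bias_shift = mean_a @ delta_w + delta_w0
print(bias_shift)    # non-zero whenever mean_a is not centered at zero
```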
1511.07289
17
Next, Theorem 2 states that the bias shift correction by the unit natural gradient can be considered to correct the incoming mean E_p(z)(a) proportional to E_q(z)(a) toward zero. Theorem 2. The bias shift correction by the unit natural gradient is equivalent to an additive correction of the incoming mean by −k E_q(z)(a) and a multiplicative correction of the bias unit by k, where k = 1 + (E_q(z)(a) − E_p(z)(a))^T Var_q(z)^{-1}(a) E_q(z)(a). (11) Proof. Using Δw_0 = −s b^T A^{-1} g + s g_0, the bias shift is: (E_p(z)^T(a), 1)(Δw^T, Δw_0)^T = E_p(z)^T(a) (A^{-1} g − A^{-1} b Δw_0) + Δw_0 (12) = E_p(z)^T(a) A^{-1} g + (1 − E_p(z)^T(a) A^{-1} b) Δw_0 = (E_p(z)(a) − s (1 − E_p(z)^T(a) A^{-1} b) b)^T A^{-1} g + s (1 − E_p(z)^T(a) A^{-1} b) g_0. The mean correction term, indicated by an underbrace in the previous formula, is
1511.07289#17
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
19
The expression Eq. (11) for k follows from Lemma 2 in the appendix, as does the bias unit correction term. In Theorem 2 we can reformulate k = 1 + E_p(z)^{-1}(δ²) Cov_p(z)^T(δ², a) Var_q(z)^{-1}(a) E_q(z)(a). Therefore k increases with the length of E_q(z)(a) for given variances and covariances. Consequently the bias shift correction through the unit natural gradient is governed by the length of E_q(z)(a). The bias shift correction is zero for E_q(z)(a) = 0 since k = 1 does not correct the bias unit multiplicatively. Using Eq. (4), E_q(z)(a) is split into an offset and an information containing term: E_q(z)(a) = E_p(z)(a) + E_p(z)^{-1}(δ²) Cov_p(z)(δ², a). (14) In general, smaller positive E_p(z)(a) lead to smaller positive E_q(z)(a), therefore to smaller corrections. The reason is that in general the largest absolute components of Cov_p(z)(δ², a) are positive, since activated inputs will activate the unit i which in turn will have large impact on the output.
1511.07289#19
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
20
To summarize, the unit natural gradient corrects the bias shift of unit i via the interactions of incoming units with the bias unit to ensure efficient learning. This correction is equivalent to shifting the mean activations of the incoming units toward zero and scaling up the bias unit. To reduce the undesired bias shift effect without the natural gradient, either the (i) activation of incoming units can be centered at zero or (ii) activation functions with negative values can be used. We introduce a new activation function with negative values while keeping the identity for positive arguments where it is not contradicting. # 3 EXPONENTIAL LINEAR UNITS (ELUS) The exponential linear unit (ELU) with 0 < α is f(x) = x if x > 0, and f(x) = α (exp(x) − 1) if x ≤ 0; f'(x) = 1 if x > 0, and f'(x) = f(x) + α if x ≤ 0. (15)
1511.07289#20
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
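A minimal NumPy sketch of the ELU and its derivative as given in Eq. (15) of chunk 1511.07289#20; alpha = 1.0 matches the value used in the paper's experiments:

```python
import numpy as np

def elu(x, alpha=1.0):
    # Identity for x > 0; alpha * (exp(x) - 1) for x <= 0, saturating to -alpha.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def elu_grad(x, alpha=1.0):
    # Derivative: 1 for x > 0; alpha * exp(x), i.e. elu(x) + alpha, for x <= 0.
    return np.where(x > 0, 1.0, alpha * np.exp(x))

x = np.linspace(-5.0, 2.0, 8)
print(elu(x))
print(elu_grad(x))
```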
1511.07289
21
The ELU hyperparameter α controls the value to which an ELU saturates for negative net inputs (see Fig. 1). ELUs diminish the vanishing gradient effect as rectified linear units (ReLUs) and leaky ReLUs (LReLUs) do. The vanishing gradient problem is alleviated because the positive part of these functions is the identity, therefore their derivative is one and not contractive. In contrast, tanh and sigmoid activation functions are contractive almost everywhere.
1511.07289#21
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
22
In contrast to ReLUs, ELUs have negative values which pushes the mean of the activations closer to zero. Mean activations that are closer to zero enable faster learning as they bring the gradient closer to the natural gradient (see Theorem 2 and text thereafter). ELUs saturate to a negative value when the argument gets smaller. Saturation means a small derivative which decreases the variation and the information that is propagated to the next layer. Therefore the representation is both noise-robust and low-complex (Hochreiter & Schmidhuber, 1999). ELUs code the degree of presence of input concepts, while they neither quantify the degree of their absence nor distinguish the causes of their absence. This property of non-informative deactivation states is also present at ReLUs and allowed to detect biclusters corresponding to biological modules in gene expression datasets (Clevert et al., 2015) and to identify toxicophores in toxicity prediction (Unterthiner et al., 2015; Mayr et al., 2015). The enabling features for these interpretations is that activation can be clearly distinguished from deactivation and that only active units carry relevant information and can crosstalk.
1511.07289#22
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
23
# 4 EXPERIMENTS USING ELUS

In this section, we assess the performance of exponential linear units (ELUs) when used for unsupervised and supervised learning of deep autoencoders and deep convolutional networks. ELUs with α = 1.0 are compared to (i) Rectified Linear Units (ReLUs) with activation f(x) = max(0, x), (ii) Leaky ReLUs (LReLUs) with activation f(x) = max(αx, x) (0 < α < 1), and (iii) Shifted ReLUs (SReLUs) with activation f(x) = max(−1, x). Comparisons are done with and without batch normalization. The following benchmark datasets are used: (i) MNIST (gray images in 10 classes, 60k train and 10k test), (ii) CIFAR-10 (color images in 10 classes, 50k train and 10k test), (iii) CIFAR-100 (color images in 100 classes, 50k train and 10k test), and (iv) ImageNet (color images in 1,000 classes, 1.3M train and 100k test).

# 4.1 MNIST

4.1.1 LEARNING BEHAVIOR
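As a minimal sketch (not the authors' code), the four activation functions compared in this section can be written as follows; the default hyperparameters mirror those stated above (α = 1.0 for ELU, α = 0.1 for LReLU):

```python
import numpy as np

def elu(x, alpha=1.0):
    # Identity for positive inputs, saturates to -alpha for large negative inputs.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * (np.exp(np.minimum(x, 0.0)) - 1.0))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.1):
    return np.maximum(alpha * x, x)

def shifted_relu(x):
    # ReLU shifted so that the deactivation state saturates at -1.
    return np.maximum(-1.0, x)
```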
1511.07289#23
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
24
# 4.1 MNIST

4.1.1 LEARNING BEHAVIOR

We first want to verify that ELUs keep the mean activations closer to zero than other units. Fully connected deep neural networks with ELUs (α = 1.0), ReLUs, and LReLUs (α = 0.1) were trained on the MNIST digit classification dataset while each hidden unit's activation was tracked. Each

Figure 2: ELU networks evaluated on MNIST. Panel (a): average unit activation, shown as the median of the average unit activation for the different activation functions. Panel (b): cross entropy loss on the training set (solid line) and validation set (dotted line). Lines are the average over five runs with different random initializations; error bars show the standard deviation. All lines stay flat after epoch 25.
1511.07289#24
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
25
network had eight hidden layers of 128 units each, and was trained for 300 epochs by stochastic gradient descent with learning rate 0.01 and mini-batches of size 64. The weights were initialized as in He et al. (2015). After each epoch we calculated the units' average activations on a fixed subset of the training data. Fig. 2 shows the median over all units over the course of learning. ELU units maintain a smaller median activation throughout the training process. The training error of ELU networks decreases much more rapidly than for the other networks. Section C in the appendix compares the variance of the median activation in ReLU and ELU networks. The median varies much more in ReLU networks. This indicates that ReLU networks continuously try to correct the bias shift introduced by previous weight updates, while this effect is much less prominent in ELU networks.

4.1.2 AUTOENCODER LEARNING

Figure 3: Autoencoder training on MNIST. Panels (a) and (b) show the reconstruction error on the training and test set, respectively, over epochs, using different activation functions and learning rates. The results are medians over several runs with different random initializations.
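A minimal sketch of this setup, assuming PyTorch and the torchvision MNIST loader (an assumption about tooling, not the authors' implementation): eight hidden layers of 128 units, He initialization, SGD with learning rate 0.01 and mini-batches of 64, and the median unit activation tracked on a fixed subset after each epoch.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_net(activation=nn.ELU):
    layers, in_dim = [], 28 * 28
    for _ in range(8):                       # eight hidden layers of 128 units
        linear = nn.Linear(in_dim, 128)
        nn.init.kaiming_normal_(linear.weight, nonlinearity="relu")  # He init
        layers += [linear, activation()]
        in_dim = 128
    layers.append(nn.Linear(in_dim, 10))     # 10-class output
    return nn.Sequential(*layers)

def median_activation(net, x):
    # Median over all hidden units of their average activation on a fixed batch.
    means, h = [], x
    with torch.no_grad():
        for module in net[:-1]:
            h = module(h)
            if not isinstance(module, nn.Linear):
                means.append(h.mean(dim=0))  # average activation per unit
    return torch.cat(means).median().item()

train_set = datasets.MNIST(".", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)
fixed_x = torch.stack([train_set[i][0] for i in range(1000)]).view(1000, -1)

net = make_net(nn.ELU)                       # or nn.ReLU / nn.LeakyReLU
opt = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(300):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(net(x.view(x.size(0), -1)), y)
        loss.backward()
        opt.step()
    print(epoch, median_activation(net, fixed_x))
```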
1511.07289#25
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
26
To evaluate ELU networks in unsupervised settings, we followed Martens (2010) and Desjardins et al. (2015) and trained a deep autoencoder on the MNIST dataset. The encoder part consisted of four fully connected hidden layers with sizes 1000, 500, 250 and 30, respectively. The decoder part was symmetrical to the encoder. For learning we applied stochastic gradient descent with mini-batches of 64 samples for 500 epochs using fixed learning rates (10^-2, 10^-3, 10^-4, 10^-5). Fig. 3 shows that ELUs outperform the competing activation functions in terms of training and test set reconstruction error for all learning rates. As already noted by Desjardins et al. (2015), higher learning rates seem to perform better.

4.2 COMPARISON OF ACTIVATION FUNCTIONS

In this subsection we show that ELUs indeed possess a superior learning behavior compared to other activation functions, as postulated in Section 3. Furthermore we show that ELU networks perform better than ReLU networks with batch normalization. We use CIFAR-100 as the benchmark dataset and a relatively simple convolutional neural network (CNN) architecture to keep the computational complexity of the comparisons reasonable.
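A minimal PyTorch sketch of the autoencoder described above (framework choice, the MSE reconstruction loss, and the linear output layer are assumptions; this is not the authors' code). The 784-1000-500-250-30 encoder and a symmetric decoder are trained with plain SGD at one of the fixed learning rates:

```python
import torch
import torch.nn as nn

def mlp(dims, activation=nn.ELU):
    # Fully connected layers with an activation after every layer except the last.
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), activation()]
    return nn.Sequential(*layers[:-1])

# Encoder 784-1000-500-250-30 and a symmetric decoder back to 784.
autoencoder = mlp([784, 1000, 500, 250, 30, 250, 500, 1000, 784])

opt = torch.optim.SGD(autoencoder.parameters(), lr=1e-2)  # one of the fixed rates
loss_fn = nn.MSELoss()                                    # reconstruction error

def train_step(x_batch):
    # x_batch: (batch, 784) flattened MNIST images.
    opt.zero_grad()
    loss = loss_fn(autoencoder(x_batch), x_batch)
    loss.backward()
    opt.step()
    return loss.item()

print(train_step(torch.rand(64, 784)))  # mini-batches of 64 samples
```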
1511.07289#26
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
27
Figure 4: Comparison of ELUs, ReLUs, LReLUs, and SReLUs on CIFAR-100. Panels (a-c) show the training loss (full run, start, and end of training), panels (d-f) the test classification error (full run, start, and end of training). The ribbon bands show the mean and standard deviation over 10 runs along the curve. ELU networks achieved the lowest test error and training loss.
1511.07289#27
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
28
The CNN for these CIFAR-100 experiments consists of 11 convolutional layers arranged in stacks of ([1 × 192 × 5], [1 × 192 × 1, 1 × 240 × 3], [1 × 240 × 1, 1 × 260 × 2], [1 × 260 × 1, 1 × 280 × 2], [1 × 280 × 1, 1 × 300 × 2], [1 × 300 × 1], [1 × 100 × 1]) layers × units × receptive fields. 2×2 max-pooling with a stride of 2 was applied after each stack. For network regularization we used the following drop-out rates for the last layer of each stack: (0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.0). The L2 weight decay regularization term was set to 0.0005. The following learning rate schedule was applied: 0−35k iterations [0.01], 35k−85k [0.005], 85k−135k [0.0005], 135k−165k [0.00005]. For fair comparisons, we used this learning rate schedule for all networks. During previous experiments, this schedule was optimized for ReLU networks, however as
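As a small illustration of the piecewise-constant schedule above (a sketch, not the authors' training code):

```python
def learning_rate(iteration):
    # Piecewise-constant schedule from the text:
    # 0-35k: 0.01, 35k-85k: 0.005, 85k-135k: 0.0005, 135k-165k: 0.00005
    schedule = [(35_000, 0.01), (85_000, 0.005),
                (135_000, 0.0005), (165_000, 0.00005)]
    for boundary, lr in schedule:
        if iteration < boundary:
            return lr
    return schedule[-1][1]  # keep the final rate beyond 165k iterations

assert learning_rate(0) == 0.01
assert learning_rate(100_000) == 0.0005
```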
1511.07289#28
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
30
Figure 5: Pairwise comparisons of ELUs with ReLUs, SReLUs, and LReLUs with and without batch normalization (BN) on CIFAR-100. Panels (a-c) show the full training curves for ELU vs. ReLU, ELU vs. SReLU, and ELU vs. LReLU, respectively; panels (d-f) show the end of training for the same pairs. Panels are otherwise described as in Fig. 4. ELU networks outperform ReLU networks with batch normalization.

normalization and ZCA whitening. Additionally, the images were padded with four zero pixels at all borders. The model was trained on 32 × 32 random crops with random horizontal flipping. Beyond that, we did not further augment the dataset during training. Each network was run 10 times with different weight initializations. Across networks with different activation functions, the same run number had the same initial weights.
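A minimal NumPy sketch of the train-time augmentation described above: pad each (already preprocessed/whitened) 32×32 image with four zero pixels on every border, take a random 32×32 crop, and flip horizontally with probability 0.5. This is an illustration, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    # image: (height, width, channels) array, already normalized/whitened.
    h, w, _ = image.shape                      # 32 x 32 x 3 for CIFAR
    padded = np.pad(image, ((4, 4), (4, 4), (0, 0)), mode="constant")
    top = rng.integers(0, padded.shape[0] - h + 1)
    left = rng.integers(0, padded.shape[1] - w + 1)
    crop = padded[top:top + h, left:left + w, :]
    if rng.random() < 0.5:                     # random horizontal flip
        crop = crop[:, ::-1, :]
    return crop

sample = rng.standard_normal((32, 32, 3))
assert augment(sample).shape == (32, 32, 3)
```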
1511.07289#30
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
31
Mean test error results of networks with different activation functions are compared in Fig. 4, which also shows the standard deviation. ELUs yield on average a test error of 28.75(±0.24)%, while SReLUs, ReLUs and LReLUs yield 29.35(±0.29)%, 31.56(±0.37)% and 30.59(±0.29)%, respectively. ELUs achieve both lower training loss and lower test error than ReLUs, LReLUs, and SReLUs. Both the ELU training and test performance are significantly better than for the other activation functions (Wilcoxon signed-rank test with p-value < 0.001). Batch normalization improved ReLU and LReLU networks, but did not improve ELU and SReLU networks (see Fig. 5). ELU networks significantly outperform ReLU networks with batch normalization (Wilcoxon signed-rank test with p-value < 0.001).

4.3 CLASSIFICATION PERFORMANCE ON CIFAR-100 AND CIFAR-10
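For readers who want to run this kind of significance test themselves, here is a small SciPy sketch over paired per-run test errors; the numbers are made-up placeholders, not the paper's raw results:

```python
from scipy.stats import wilcoxon

# Hypothetical paired test errors (%) from 10 runs that share initial weights.
elu_errors  = [28.5, 28.9, 28.6, 29.0, 28.8, 28.7, 28.4, 29.1, 28.6, 28.9]
relu_errors = [31.2, 31.8, 31.4, 31.9, 31.5, 31.6, 31.3, 32.0, 31.5, 31.7]

# Paired, non-parametric test of whether the error difference is centered at zero.
statistic, p_value = wilcoxon(elu_errors, relu_errors)
print(f"Wilcoxon statistic={statistic}, p-value={p_value:.4f}")
```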
1511.07289#31
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
32
4.3 CLASSIFICATION PERFORMANCE ON CIFAR-100 AND CIFAR-10

The following experiments highlight the generalization capabilities of ELU networks. The CNN architecture is more sophisticated than in the previous subsection and consists of 18 convolutional layers arranged in stacks of ([1 × 384 × 3], [1 × 384 × 1, 1 × 384 × 2, 2 × 640 × 2], [1 × 640 × 1, 3 × 768 × 2], [1 × 768 × 1, 2 × 896 × 2], [1 × 896 × 3, 2 × 1024 × 2], [1 × 1024 × 1, 1 × 1152 × 2], [1 × 1152 × 1], [1 × 100 × 1]). Initial drop-out rates, max-pooling after each stack, L2 weight decay, momentum term, data preprocessing, padding, and cropping were as in the previous section. The initial learning rate was set to 0.01 and decreased by a factor of 10 after 35k iterations. The mini-batch size was 100. For fine-tuning during the final 50k iterations, we increased the drop-out rate for all
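To make the stack notation concrete, here is a hypothetical PyTorch helper that expands a (number of layers × units × receptive field) specification into convolution layers, with 2×2 max-pooling after each stack as described. It illustrates the notation only and is not the authors' architecture code; the padding choice is an assumption.

```python
import torch
import torch.nn as nn

def build_stacks(stacks, in_channels=3, activation=nn.ELU):
    # Each stack is a list of (num_layers, units, receptive_field) tuples;
    # 2x2 max-pooling with stride 2 follows every stack.
    modules = []
    for stack in stacks:
        for num_layers, units, field in stack:
            for _ in range(num_layers):
                modules += [nn.Conv2d(in_channels, units, field,
                                      padding=field // 2),   # assumed padding
                            activation()]
                in_channels = units
        modules.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*modules)

# First two stacks of the 18-layer network described above:
net = build_stacks([[(1, 384, 3)],
                    [(1, 384, 1), (1, 384, 2), (2, 640, 2)]])
print(net(torch.randn(1, 3, 32, 32)).shape)
```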
1511.07289#32
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
33
layers in a stack to (0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.0), and thereafter increased the drop-out rate by a factor of 1.5 for 40k additional iterations.

Table 1: Comparison of ELU networks and other CNNs on CIFAR-10 and CIFAR-100. Reported is the test error in percent misclassification for ELU networks and recent convolutional architectures such as AlexNet, DSN, NiN, Maxout, All-CNN, Highway Network, and Fractional Max-Pooling. Best results are in bold. ELU networks are second best for CIFAR-10 and best for CIFAR-100.

| Network | CIFAR-10 (test error %) | CIFAR-100 (test error %) |
|---|---|---|
| AlexNet | 18.04 | 45.80 |
| DSN | 7.97 | 34.57 |
| NiN | 8.81 | 35.68 |
| Maxout | 9.38 | 38.57 |
| All-CNN | 7.25 | 33.71 |
| Highway Network | 7.60 | 32.24 |
| Fract. Max-Pooling | 4.50 | 27.62 |
| ELU-Network | 6.55 | 24.28 |
1511.07289#33
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
34
ELU networks are compared to the following recent successful CNN architectures: AlexNet (Krizhevsky et al., 2012), DSN (Lee et al., 2015), NiN (Lin et al., 2013), Maxout (Goodfellow et al., 2013), All-CNN (Springenberg et al., 2014), Highway Network (Srivastava et al., 2015) and Fractional Max-Pooling (Graham, 2014). The test error in percent misclassification is given in Tab. 1. With a test error of 6.55%, ELU networks are only second best on CIFAR-10, but they are still among the top 10 best results reported for CIFAR-10. ELU networks performed best on CIFAR-100 with a test error of 24.28%. This is the best published result on CIFAR-100, without even resorting to multi-view evaluation or model averaging.

IMAGENET CHALLENGE DATASET
1511.07289#34
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
35
IMAGENET CHALLENGE DATASET

Finally, we evaluated ELU networks on the 1000-class ImageNet dataset. It contains about 1.3M training color images, as well as an additional 50k and 100k images for validation and testing, respectively. For this task, we designed a 15-layer CNN, which was arranged in stacks of (1 × 96 × 6, 3 × 512 × 3, 5 × 768 × 3, 3 × 1024 × 3, 2 × 4096 × FC, 1 × 1000 × FC) layers × units × receptive fields or fully-connected (FC). 2×2 max-pooling with a stride of 2 was applied after each stack, and spatial pyramid pooling (SPP) with 3 levels before the first FC layer (He et al., 2015). For network regularization we set the L2 weight decay term to 0.0005 and used 50% drop-out in the two penultimate FC layers. Images were re-sized to 256×256 pixels and the per-pixel mean was subtracted. Training was on 224 × 224 random crops with random horizontal flipping. Beyond that, we did not augment the dataset during training.
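As an illustration of the spatial pyramid pooling step mentioned above, here is a small PyTorch module that pools a feature map at three grid sizes and concatenates the results into a fixed-length vector. The exact pyramid levels (1, 2, 4) are an assumption, since the text only states that three levels were used; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpatialPyramidPooling(nn.Module):
    """Pool a feature map at several grid sizes and concatenate the results,
    giving a fixed-length vector regardless of the input's spatial size."""
    def __init__(self, levels=(1, 2, 4)):    # 3 levels; exact sizes assumed
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool2d(s) for s in levels)

    def forward(self, x):                     # x: (batch, channels, H, W)
        return torch.cat([p(x).flatten(start_dim=1) for p in self.pools], dim=1)

features = torch.randn(8, 1024, 13, 13)       # e.g. output of the last conv stack
spp = SpatialPyramidPooling()
print(spp(features).shape)                    # (8, 1024 * (1 + 4 + 16)) = (8, 21504)
```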
1511.07289#35
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]
1511.07289
36
Fig. 6 shows the learning behavior of ELU vs. ReLU networks. Panel (b) shows that ELUs start reducing the error earlier. The ELU network already reaches a 20% top-5 error after 160k iterations, while the ReLU network needs 200k iterations to reach the same error rate. The single-model performance was evaluated on a single center crop with no further augmentation and yielded a top-5 validation error below 10%. Currently ELU nets are 5% slower on ImageNet than ReLU nets. The difference is small because activation functions generally have only a minor influence on the overall training time (Jia, 2014). In terms of wall clock time, ELUs require 12.15h vs. 11.48h for ReLUs for 10k iterations. We expect that ELU implementations can be improved, e.g. by faster exponential functions (Schraudolph, 1999).

Figure 6: (a) Training loss, (b) Top-5 test error, (c) Top-1 test error.

# 5 CONCLUSION

We have introduced the exponential linear units (ELUs) for faster and more precise learning in deep neural networks. ELUs have negative values, which allows the network to push the mean activations
1511.07289#36
Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network.
http://arxiv.org/pdf/1511.07289
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
cs.LG
Published as a conference paper at ICLR 2016
null
cs.LG
20151123
20160222
[]