doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1607.07086 | 55 | # A HYPERPARAMETERS
For machine translation experiments the variance penalty coefficient λ was set to 10^-4, and the delay coefficients γ_θ and γ_φ were both set to 10^-4. For REINFORCE with the critic we did not use a delayed actor, i.e. γ_θ was set to 1. For the spelling correction task we used the same γ_θ and γ_φ but a different λ = 10^-3. When we used a combined training criterion, the weight of the log-likelihood gradient λ_LL was always 0.1. All initial weights were sampled from a centered uniform distribution with width 0.1.
In some of our experiments we provided the actor states as additional inputs to the critic. Specifically, we did so in our spelling correction experiments and in our WMT 14 machine translation study. All the other results were obtained without this technique.
For decoding with beam search we subtracted the length of a candidate times ρ from the log-likelihood cost. The exact value of ρ was selected on the validation set and was equal to 0.8 for models trained by log-likelihood and REINFORCE, and to 1.0 for models trained by actor-critic and REINFORCE-critic. | 1607.07086#55 | An Actor-Critic Algorithm for Sequence Prediction | We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | http://arxiv.org/pdf/1607.07086 | Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio | cs.LG | null | null | cs.LG | 20160724 | 20170303 |
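A minimal sketch of the length-penalized rescoring described above. The candidate costs, lengths, and the helper name `beam_cost` are made up for illustration; only the rule "subtract ρ times the candidate length from the log-likelihood cost" comes from the text.

```python
def beam_cost(neg_log_likelihood, length, rho):
    # Length penalty: subtract rho * length from the log-likelihood cost,
    # so that longer candidates are penalized less (i.e. favoured).
    return neg_log_likelihood - rho * length

# Toy candidates as (negative log-likelihood, length); rho is chosen on a validation set.
candidates = [(12.3, 9), (11.8, 8), (15.0, 12)]
rho = 0.8
best = min(range(len(candidates)),
           key=lambda i: beam_cost(candidates[i][0], candidates[i][1], rho))
print("selected candidate:", best)
```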
1607.07086 | 56 | For some of the hyperparameters we performed an ablation study. The results are reported in Table 5.
# B DATA
For the IWSLT 2014 data the sizes of the validation and test sets were 6,969 and 6,750, respectively. We limited the number of words in the English and German vocabularies to the 22,822 and 32,009 most frequent words, respectively, and replaced all other words with a special token. The maximum sentence length in our dataset was 50. For WMT14 we used vocabularies of 30,000 words for both English and French, and the maximum sentence length was also 50.
# C GENERATED Q-VALUES
In Table C we provide an example of value predictions that the critic outputs for candidate next words. One can see that the critic has indeed learnt to assign larger values to the appropriate next words. While the critic does not always produce sensible estimates and can often predict a high return for irrelevant rare words, this is greatly reduced by using the variance penalty term from Equation (10). | 1607.07086#56 |
1607.07086 | 58 | Words with the largest predicted value Q̂ at each decoding step (value in parentheses):

| Decoding step | Words with largest Q̂ |
|---|---|
| 1 | and (6.623), there (6.200), but (5.967) |
| 2 | that (6.197), one (5.668), 's (5.467) |
| 3 | that (5.408), one (5.118), i (5.002) |
| 4 | that (4.796), i (4.629), , (4.139) |
| 5 | want (5.008), i (4.160), 't (3.361) |
| 6 | to (4.729), want (3.497), going (3.396) |
| 7 | talk (3.717), you (2.407), to (2.133) |
| 8 | about (1.209), that (0.989), talk (0.924) |
| 9 | about (0.706), . (0.660), right (0.653) |
| 10 | . (0.498), ? (0.291), ” (0.285) |
| 11 | . (0.195), there (0.175), know (0.087) |
| 12 | . (0.168), ” (-0.093), ? (-0.173) |
# D PROOF OF EQUATION (7) | 1607.07086#58 |
1607.07086 | 59 | # D PROOF OF EQUATION (7)
$$
\begin{aligned}
\frac{dV}{d\theta}
&= \frac{d}{d\theta}\,\mathop{\mathbb{E}}_{\hat{Y}\sim p(\hat{Y})} R(\hat{Y})
 = \frac{d}{d\theta} \sum_{\hat{Y}} \Big[ p(\hat{y}_1)\, p(\hat{y}_2 \mid \hat{y}_1) \cdots p(\hat{y}_T \mid \hat{y}_1 \ldots \hat{y}_{T-1}) \Big] R(\hat{Y}) \\
&= \sum_{t=1}^{T} \sum_{\hat{Y}} p(\hat{Y}_{1\ldots t-1})\,
   \frac{dp(\hat{y}_t \mid \hat{Y}_{1\ldots t-1})}{d\theta}\,
   p(\hat{Y}_{t+1\ldots T} \mid \hat{Y}_{1\ldots t})\, R(\hat{Y}) \\
&= \sum_{t=1}^{T}\, \mathop{\mathbb{E}}_{\hat{Y}_{1\ldots t-1}\sim p(\hat{Y}_{1\ldots t-1})}
   \sum_{a\in\mathcal{A}} \frac{dp(a \mid \hat{Y}_{1\ldots t-1})}{d\theta}
   \left[ r_t(a;\hat{Y}_{1\ldots t-1})
   + \sum_{\hat{Y}_{t+1\ldots T}} p(\hat{Y}_{t+1\ldots T} \mid \hat{Y}_{1\ldots t-1}, a)
     \sum_{\tau=t+1}^{T} r_\tau(\hat{y}_\tau;\hat{Y}_{1\ldots\tau-1}) \right] \\
&= \mathop{\mathbb{E}}_{\hat{Y}\sim p(\hat{Y})} \sum_{t=1}^{T}
   \sum_{a\in\mathcal{A}} \frac{dp(a \mid \hat{Y}_{1\ldots t-1})}{d\theta}\, Q(a;\hat{Y}_{1\ldots t-1})
\end{aligned}
$$
| 1607.07086#59 |
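To make the final expression of the derivation above concrete, the sketch below evaluates the per-step summand Σ_{a∈A} (dp(a|Ŷ_{1…t−1})/dθ) Q̂(a; Ŷ_{1…t−1}) for a toy softmax policy over a small vocabulary, and checks it against a finite-difference gradient. The vocabulary size, logits, and critic values are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5                          # toy vocabulary size
theta = rng.normal(size=V)     # logits of the actor distribution p(a | Y_hat_{1..t-1})
q_hat = rng.normal(size=V)     # critic values Q_hat(a; Y_hat_{1..t-1}) for every candidate a

p = np.exp(theta - theta.max())
p /= p.sum()

# Softmax Jacobian: dp_a / dtheta_k = p_a * (delta_{ak} - p_k)
jac = np.diag(p) - np.outer(p, p)

# Per-step gradient: sum_a (dp(a)/dtheta) * Q_hat(a)
grad = jac.T @ q_hat

# Finite-difference check of d/dtheta sum_a p(a) * Q_hat(a), with Q_hat held fixed.
eps = 1e-6
grad_check = np.zeros(V)
for k in range(V):
    t2 = theta.copy(); t2[k] += eps
    p2 = np.exp(t2 - t2.max()); p2 /= p2.sum()
    grad_check[k] = ((p2 * q_hat).sum() - (p * q_hat).sum()) / eps

assert np.allclose(grad, grad_check, atol=1e-4)
print(grad)
```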
1607.06450 | 1 | # Abstract
Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique called batch normalization uses the distribution of the summed input to a neuron over a mini-batch of training cases to compute a mean and variance which are then used to normalize the summed input to that neuron on each training case. This significantly reduces the training time in feed-forward neural networks. However, the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent neural networks. In this paper, we transpose batch normalization into layer normalization by computing the mean and variance used for normalization from all of the summed inputs to the neurons in a layer on a single training case. Like batch normalization, we also give each neuron its own adaptive bias and gain which are applied after the normalization but before the non-linearity. Unlike batch normalization, layer normalization performs exactly the same computation at training and test times. It is also straightforward to apply to recurrent neural networks by computing the normalization statistics separately at each time step. Layer normalization is very effective at stabilizing the hidden state dynamics in recurrent networks. Empirically, we show that layer normalization can substantially reduce the training time compared with previously published techniques.
# 1 Introduction | 1607.06450#1 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{"id": "1605.02688"}, {"id": "1502.04623"}, {"id": "1603.09025"}, {"id": "1602.07868"}, {"id": "1510.01378"}, {"id": "1512.02595"} ] |
1607.06450 | 2 | # 1 Introduction
Deep neural networks trained with some version of Stochastic Gradient Descent have been shown to substantially outperform previous approaches on various supervised learning tasks in computer vision [Krizhevsky et al., 2012] and speech processing [Hinton et al., 2012]. But state-of-the-art deep neural networks often require many days of training. It is possible to speed up the learning by computing gradients for different subsets of the training cases on different machines or splitting the neural network itself over many machines [Dean et al., 2012], but this can require a lot of communication and complex software. It also tends to lead to rapidly diminishing returns as the degree of parallelization increases. An orthogonal approach is to modify the computations performed in the forward pass of the neural net to make learning easier. Recently, batch normalization [Ioffe and Szegedy, 2015] has been proposed to reduce training time by including additional normalization stages in deep neural networks. The normalization standardizes each summed input using its mean and its standard deviation across the training data. Feedforward neural networks trained using batch normalization converge faster even with simple SGD. In addition to training time improvement, the stochasticity from the batch statistics serves as a regularizer during training. | 1607.06450#2 |
1607.06450 | 3 | Despite its simplicity, batch normalization requires running averages of the summed input statistics. In feed-forward networks with fixed depth, it is straightforward to store the statistics separately for each hidden layer. However, the summed inputs to the recurrent neurons in a recurrent neural network (RNN) often vary with the length of the sequence, so applying batch normalization to RNNs appears to require different statistics for different time-steps. Furthermore, batch normalization cannot be applied to online learning tasks or to extremely large distributed models where the minibatches have to be small.
This paper introduces layer normalization, a simple normalization method to improve the training speed for various neural network models. Unlike batch normalization, the proposed method directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. We show that layer normalization works well for RNNs and improves both the training time and the generalization performance of several existing RNN models.
# 2 Background | 1607.06450#3 |
1607.06450 | 4 | # 2 Background
A feed-forward neural network is a non-linear mapping from an input pattern x to an output vector y. Consider the lth hidden layer in a deep feed-forward neural network, and let a^l be the vector representation of the summed inputs to the neurons in that layer. The summed inputs are computed through a linear projection with the weight matrix W^l and the bottom-up inputs h^l given as follows:
$$
a_i^l = {w_i^l}^\top h^l, \qquad h_i^{l+1} = f\big(a_i^l + b_i^l\big) \quad (1)
$$

where f(·) is an element-wise non-linear function, w_i^l is the incoming weights to the ith hidden unit and b_i^l is the scalar bias parameter. The parameters in the neural network are learnt using gradient-based optimization algorithms with the gradients being computed by back-propagation.
One of the challenges of deep learning is that the gradients with respect to the weights in one layer are highly dependent on the outputs of the neurons in the previous layer, especially if these outputs change in a highly correlated way. Batch normalization [Ioffe and Szegedy, 2015] was proposed to reduce such undesirable “covariate shift”. The method normalizes the summed inputs to each hidden unit over the training cases. Specifically, for the ith summed input in the lth layer, the batch normalization method rescales the summed inputs according to their variances under the distribution of the data | 1607.06450#4 |
1607.06450 | 5 |
$$
\bar{a}_i^l = \frac{g_i^l}{\sigma_i^l}\big(a_i^l - \mu_i^l\big), \qquad
\mu_i^l = \mathop{\mathbb{E}}_{x\sim P(x)}\big[a_i^l\big], \qquad
\sigma_i^l = \sqrt{\mathop{\mathbb{E}}_{x\sim P(x)}\big[\big(a_i^l - \mu_i^l\big)^2\big]} \quad (2)
$$
where ā_i^l is the normalized summed input to the ith hidden unit in the lth layer and g_i is a gain parameter scaling the normalized activation before the non-linear activation function. Note the expectation is under the whole training data distribution. It is typically impractical to compute the expectations in Eq. (2) exactly, since it would require forward passes through the whole training dataset with the current set of weights. Instead, μ and σ are estimated using the empirical samples from the current mini-batch. This puts constraints on the size of a mini-batch and it is hard to apply to recurrent neural networks.
# 3 Layer normalization
We now consider the layer normalization method which is designed to overcome the drawbacks of batch normalization.
Notice that changes in the output of one layer will tend to cause highly correlated changes in the summed inputs to the next layer, especially with ReLU units whose outputs can change by a lot. This suggests the “covariate shift” problem can be reduced by fixing the mean and the variance of the summed inputs within each layer. We, thus, compute the layer normalization statistics over all the hidden units in the same layer as follows:
$$
\mu^l = \frac{1}{H}\sum_{i=1}^{H} a_i^l, \qquad
\sigma^l = \sqrt{\frac{1}{H}\sum_{i=1}^{H}\big(a_i^l - \mu^l\big)^2} \quad (3)
$$
| 1607.06450#5 |
1607.06450 | 6 |
$$
\mu^l = \frac{1}{H}\sum_{i=1}^{H} a_i^l, \qquad
\sigma^l = \sqrt{\frac{1}{H}\sum_{i=1}^{H}\big(a_i^l - \mu^l\big)^2} \quad (3)
$$
where H denotes the number of hidden units in a layer. The difference between Eq. (2) and Eq. (3) is that under layer normalization, all the hidden units in a layer share the same normalization terms μ and σ, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of a mini-batch and it can be used in the pure online regime with batch size 1.
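A minimal numpy sketch contrasting Eq. (2) and Eq. (3): batch normalization computes μ and σ per hidden unit across the mini-batch, while layer normalization computes them per training case across the H hidden units, so it also works with batch size 1. The batch size, H, and random inputs below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, H = 32, 64
a = rng.normal(size=(batch, H))        # summed inputs a_i^l for a mini-batch
eps = 1e-5

# Batch normalization (Eq. 2): statistics per hidden unit, over the batch axis.
a_bn = (a - a.mean(axis=0)) / (a.std(axis=0) + eps)

# Layer normalization (Eq. 3): statistics per training case, over the hidden units.
mu_ln = a.mean(axis=1, keepdims=True)
sigma_ln = a.std(axis=1, keepdims=True)
a_ln = (a - mu_ln) / (sigma_ln + eps)

# Layer normalization gives the same answer for a single case in isolation.
single = a[:1]
a_ln_single = (single - single.mean()) / (single.std() + eps)
assert np.allclose(a_ln_single, a_ln[:1])
```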
# 3.1 Layer normalized recurrent neural networks
The recent sequence to sequence models [Sutskever et al., 2014] utilize compact recurrent neural networks to solve sequential prediction problems in natural language processing. It is common among the NLP tasks to have different sentence lengths for different training cases. This is easy to deal with in an RNN because the same weights are used at every time-step. But when we apply batch normalization to an RNN in the obvious way, we need to compute and store separate statistics for each time step in a sequence. This is problematic if a test sequence is longer than any of the training sequences. Layer normalization does not have such a problem because its normalization terms depend only on the summed inputs to a layer at the current time-step. It also has only one set of gain and bias parameters shared over all time-steps. | 1607.06450#6 |
1607.06450 | 7 | In a standard RNN, the summed inputs in the recurrent layer are computed from the current input x^t and the previous vector of hidden states h^{t−1}, as a^t = W_hh h^{t−1} + W_xh x^t. The layer normalized recurrent layer re-centers and re-scales its activations using the extra normalization terms, similar to Eq. (3):
$$
\mathbf{h}^t = f\!\left[\frac{\mathbf{g}}{\sigma^t} \odot \big(\mathbf{a}^t - \mu^t\big) + \mathbf{b}\right], \qquad
\mu^t = \frac{1}{H}\sum_{i=1}^{H} a_i^t, \qquad
\sigma^t = \sqrt{\frac{1}{H}\sum_{i=1}^{H}\big(a_i^t - \mu^t\big)^2} \quad (4)
$$
where W_hh is the recurrent hidden-to-hidden weight matrix and W_xh is the bottom-up input-to-hidden weight matrix. ⊙ is the element-wise multiplication between two vectors. b and g are defined as the bias and gain parameters of the same dimension as h^t.
In a standard RNN, there is a tendency for the average magnitude of the summed inputs to the recurrent units to either grow or shrink at every time-step, leading to exploding or vanishing gradients. In a layer normalized RNN, the normalization terms make it invariant to re-scaling all of the summed inputs to a layer, which results in much more stable hidden-to-hidden dynamics.
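A single time-step of the layer-normalized recurrent layer of Eq. (4), as a numpy sketch. The dimensions, random weights, and the tanh non-linearity are assumptions made for illustration.

```python
import numpy as np

def ln_rnn_step(x_t, h_prev, W_xh, W_hh, g, b, eps=1e-5):
    """One step of a layer-normalized RNN (Eq. 4)."""
    a_t = W_hh @ h_prev + W_xh @ x_t      # summed inputs at time t
    mu = a_t.mean()                        # statistics over the hidden units only
    sigma = a_t.std()
    return np.tanh(g / (sigma + eps) * (a_t - mu) + b)

rng = np.random.default_rng(0)
d_in, d_h = 10, 20
W_xh = rng.normal(scale=0.1, size=(d_h, d_in))
W_hh = rng.normal(scale=0.1, size=(d_h, d_h))
g, b = np.ones(d_h), np.zeros(d_h)         # one set of gain/bias shared over all time-steps

h = np.zeros(d_h)
for t in range(5):                         # unroll a short sequence
    h = ln_rnn_step(rng.normal(size=d_in), h, W_xh, W_hh, g, b)
print(h.shape)
```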
# 4 Related work | 1607.06450#7 |
1607.06450 | 8 | # 4 Related work
Batch normalization has been previously extended to recurrent neural networks [Laurent et al., 2015, Amodei et al., 2015, Cooijmans et al., 2016]. The previous work [Cooijmans et al., 2016] suggests the best performance of recurrent batch normalization is obtained by keeping independent normalization statistics for each time-step. The authors show that initializing the gain parameter in the recurrent batch normalization layer to 0.1 makes a significant difference in the final performance of the model. Our work is also related to weight normalization [Salimans and Kingma, 2016]. In weight normalization, instead of the variance, the L2 norm of the incoming weights is used to normalize the summed inputs to a neuron. Applying either weight normalization or batch normalization using expected statistics is equivalent to having a different parameterization of the original feed-forward neural network. Re-parameterization in the ReLU network was studied in the Path-normalized SGD [Neyshabur et al., 2015]. Our proposed layer normalization method, however, is not a re-parameterization of the original neural network. The layer normalized model, thus, has different invariance properties from the other methods, which we will study in the following section.
# 5 Analysis
In this section, we investigate the invariance properties of different normalization schemes. | 1607.06450#8 |
1607.06450 | 9 | # 5 Analysis
In this section, we investigate the invariance properties of different normalization schemes.
# Invariance under weights and data transformations
The proposed layer normalization is related to batch normalization and weight normalization. Although their normalization scalars are computed differently, these methods can be summarized as normalizing the summed inputs a_i to a neuron through the two scalars μ and σ. They also learn an adaptive bias b and gain g for each neuron after the normalization.
$$
h_i = f\!\left(\frac{g_i}{\sigma_i}\big(a_i - \mu_i\big) + b_i\right) \quad (5)
$$
Note that for layer normalization and batch normalization, μ and σ are computed according to Eq. (2) and Eq. (3). In weight normalization, μ is 0, and σ = ‖w‖₂.
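The sketch below instantiates Eq. (5) for one layer under the three schemes: batch normalization uses per-unit statistics over the mini-batch, layer normalization uses per-case statistics over the layer, and weight normalization uses μ = 0 and σ_i = ‖w_i‖₂. The toy sizes, random data, and tanh transfer function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, d, H = 16, 8, 4
X = rng.normal(size=(n_cases, d))
W = rng.normal(size=(H, d))
b, g = np.zeros(H), np.ones(H)
f, eps = np.tanh, 1e-5

A = X @ W.T                                   # summed inputs a_i for every case and unit

# Batch normalization: per-unit statistics over the training cases.
h_bn = f(g / (A.std(axis=0) + eps) * (A - A.mean(axis=0)) + b)

# Layer normalization: per-case statistics over the H units of the layer.
h_ln = f(g / (A.std(axis=1, keepdims=True) + eps)
         * (A - A.mean(axis=1, keepdims=True)) + b)

# Weight normalization: mu = 0 and sigma_i = ||w_i||_2.
h_wn = f(g / np.linalg.norm(W, axis=1) * A + b)

print(h_bn.shape, h_ln.shape, h_wn.shape)
```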
|  | Weight matrix re-scaling | Weight matrix re-centering | Weight vector re-scaling | Dataset re-scaling | Dataset re-centering | Single training case re-scaling |
|---|---|---|---|---|---|---|
| Batch norm | Invariant | No | Invariant | Invariant | Invariant | No |
| Weight norm | Invariant | No | Invariant | Invariant | No | No |
| Layer norm | Invariant | Invariant | No | Invariant | No | Invariant |

Table 1: Invariance properties under the normalization methods.
Table 1 highlights the following invariance results for three normalization methods. | 1607.06450#9 |
1607.06450 | 10 | Table 1: Invariance properties under the normalization methods.
Table 1 highlights the following invariance results for three normalization methods.
Weight re-scaling and re-centering: First, observe that under batch normalization and weight normalization, any re-scaling of the incoming weights w_i of a single neuron has no effect on the normalized summed inputs to that neuron. To be precise, under batch and weight normalization, if the weight vector is scaled by δ, the two scalars μ and σ will also be scaled by δ. The normalized summed inputs stay the same before and after scaling. So batch and weight normalization are invariant to the re-scaling of the weights. Layer normalization, on the other hand, is not invariant to the individual scaling of single weight vectors. Instead, layer normalization is invariant to scaling of the entire weight matrix and invariant to a shift to all of the incoming weights in the weight matrix. Let there be two sets of model parameters θ, θ′ whose weight matrices W and W′ differ by a scaling factor δ and all of the incoming weights in W′ are also shifted by a constant vector γ, that is W′ = δW + 1γ^⊤. Under layer normalization, the two models effectively compute the same output: | 1607.06450#10 |
1607.06450 | 11 |
$$
\mathbf{h}' = f\!\left(\frac{\mathbf{g}}{\sigma'}\big(W'\mathbf{x} - \mu'\big) + \mathbf{b}\right)
= f\!\left(\frac{\mathbf{g}}{\delta\sigma}\big((\delta W + \mathbf{1}\gamma^\top)\mathbf{x} - \delta\mu - \gamma^\top\mathbf{x}\big) + \mathbf{b}\right)
= f\!\left(\frac{\mathbf{g}}{\sigma}\big(W\mathbf{x} - \mu\big) + \mathbf{b}\right) = \mathbf{h}. \quad (6)
$$
Notice that if normalization is only applied to the input before the weights, the model will not be invariant to re-scaling and re-centering of the weights.
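A quick numerical check of Eq. (6): under layer normalization, scaling the whole weight matrix by δ and shifting every row by the same vector γ leaves the output unchanged. The toy sizes and the particular δ and γ below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, H = 6, 5
x = rng.normal(size=d)
W = rng.normal(size=(H, d))
b, g = rng.normal(size=H), np.abs(rng.normal(size=H))

def layer_norm_out(W, x, g, b, f=np.tanh, eps=1e-8):
    a = W @ x
    return f(g / (a.std() + eps) * (a - a.mean()) + b)

delta = 3.7
gamma = rng.normal(size=d)
W_prime = delta * W + np.outer(np.ones(H), gamma)   # W' = delta * W + 1 gamma^T

h = layer_norm_out(W, x, g, b)
h_prime = layer_norm_out(W_prime, x, g, b)
assert np.allclose(h, h_prime, atol=1e-6)           # Eq. (6): h' = h
```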
Data re-scaling and re-centering: We can show that all the normalization methods are invariant to re-scaling the dataset by verifying that the summed inputs of the neurons stay constant under the changes. Furthermore, layer normalization is invariant to re-scaling of individual training cases, because the normalization scalars μ and σ in Eq. (3) only depend on the current input data. Let x′ be a new data point obtained by re-scaling x by δ. Then we have,
$$
h_i' = f\!\left(\frac{g_i}{\sigma'}\big(w_i^\top \mathbf{x}' - \mu'\big) + b_i\right)
= f\!\left(\frac{g_i}{\delta\sigma}\big(\delta\, w_i^\top \mathbf{x} - \delta\mu\big) + b_i\right) = h_i. \quad (7)
$$
| 1607.06450#11 |
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 12 |
$$
h_i' = f\!\left(\frac{g_i}{\sigma'}\big(w_i^\top \mathbf{x}' - \mu'\big) + b_i\right)
= f\!\left(\frac{g_i}{\delta\sigma}\big(\delta\, w_i^\top \mathbf{x} - \delta\mu\big) + b_i\right) = h_i. \quad (7)
$$
It is easy to see that re-scaling individual data points does not change the model's prediction under layer normalization. Similar to the re-centering of the weight matrix in layer normalization, we can also show that batch normalization is invariant to re-centering of the dataset.
# 5.2 Geometry of parameter space during learning
We have investigated the invariance of the model's prediction under re-centering and re-scaling of the parameters. Learning, however, can behave very differently under different parameterizations, even though the models express the same underlying function. In this section, we analyze learning behavior through the geometry and the manifold of the parameter space. We show that the normalization scalar σ can implicitly reduce the learning rate and make learning more stable.
# 5.2.1 Riemannian metric
The learnable parameters in a statistical model form a smooth manifold that consists of all possible input-output relations of the model. For models whose output is a probability distribution, a natural way to measure the separation of two points on this manifold is the Kullback-Leibler divergence between their model output distributions. Under the KL divergence metric, the parameter space is a Riemannian manifold. | 1607.06450#12 |
1607.06450 | 13 | The curvature of a Riemannian manifold is entirely captured by its Riemannian metric, whose quadratic form is denoted as ds². That is the infinitesimal distance in the tangent space at a point in the parameter space. Intuitively, it measures the changes in the model output from the parameter space along a tangent direction. The Riemannian metric under KL was previously studied [Amari, 1998] and was shown to be well approximated under second order Taylor expansion using the Fisher information matrix:
$$
ds^2 = D_{\mathrm{KL}}\big[P(y \mid x;\,\theta)\,\big\|\,P(y \mid x;\,\theta + \delta)\big] \approx \frac{1}{2}\,\delta^\top F(\theta)\,\delta, \quad (8)
$$
$$
F(\theta) = \mathop{\mathbb{E}}_{x\sim P(x),\, y\sim P(y\mid x)}\!\left[\frac{\partial \log P(y \mid x;\theta)}{\partial \theta}\,\frac{\partial \log P(y \mid x;\theta)}{\partial \theta}^{\!\top}\right], \quad (9)
$$
where δ is a small change to the parameters. The Riemannian metric above presents a geometric view of parameter spaces. The following analysis of the Riemannian metric provides some insight into how normalization methods could help in training neural networks.
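A Monte Carlo sketch of Eq. (9) for a small logistic-regression model: the Fisher information matrix is estimated as the expected outer product of the score ∂ log P(y|x;θ)/∂θ. The input distribution, model size, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
theta = rng.normal(size=d)     # logistic regression: P(y=1 | x) = sigmoid(theta . x)

def score(x, y, theta):
    # d/dtheta log P(y | x; theta) for a Bernoulli output with a logit link.
    p = 1.0 / (1.0 + np.exp(-x @ theta))
    return (y - p) * x

n = 20000
F = np.zeros((d, d))
for _ in range(n):
    x = rng.normal(size=d)                 # x ~ P(x)
    p = 1.0 / (1.0 + np.exp(-x @ theta))
    y = float(rng.random() < p)            # y ~ P(y | x; theta)
    s = score(x, y, theta)
    F += np.outer(s, s)                    # Eq. (9): E[ score score^T ]
F /= n
print(F)
```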
# 5.2.2 The geometry of normalized generalized linear models | 1607.06450#13 |
1607.06450 | 14 | # 5.2.2 The geometry of normalized generalized linear models
We focus our geometric analysis on the generalized linear model. The results from the following analysis can be easily applied to understand deep neural networks with a block-diagonal approximation to the Fisher information matrix, where each block corresponds to the parameters for a single neuron.
A generalized linear model (GLM) can be regarded as parameterizing an output distribution from the exponential family using a weight vector w and bias scalar b. To be consistent with the previous sections, the log likelihood of the GLM can be written using the summed inputs a as the following:
$$
\log P(y \mid x;\, w, b) = \frac{(a + b)\,y - \eta(a + b)}{\phi} + c(y, \phi), \quad (10)
$$
$$
\mathbb{E}[y \mid x] = f(a + b) = f(w^\top x + b), \qquad \mathrm{Var}[y \mid x] = \phi\, f'(a + b), \quad (11)
$$
| 1607.06450#14 |
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 15 |
$$
\mathbb{E}[y \mid x] = f(a + b) = f(w^\top x + b), \qquad \mathrm{Var}[y \mid x] = \phi\, f'(a + b), \quad (11)
$$
where f(·) is the transfer function that is the analog of the non-linearity in neural networks, f′(·) is the derivative of the transfer function, η(·) is a real-valued function and c(·) is the log partition function. φ is a constant that scales the output variance. Assume an H-dimensional output vector y = [y_1, y_2, …, y_H] is modeled using H independent GLMs and log P(y | x; W, b) = Σ_{i=1}^H log P(y_i | x; w_i, b_i). Let W be the weight matrix whose rows are the weight vectors of the individual GLMs, b denote the bias vector of length H and vec(·) denote the Kronecker vector operator. The Fisher information matrix for the multi-dimensional GLM with respect to its parameters θ = [w_1^⊤, b_1, …, w_H^⊤, b_H]^⊤ = vec([W, b]^⊤) is simply the expected Kronecker product of the data features and the output covariance matrix:
$$
F(\theta) = \mathop{\mathbb{E}}_{x\sim P(x)}\!\left[\frac{\mathrm{Cov}[y \mid x]}{\phi^2} \otimes
\begin{bmatrix} x x^\top & x \\ x^\top & 1 \end{bmatrix}\right]. \quad (12)
$$
| 1607.06450#15 |
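Reading Eq. (12) as reconstructed above, the Fisher information of the multi-dimensional GLM can be estimated by Monte Carlo as an expected Kronecker product. The choice of H independent Bernoulli outputs with a logistic link, the dimensions, and the sample count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, H, phi = 3, 2, 1.0
W = rng.normal(size=(H, d))
b = rng.normal(size=H)

def fisher_glm(n_samples=2000):
    dim = H * (d + 1)                               # theta = vec([W, b]^T)
    F = np.zeros((dim, dim))
    for _ in range(n_samples):
        x = rng.normal(size=d)                      # x ~ P(x)
        p = 1.0 / (1.0 + np.exp(-(W @ x + b)))      # H independent Bernoulli outputs
        cov = np.diag(p * (1.0 - p))                # Cov[y | x]
        feat = np.concatenate([x, [1.0]])           # [x; 1]
        block = np.outer(feat, feat)                # [[x x^T, x], [x^T, 1]]
        F += np.kron(cov / phi**2, block)           # Eq. (12)
    return F / n_samples

print(fisher_glm().shape)                           # (H*(d+1), H*(d+1)) = (8, 8)
```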
1607.06450 | 16 |
$$
F(\theta) = \mathop{\mathbb{E}}_{x\sim P(x)}\!\left[\frac{\mathrm{Cov}[y \mid x]}{\phi^2} \otimes
\begin{bmatrix} x x^\top & x \\ x^\top & 1 \end{bmatrix}\right]. \quad (12)
$$
We obtain normalized GLMs by applying the normalization methods to the summed inputs a in the original model through μ and σ. Without loss of generality, we denote F̄ as the Fisher information matrix under the normalized multi-dimensional GLM with the additional gain parameters θ = vec([W, b, g]^⊤):
$$
\bar{F}(\theta) =
\begin{bmatrix}
\bar{F}_{11} & \cdots & \bar{F}_{1H} \\
\vdots & \ddots & \vdots \\
\bar{F}_{H1} & \cdots & \bar{F}_{HH}
\end{bmatrix}, \qquad
\bar{F}_{ij} = \mathop{\mathbb{E}}_{x\sim P(x)}\!\left[
\frac{\mathrm{Cov}[y_i, y_j \mid x]}{\phi^2}
\begin{bmatrix}
\frac{g_i g_j}{\sigma_i \sigma_j}\,\chi_i \chi_j^\top & \frac{g_i}{\sigma_i}\,\chi_i & \frac{g_i (a_j - \mu_j)}{\sigma_i \sigma_j}\,\chi_i \\
\frac{g_j}{\sigma_j}\,\chi_j^\top & 1 & \frac{a_j - \mu_j}{\sigma_j} \\
\frac{g_j (a_i - \mu_i)}{\sigma_i \sigma_j}\,\chi_j^\top & \frac{a_i - \mu_i}{\sigma_i} & \frac{(a_i - \mu_i)(a_j - \mu_j)}{\sigma_i \sigma_j}
\end{bmatrix}
\right] \quad (13)
$$

$$
\chi_i = x - \frac{\partial \mu_i}{\partial w_i} - \frac{a_i - \mu_i}{\sigma_i}\,\frac{\partial \sigma_i}{\partial w_i} \quad (14)
$$
| 1607.06450#16 |
Implicit learning rate reduction through the growth of the weight vector: Notice that, compared to the standard GLM, the block ¯F_ij along the weight vector w_i direction is scaled by the gain parameters and the normalization scalar σ_i. If the norm of the weight vector w_i grows twice as large, even though the model's output remains the same, the Fisher information matrix will be different. The curvature along the w_i direction will change by a factor of 1/2 because σ_i will also be twice as large. As a result, for the same parameter update in the normalized model, the norm of the weight vector effectively controls the learning rate for the weight vector. During learning, it is harder to change the orientation of a weight vector with a large norm.
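The scaling argument above can be checked numerically. The following minimal NumPy sketch (ours, not from the paper; the toy sizes, the random data, and the scalar read-out are assumptions for illustration) doubles the norm of the weight matrix of a layer-normalized projection and verifies that the output is unchanged while the directional derivative along a fixed update direction is roughly halved, which is exactly the implicit learning-rate reduction described above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)                 # one input case (toy dimensions)
W = rng.normal(size=(8, 16))            # incoming weights of a normalized layer
D = rng.normal(size=(8, 16))            # a fixed candidate update direction
v = rng.normal(size=8)                  # fixed read-out to obtain a scalar output
eps, h = 1e-8, 1e-5

def f(W):
    a = W @ x                                    # summed inputs
    a_hat = (a - a.mean()) / (a.std() + eps)     # layer-normalized summed inputs
    return float(v @ a_hat)                      # simple scalar read-out

d_at_W  = (f(W + h * D) - f(W)) / h              # directional derivative at W
d_at_2W = (f(2 * W + h * D) - f(2 * W)) / h      # same direction, ||W|| doubled

print(abs(f(W) - f(2 * W)) < 1e-6)   # True: doubling the weights leaves the output unchanged
print(d_at_2W / d_at_W)              # ~0.5: the same update moves the larger-norm weights half as much
```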
Figure 1: Recall@K curves (R@1, R@5, R@10) using order-embeddings with and without layer normalization.
MSCOCO Caption Retrieval Image Retrieval Model Sym [Vendrov et al., 2016] OE [Vendrov et al., 2016] OE (ours) OE + LN R@1 R@5 R@10 Mean r R@1 R@5 R@10 Mean r 45.4 46.7 46.6 48.5 88.7 88.9 89.1 89.8 5.8 5.7 5.2 5.1 36.3 37.9 37.8 38.9 85.8 85.9 85.7 86.3 79.3 80.6 73.6 74.3 9.0 8.1 7.9 7.6
Table 2: Average results across 5 test splits for caption and image retrieval. R@K is Recall@K (high is good). Mean r is the mean rank (low is good). Sym corresponds to the symmetric baseline while OE indicates order-embeddings.
The normalization methods, therefore, have an implicit "early stopping" effect on the weight vectors and help to stabilize learning towards convergence.
Learning the magnitude of incoming weights: In normalized models, the magnitude of the incoming weights is explicitly parameterized by the gain parameters. We compare how the model output changes between updating the gain parameters in the normalized GLM and updating the magnitude of the equivalent weights under the original parameterization during learning. The direction along the gain parameters in ¯F captures the geometry for the magnitude of the incoming weights. We show that the Riemannian metric along the magnitude of the incoming weights for the standard GLM is scaled by the norm of its input, whereas learning the gain parameters for the batch normalized and layer normalized models depends only on the magnitude of the prediction error. Learning the magnitude of incoming weights in the normalized model is, therefore, more robust to the scaling of the input and its parameters than in the standard model. See the Appendix for detailed derivations.
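To illustrate the robustness claim, here is a small NumPy sketch (ours; the single-layer setup, the squared-error loss, and the toy dimensions are assumptions) comparing the gradient with respect to the gain parameters of a layer-normalized unit against the weight gradient of a plain GLM when the input is rescaled.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=16)                    # one input case
t = rng.normal(size=4)                     # regression target (assumed squared-error loss)
W = rng.normal(size=(4, 16))
g = np.ones(4)                             # gain parameters of the normalized layer
eps = 1e-8

def grad_magnitudes(x):
    a = W @ x
    a_hat = (a - a.mean()) / (a.std() + eps)       # layer-normalized summed inputs
    err_ln = g * a_hat - t
    grad_gain = err_ln * a_hat                     # dL/dg for L = 0.5 * ||g * a_hat - t||^2
    err_plain = a - t                              # plain GLM: y = W x, same loss
    grad_W = np.outer(err_plain, x)                # dL/dW for the un-normalized model
    return np.abs(grad_gain).sum(), np.linalg.norm(grad_W)

print(grad_magnitudes(x))
print(grad_magnitudes(100.0 * x))   # gain gradient is unchanged; the plain weight gradient explodes
```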
# 6 Experimental results
We perform experiments with layer normalization on 6 tasks, with a focus on recurrent neural networks: image-sentence ranking, question-answering, contextual language modelling, generative modelling, handwriting sequence generation and MNIST classification. Unless otherwise noted, the default initialization of layer normalization is to set the adaptive gains to 1 and the biases to 0 in the experiments.
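For reference, a minimal sketch of the layer normalization computation with this default initialization (gains set to 1, biases set to 0) is shown below; the class name, shapes, and epsilon are our own choices, not the authors' code.

```python
import numpy as np

class LayerNorm:
    """Per-training-case normalization over the summed inputs of one layer."""
    def __init__(self, num_units, eps=1e-5):
        self.gain = np.ones(num_units)   # default initialization: gains = 1
        self.bias = np.zeros(num_units)  # default initialization: biases = 0
        self.eps = eps

    def __call__(self, a):
        # a: (batch, num_units) summed inputs; statistics are taken per case,
        # over the units of the layer, so the result is batch-size independent.
        mu = a.mean(axis=-1, keepdims=True)
        sigma = a.std(axis=-1, keepdims=True)
        return self.gain * (a - mu) / (sigma + self.eps) + self.bias

ln = LayerNorm(8)
print(ln(np.random.randn(2, 8)).shape)   # (2, 8)
```

Because the statistics are computed per case, the same call works unchanged for any batch size, including 1.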
# 6.1 Order embeddings of images and language
In this experiment, we apply layer normalization to the recently proposed order-embeddings model of Vendrov et al. [2016] for learning a joint embedding space of images and sentences. We follow the same experimental protocol as Vendrov et al. [2016] and modify their publicly available code1 to incorporate layer normalization, using Theano [Team et al., 2016]. Images and sentences from the Microsoft COCO dataset [Lin et al., 2014] are embedded into a common vector space, where a GRU [Cho et al., 2014] is used to encode sentences and the outputs of a pre-trained VGG ConvNet [Simonyan and Zisserman, 2015] (10-crop) are used to encode images. The order-embedding model represents images and sentences as a 2-level partial ordering and replaces the cosine similarity scoring function used in Kiros et al. [2014] with an asymmetric one.
1 https://github.com/ivendrov/order-embedding
[Figure 2 plot: validation error rate vs. training steps (thousands) for the attentive reader with LSTM, BN-LSTM, BN-everywhere, and LN-LSTM.]
1607.06450 | 21 | Figure 2: Validation curves for the attentive reader model. BN results are taken from [Cooijmans et al., 2016].
We trained two models: the baseline order-embedding model as well as the same model with layer normalization applied to the GRU. After every 300 iterations, we compute Recall@K (R@K) values on a held out validation set and save the model whenever R@K improves. The best performing models are then evaluated on 5 separate test sets, each containing 1000 images and 5000 captions, for which the mean results are reported. Both models use Adam [Kingma and Ba, 2014] with the same initial hyperparameters and both models are trained using the same architectural choices as used in Vendrov et al. [2016]. We refer the reader to the appendix for a description of how layer normalization is applied to GRU.
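A minimal sketch of one way to place layer normalization inside a GRU step is shown below (ours; the exact placement used in the paper's appendix may differ, and the gate ordering, parameter names, and toy dimensions are assumptions).

```python
import numpy as np

def layer_norm(a, g, b, eps=1e-5):
    mu = a.mean(axis=-1, keepdims=True)
    sigma = a.std(axis=-1, keepdims=True)
    return g * (a - mu) / (sigma + eps) + b

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ln_gru_step(x, h, Wx, Wh, p):
    """One GRU step with layer normalization on the summed inputs.

    Wx: (d_in, 3H), Wh: (H, 3H); gate order [reset, update, candidate].
    """
    H = h.shape[-1]
    zx = layer_norm(x @ Wx, p['gx'], p['bx'])   # normalize input-to-hidden terms
    zh = layer_norm(h @ Wh, p['gh'], p['bh'])   # normalize hidden-to-hidden terms
    r = sigmoid(zx[:, :H] + zh[:, :H])
    u = sigmoid(zx[:, H:2 * H] + zh[:, H:2 * H])
    c = np.tanh(zx[:, 2 * H:] + r * zh[:, 2 * H:])
    return (1.0 - u) * h + u * c

rng = np.random.default_rng(0)
d_in, H = 10, 6
Wx = 0.1 * rng.normal(size=(d_in, 3 * H))
Wh = 0.1 * rng.normal(size=(H, 3 * H))
p = {'gx': np.ones(3 * H), 'bx': np.zeros(3 * H),
     'gh': np.ones(3 * H), 'bh': np.zeros(3 * H)}

h = np.zeros((1, H))
for x_t in rng.normal(size=(5, 1, d_in)):    # a length-5 input sequence, batch of 1
    h = ln_gru_step(x_t, h, Wx, Wh, p)
print(h.shape)                               # (1, 6)
```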
1607.06450 | 22 | Figure 1 illustrates the validation curves of the models, with and without layer normalization. We plot R@1, R@5 and R@10 for the image retrieval task. We observe that layer normalization offers a per-iteration speedup across all metrics and converges to its best validation model in 60% of the time it takes the baseline model to do so. In Table 2, the test set results are reported from which we observe that layer normalization also results in improved generalization over the original model. The results we report are state-of-the-art for RNN embedding models, with only the structure-preserving model of Wang et al. [2016] reporting better results on this task. However, they evaluate under different conditions (1 test set instead of the mean over 5) and are thus not directly comparable.
# 6.2 Teaching machines to read and comprehend
In order to compare layer normalization to the recently proposed recurrent batch normalization [Cooijmans et al., 2016], we train a unidirectional attentive reader model on the CNN corpus, both introduced by Hermann et al. [2015]. This is a question-answering task where a query description about a passage must be answered by filling in a blank. The data is anonymized such that entities are given randomized tokens to prevent degenerate solutions, which are consistently permuted during training and evaluation. We follow the same experimental protocol as Cooijmans et al. [2016] and modify their public code2 to incorporate layer normalization, using Theano [Team et al., 2016]. We obtained the pre-processed dataset used by Cooijmans et al. [2016], which differs from the original experiments of Hermann et al. [2015] in that each passage is limited to 4 sentences. In Cooijmans et al. [2016], two variants of recurrent batch normalization are used: one where BN is only applied to the LSTM while the other applies BN everywhere throughout the model. In our experiment, we only apply layer normalization within the LSTM.
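As a rough illustration of applying layer normalization only within the LSTM, here is a minimal NumPy sketch (ours; the precise placement in the attentive reader implementation may differ, and the gate ordering, normalization of the cell state, and dimensions are assumptions).

```python
import numpy as np

def layer_norm(a, g, b, eps=1e-5):
    mu = a.mean(axis=-1, keepdims=True)
    sigma = a.std(axis=-1, keepdims=True)
    return g * (a - mu) / (sigma + eps) + b

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ln_lstm_step(x, h, c, Wx, Wh, p):
    """One LSTM step with layer normalization inside the recurrence only.

    Wx: (d_in, 4H), Wh: (H, 4H); gate order [input, forget, output, candidate].
    """
    H = h.shape[-1]
    z = layer_norm(x @ Wx + h @ Wh, p['g_z'], p['b_z'])        # normalized summed inputs
    i = sigmoid(z[:, :H])
    f = sigmoid(z[:, H:2 * H])
    o = sigmoid(z[:, 2 * H:3 * H])
    u = np.tanh(z[:, 3 * H:])
    c_new = f * c + i * u
    h_new = o * np.tanh(layer_norm(c_new, p['g_c'], p['b_c'])) # normalize the cell state too
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, H = 12, 8
Wx = 0.1 * rng.normal(size=(d_in, 4 * H))
Wh = 0.1 * rng.normal(size=(H, 4 * H))
p = {'g_z': np.ones(4 * H), 'b_z': np.zeros(4 * H),
     'g_c': np.ones(H), 'b_c': np.zeros(H)}

h, c = np.zeros((1, H)), np.zeros((1, H))
for x_t in rng.normal(size=(4, 1, d_in)):
    h, c = ln_lstm_step(x_t, h, c, Wx, Wh, p)
print(h.shape, c.shape)   # (1, 8) (1, 8)
```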
1607.06450 | 24 | The results of this experiment are shown in Figure 2. We observe that layer normalization not only trains faster but converges to a better validation result over both the baseline and BN variants. In Cooijmans et al. [2016], it is argued that the scale parameter in BN must be carefully chosen and is set to 0.1 in their experiments. We experimented with layer normalization for both 1.0 and 0.1 scale initialization and found that the former model performed signiï¬cantly better. This demonstrates that layer normalization is not sensitive to the initial scale in the same way that recurrent BN is. 3
# 6.3 Skip-thought vectors
Skip-thoughts [Kiros et al., 2015] is a generalization of the skip-gram model [Mikolov et al., 2013] for learning unsupervised distributed sentence representations. Given contiguous text, a sentence is encoded with an RNN encoder and the model is trained to reconstruct the surrounding sentences.
2 https://github.com/cooijmanstim/Attentive_reader/tree/bn
3 We only produce results on the validation set, as in the case of Cooijmans et al. [2016].
[Figure 3 panels: (a) SICK(r), (b) SICK(MSE), (c) MR, (d) CR, (e) SUBJ, (f) MPQA.]
1607.06450 | 25 | Figure 3: Performance of skip-thought vectors with and without layer normalization on downstream tasks as a function of training iterations. The original lines are the reported results in [Kiros et al., 2015]. Plots with error use 10-fold cross validation. Best seen in color.
Method | SICK(r) | SICK(ρ) | SICK(MSE) | MR | CR | SUBJ | MPQA
---|---|---|---|---|---|---|---
Original [Kiros et al., 2015] | 0.848 | 0.778 | 0.287 | 75.5 | 79.3 | 92.1 | 86.9
Ours | 0.842 | 0.767 | 0.298 | 77.3 | 81.8 | 92.6 | 87.9
Ours + LN | 0.854 | 0.785 | 0.277 | 79.5 | 82.6 | 93.4 | 89.0
Ours + LN † | 0.858 | 0.788 | 0.270 | 79.4 | 83.1 | 93.7 | 89.3
Table 3: Skip-thoughts results. The first two evaluation columns indicate Pearson and Spearman correlation, the third is mean squared error and the remaining indicate classification accuracy. Higher is better for all evaluations except MSE. Our models were trained for 1M iterations, with the exception of (†), which was trained for 1 month (approximately 1.7M iterations).
In this experiment we determine to what effect layer normalization can speed up training. Using the publicly available code of Kiros et al. [2015]4, we train two models on the BookCorpus dataset [Zhu et al., 2015]: one with and one without layer normalization. These experiments are performed with Theano [Team et al., 2016]. We adhere to the experimental setup used in Kiros et al. [2015], training a 2400-dimensional sentence encoder with the same hyperparameters. Given the size of the states used, it is conceivable layer normalization would produce slower per-iteration updates than without. However, we found that, provided CNMeM5 is used, there was no significant difference between the two models. We checkpoint both models after every 50,000 iterations and evaluate their performance on five tasks: semantic-relatedness (SICK) [Marelli et al., 2014], movie review sentiment (MR) [Pang and Lee, 2005], customer product reviews (CR) [Hu and Liu, 2004], subjectivity/objectivity classification (SUBJ) [Pang and Lee, 2004] and opinion polarity (MPQA) [Wiebe et al., 2005]. We plot the performance of both models for each checkpoint on all tasks to determine whether the performance rate can be improved with LN.
1607.06450 | 28 | The experimental results are illustrated in Figure 3. We observe that applying layer normalization results both in speedup over the baseline as well as better ï¬nal results after 1M iterations are per- formed as shown in Table 3. We also let the model with layer normalization train for a total of a month, resulting in further performance gains across all but one task. We note that the performance
4 https://github.com/ryankiros/skip-thoughts
5 https://github.com/NVIDIA/cnmem
[Figure 5 plot: training and test negative log likelihood (baseline vs. layer normalization) as a function of updates.]
Figure 5: Handwriting sequence generation model negative log likelihood with and without layer normalization. The models are trained with mini-batch size of 8 and sequence length of 500.
differences between the original reported results and ours are likely due to the fact that the publicly available code does not condition at each timestep of the decoder, where the original model does.
# 6.4 Modeling binarized MNIST using DRAW
[Figure 4 plot: test variational bound vs. epoch for the baseline and layer-normalized DRAW models.]
We also experimented with generative modelling on the MNIST dataset. The Deep Recurrent Attention Writer (DRAW) [Gregor et al., 2015] has previously achieved state-of-the-art performance on modelling the distribution of MNIST digits. The model uses a differentiable attention mechanism and a recurrent neural network to sequentially generate pieces of an image. We evaluate the effect of layer normalization on a DRAW model using 64 glimpses and 256 LSTM hidden units. The model is trained with the default settings of the Adam [Kingma and Ba, 2014] optimizer and a minibatch size of 128. Previous publications on binarized MNIST have used various training protocols to generate their datasets. In this experiment, we used the fixed binarization from Larochelle and Murray [2011]. The dataset has been split into 50,000 training, 10,000 validation and 10,000 test images.
Figure 4: DRAW model test negative log likelihood with and without layer normalization.
Figure 4 shows the test variational bound for the first 100 epochs. It highlights the speedup benefit of applying layer normalization: the layer-normalized DRAW converges almost twice as fast as the baseline model. After 200 epochs, the baseline model converges to a variational log likelihood of 82.36 nats on the test data and the layer normalization model obtains 82.09 nats.
# 6.5 Handwriting sequence generation
The previous experiments mostly examine RNNs on NLP tasks whose lengths are in the range of 10 to 40. To show the effectiveness of layer normalization on longer sequences, we performed handwriting generation tasks using the IAM Online Handwriting Database [Liwicki and Bunke, 2005]. IAM-OnDB consists of handwritten lines collected from 221 different writers. Given the input character string, the goal is to predict a sequence of x and y pen co-ordinates of the corresponding handwriting line on the whiteboard. There are, in total, 12179 handwriting line sequences. The input string is typically more than 25 characters and the average handwriting line has a length around 700.
1607.06450 | 31 | We used the same model architecture as in Section (5.2) of Graves [2013]. The model architecture consists of three hidden layers of 400 LSTM cells, which produce 20 bivariate Gaussian mixture components at the output layer, and a size 3 input layer. The character sequence was encoded with one-hot vectors, and hence the window vectors were size 57. A mixture of 10 Gaussian functions was used for the window parameters, requiring a size 30 parameter vector. The total number of weights was increased to approximately 3.7M. The model is trained using mini-batches of size 8 and the Adam [Kingma and Ba, 2014] optimizer.
The combination of small mini-batch size and very long sequences makes it important to have very stable hidden dynamics. Figure 5 shows that layer normalization converges to a log likelihood comparable to that of the baseline model, but much faster.
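Because the statistics are computed per training case and per time step, the recurrence is insensitive to the mini-batch size and keeps the summed inputs at a fixed scale over very long sequences. A minimal sketch (ours; the sizes only roughly follow the setup above, and the random weights and simple tanh recurrence are assumptions) is shown below.

```python
import numpy as np

def layer_norm(a, eps=1e-5):
    # statistics over the units of the layer, for one case, at one time step
    return (a - a.mean()) / (a.std() + eps)

rng = np.random.default_rng(0)
H, T = 400, 700                                    # hidden size and sequence length, roughly as above
Wh = 2.0 * rng.normal(size=(H, H)) / np.sqrt(H)    # deliberately mis-scaled recurrent weights
Wx = rng.normal(size=(3, H)) / np.sqrt(3)          # 3-dimensional pen input, as in the task
xs = rng.normal(size=(T, 3))                       # a single long input stream (mini-batch of 1)

h = np.zeros(H)
for x in xs:
    a = x @ Wx + h @ Wh                 # summed inputs at this time step
    a_hat = layer_norm(a)               # re-centred and re-scaled at every single step
    h = np.tanh(a_hat)

# raw scale is set by the weight magnitudes; the normalized scale stays close to 1
print(round(float(a.std()), 2), round(float(a_hat.std()), 2))
```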
[Figure 6 plots: train negative log likelihood and test error vs. epoch for LayerNorm, BatchNorm, and the unnormalized baseline at batch sizes 128 and 4.]
Figure 6: Permutation invariant MNIST 784-1000-1000-10 model negative log likelihood and test error with layer normalization and batch normalization. (Left) The models are trained with batch size of 128. (Right) The models are trained with batch size of 4.
# 6.6 Permutation invariant MNIST
In addition to RNNs, we investigated layer normalization in feed-forward networks. We show how layer normalization compares with batch normalization on the well-studied permutation invariant MNIST classification problem. From the previous analysis, layer normalization is invariant to input re-scaling, which is desirable for the internal hidden layers. But this is unnecessary for the logit outputs, where the prediction confidence is determined by the scale of the logits. We only apply layer normalization to the fully-connected hidden layers, excluding the last softmax layer.
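A minimal sketch of this setup, with layer normalization on the two fully-connected hidden layers and none on the softmax logits, is given below (ours; the ReLU non-linearity, the initialization, and the absence of training code are assumptions made for brevity).

```python
import numpy as np

def layer_norm(a, g, b, eps=1e-5):
    mu = a.mean(axis=-1, keepdims=True)
    sigma = a.std(axis=-1, keepdims=True)
    return g * (a - mu) / (sigma + eps) + b

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
sizes = [784, 1000, 1000, 10]                    # the 784-1000-1000-10 model
Ws = [rng.normal(size=(m, n)) / np.sqrt(m) for m, n in zip(sizes[:-1], sizes[1:])]
gains = [np.ones(n) for n in sizes[1:-1]]        # LN parameters for hidden layers only
biases = [np.zeros(n) for n in sizes[1:-1]]

def forward(x):
    h = x
    for W, g, b in zip(Ws[:-1], gains, biases):  # hidden layers: normalize, then ReLU
        h = relu(layer_norm(h @ W, g, b))
    return softmax(h @ Ws[-1])                   # the logits are left un-normalized

x = rng.normal(size=(4, 784))                    # any mini-batch size works identically
print(forward(x).shape)                          # (4, 10)
```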
All the models were trained using 55000 training data points and the Adam [Kingma and Ba, 2014] optimizer. For the smaller batch size, the variance term for batch normalization is computed using the unbiased estimator. The experimental results in Figure 6 highlight that layer normalization is robust to the batch size and exhibits faster training convergence compared to batch normalization applied to all layers.
# 6.7 Convolutional Networks
We have also experimented with convolutional neural networks. In our preliminary experiments, we observed that layer normalization offers a speedup over the baseline model without normalization, but batch normalization outperforms the other methods. With fully connected layers, all the hidden units in a layer tend to make similar contributions to the final prediction, and re-centering and re-scaling the summed inputs to a layer works well. However, the assumption of similar contributions is no longer true for convolutional neural networks. The large number of hidden units whose receptive fields lie near the boundary of the image are rarely turned on and thus have very different statistics from the rest of the hidden units within the same layer. We think further research is needed to make layer normalization work well in ConvNets.
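For completeness, one common way to apply layer normalization to a convolutional feature map is to normalize each example over all channels and spatial positions jointly, with a per-channel gain and bias; this convention is our assumption and not necessarily what was used in these preliminary experiments.

```python
import numpy as np

def layer_norm_conv(feat, gain, bias, eps=1e-5):
    """Layer-normalize a conv feature map per example, over (C, H, W) jointly."""
    # feat: (N, C, H, W); one mean/variance per example, shared by every unit.
    mu = feat.mean(axis=(1, 2, 3), keepdims=True)
    sigma = feat.std(axis=(1, 2, 3), keepdims=True)
    g = gain.reshape(1, -1, 1, 1)
    b = bias.reshape(1, -1, 1, 1)
    return g * (feat - mu) / (sigma + eps) + b

rng = np.random.default_rng(0)
feat = rng.normal(size=(2, 16, 8, 8))            # a batch of 2 feature maps
out = layer_norm_conv(feat, np.ones(16), np.zeros(16))
print(out.shape)                                  # (2, 16, 8, 8)
```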
# 7 Conclusion
In this paper, we introduced layer normalization to speed up the training of neural networks. We provided a theoretical analysis that compared the invariance properties of layer normalization with batch normalization and weight normalization. We showed that layer normalization is invariant to per training-case feature shifting and scaling.
Empirically, we showed that recurrent neural networks benefit the most from the proposed method, especially for long sequences and small mini-batches.
# Acknowledgments
This research was funded by grants from NSERC, CFI, and Google.
# References
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In NIPS, 2012.
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE, 2012.
Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In NIPS, 2012.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
1607.06450 | 36 | C´esar Laurent, Gabriel Pereyra, Phil´emon Brakel, Ying Zhang, and Yoshua Bengio. Batch normalized recurrent neural networks. arXiv preprint arXiv:1510.01378, 2015.
Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595, 2015.
Tim Cooijmans, Nicolas Ballas, C´esar Laurent, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate train- ing of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
Behnam Neyshabur, Ruslan R Salakhutdinov, and Nati Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2413–2421, 2015.
1607.06450 | 37 | Shun-Ichi Amari. Natural gradient works efï¬ciently in learning. Neural computation, 1998.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. ICLR, 2016.
The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, et al. Theano: A python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. ECCV, 2014.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. EMNLP, 2014. | 1607.06450#37 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 38 | Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.
Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multi- modal neural language models. arXiv preprint arXiv:1411.2539, 2014.
D. Kingma and J. L. Ba. Adam: a method for stochastic optimization. ICLR, 2014. arXiv:1412.6980.
Liwei Wang, Yin Li, and Svetlana Lazebnik. Learning deep structure-preserving image-text embeddings. CVPR, 2016.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In NIPS, 2015. | 1607.06450#38 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 39 | Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efï¬cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV, 2015.
Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014, 2014.
Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, pages 115â124, 2005.
Minqing Hu and Bing Liu. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, 2004.
Bo Pang and Lillian Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In ACL, 2004. | 1607.06450#39 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 40 | Bo Pang and Lillian Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In ACL, 2004.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. Annotating expressions of opinions and emotions in lan- guage. Language resources and evaluation, 2005.
K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: a recurrent neural network for image generation. arXiv:1502.04623, 2015.
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, volume 6, page 622, 2011.
Marcus Liwicki and Horst Bunke. Iam-ondb-an on-line english sentence database acquired from handwritten text on a whiteboard. In ICDAR, 2005.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
# Supplementary Material
# Application of layer normalization to each experiment
This section describes how layer normalization is applied to each of the papers' experiments. For notational convenience, we define layer normalization as a function mapping LN : R^D → R^D with two sets of adaptive parameters, gains α and biases β:
LN(z; α, β) = ((z − μ)/σ) ⊙ α + β, (15) | 1607.06450#40 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 41 | LN(z; α, β) = ((z − μ)/σ) ⊙ α + β, (15)
μ = (1/D) Σ_{i=1}^{D} z_i,  σ = √((1/D) Σ_{i=1}^{D} (z_i − μ)²), (16)
where z_i is the i-th element of the vector z.
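A minimal NumPy sketch of the transform in Eq. (15)-(16) may help make the statistics concrete. Function and variable names are mine; I write the adaptive parameters as explicit `gain`/`bias` arrays (gains initialized to ones, biases to zeros, so the transform starts out as plain standardization), and I add a small `eps` for numerical stability, which is an implementation convenience rather than part of the equations.

```python
import numpy as np

def layer_norm(z, gain, bias, eps=1e-5):
    """Normalize a single summed-input vector z by its own mean and std
    (Eq. 15-16), then rescale and shift with per-unit gain and bias."""
    mu = z.mean()
    sigma = np.sqrt(((z - mu) ** 2).mean() + eps)  # eps guards against sigma == 0
    return gain * (z - mu) / sigma + bias

D = 8
gain, bias = np.ones(D), np.zeros(D)
z = 10.0 * np.random.randn(D) + 3.0      # arbitrary scale and shift of the inputs
out = layer_norm(z, gain, bias)
print(out.mean(), out.std())             # approximately 0 and 1 at initialization
```

Note that, unlike batch normalization, nothing here depends on other examples in a mini-batch, so the same computation is used at training and test time.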
# Teaching machines to read and comprehend and handwriting sequence generation
The basic LSTM equations used for these experiments are given by:
(f_t, i_t, o_t, g_t)ᵀ = W_h h_{t−1} + W_x x_t + b (17)
c_t = σ(f_t) ⊙ c_{t−1} + σ(i_t) ⊙ tanh(g_t) (18)
h_t = σ(o_t) ⊙ tanh(c_t) (19)
The version that incorporates layer normalization is modified as follows:
(f_t, i_t, o_t, g_t)ᵀ = LN(W_h h_{t−1}; α_1, β_1) + LN(W_x x_t; α_2, β_2) + b (20)
c_t = σ(f_t) ⊙ c_{t−1} + σ(i_t) ⊙ tanh(g_t) (21)
h_t = σ(o_t) ⊙ tanh(LN(c_t; α_3, β_3)) (22) | 1607.06450#41 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 42 | c_t = σ(f_t) ⊙ c_{t−1} + σ(i_t) ⊙ tanh(g_t) (21)
h_t = σ(o_t) ⊙ tanh(LN(c_t; α_3, β_3)) (22)
where αi, βi are the additive and multiplicative parameters, respectively. Each αi is initialized to a vector of zeros and each βi is initialized to a vector of ones.
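The sketch below shows one step of Eq. (20)-(22) in NumPy, assuming the `layer_norm` helper from the earlier snippet; function and argument names are mine, and the gain/bias pairs follow the stated initialization (multiplicative parameters as ones, additive ones as zeros).

```python
import numpy as np

def ln(z, gain, bias, eps=1e-5):
    return gain * (z - z.mean()) / np.sqrt(z.var() + eps) + bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ln_lstm_step(x_t, h_prev, c_prev, Wx, Wh, b, g1, b1, g2, b2, g3, b3):
    """Layer-normalized LSTM step: LN is applied separately to the
    hidden-to-hidden and input-to-hidden summed inputs (Eq. 20) and to
    the cell state before the output nonlinearity (Eq. 22)."""
    z = ln(Wh @ h_prev, g1, b1) + ln(Wx @ x_t, g2, b2) + b   # Eq. (20)
    f, i, o, g = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)        # Eq. (21)
    h = sigmoid(o) * np.tanh(ln(c, g3, b3))                  # Eq. (22)
    return h, c

# toy sizes for a quick smoke test
D_x, D_h = 5, 4
x_t = np.random.randn(D_x)
h_prev, c_prev = np.zeros(D_h), np.zeros(D_h)
Wx, Wh = np.random.randn(4 * D_h, D_x), np.random.randn(4 * D_h, D_h)
b = np.zeros(4 * D_h)
h, c = ln_lstm_step(x_t, h_prev, c_prev, Wx, Wh, b,
                    np.ones(4 * D_h), np.zeros(4 * D_h),
                    np.ones(4 * D_h), np.zeros(4 * D_h),
                    np.ones(D_h), np.zeros(D_h))
```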
# Order embeddings and skip-thoughts
These experiments utilize a variant of gated recurrent unit which is defined as follows:
(z_t, r_t)ᵀ = W_h h_{t−1} + W_x x_t (23)
ĥ_t = tanh(W x_t + σ(r_t) ⊙ (U h_{t−1})) (24)
h_t = (1 − σ(z_t)) h_{t−1} + σ(z_t) ĥ_t (25)
Layer normalization is applied as follows:
(z_t, r_t)ᵀ = LN(W_h h_{t−1}; α_1, β_1) + LN(W_x x_t; α_2, β_2) (26)
ĥ_t = tanh(LN(W x_t; α_3, β_3) + σ(r_t) ⊙ LN(U h_{t−1}; α_4, β_4)) (27)
h_t = (1 − σ(z_t)) h_{t−1} + σ(z_t) ĥ_t (28) | 1607.06450#42 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 43 | just as before, αi is initialized to a vector of zeros and each βi is initialized to a vector of ones.
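For completeness, here is the GRU variant of Eq. (26)-(28) in the same style; the main difference from the LSTM case is that LN is also applied inside the candidate activation, once to the input projection and once to the gated recurrent projection. Names, shapes, and the usage snippet are my own assumptions.

```python
import numpy as np

def ln(z, gain, bias, eps=1e-5):
    return gain * (z - z.mean()) / np.sqrt(z.var() + eps) + bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ln_gru_step(x_t, h_prev, Wx, Wh, W, U, params):
    """One layer-normalized GRU step (Eq. 26-28). `params` holds four
    (gain, bias) pairs, initialized to (ones, zeros) as in the text."""
    (g1, b1), (g2, b2), (g3, b3), (g4, b4) = params
    zr = ln(Wh @ h_prev, g1, b1) + ln(Wx @ x_t, g2, b2)        # Eq. (26)
    z, r = np.split(zr, 2)
    h_tilde = np.tanh(ln(W @ x_t, g3, b3) +
                      sigmoid(r) * ln(U @ h_prev, g4, b4))     # Eq. (27)
    return (1.0 - sigmoid(z)) * h_prev + sigmoid(z) * h_tilde  # Eq. (28)

H, D_x = 4, 5
pair = lambda n: (np.ones(n), np.zeros(n))
h = ln_gru_step(np.random.randn(D_x), np.zeros(H),
                np.random.randn(2 * H, D_x), np.random.randn(2 * H, H),
                np.random.randn(H, D_x), np.random.randn(H, H),
                [pair(2 * H), pair(2 * H), pair(H), pair(H)])
```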
# Modeling binarized MNIST using DRAW
The layer norm is only applied to the output of the LSTM hidden states in this experiment:
The version that incorporates layer normalization is modified as follows:
(f_t, i_t, o_t, g_t)ᵀ = W_h h_{t−1} + W_x x_t + b (29)
c_t = σ(f_t) ⊙ c_{t−1} + σ(i_t) ⊙ tanh(g_t) (30)
h_t = σ(o_t) ⊙ tanh(LN(c_t; α, β)) (31)
where α, β are the additive and multiplicative parameters, respectively. α is initialized to a vector of zeros and β is initialized to a vector of ones. | 1607.06450#43 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
# Learning the magnitude of incoming weights | 1607.06450#43 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 44 | # Learning the magnitude of incoming weights
We now compare how gradient descent updates change the magnitude of the equivalent weights between the normalized GLM and the original parameterization. The magnitude of the weights is explicitly parameterized using the gain parameter in the normalized model. Assume there is a gradient update that changes the norm of the weight vectors by δ_g. We can project the gradient updates to the weight vector for the normal GLM. The KL metric, i.e., how much the gradient update changes the model prediction, for the normalized model depends only on the magnitude of the prediction error. Specifically,
under batch normalization:
ds² = ½ vec([0, 0, δ_g]ᵀ)ᵀ F̄(vec([W, b, g]ᵀ)) vec([0, 0, δ_g]ᵀ) = (δ_gᵀ / (2σ²)) E_{x∼P(x)}[Cov(y | x)] δ_g (32)
Under layer normalization: | 1607.06450#44 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 45 | Under layer normalization:
ds² = ½ vec([0, 0, δ_g]ᵀ)ᵀ F̄(vec([W, b, g]ᵀ)) vec([0, 0, δ_g]ᵀ) = (1/(2σ²)) δ_gᵀ E_{x∼P(x)}[C] δ_g, where the (i, j) entry of the matrix C is Cov(y_i, y_j | x)(a_i − μ)(a_j − μ) (33)
Under weight normalization:
ds² = ½ vec([0, 0, δ_g]ᵀ)ᵀ F̄(vec([W, b, g]ᵀ)) vec([0, 0, δ_g]ᵀ) = ½ δ_gᵀ E_{x∼P(x)}[C′] δ_g, where the (i, j) entry of the matrix C′ is Cov(y_i, y_j | x) a_i a_j / (‖w_i‖₂ ‖w_j‖₂) (34) | 1607.06450#45 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.06450 | 46 | Whereas, the KL metric in the standard GLM is related to its activities a_i = w_iᵀ x, that is, it depends on both its current weights and the input data. We project the gradient update to the gain parameter δ_{g_i} of the i-th neuron onto its weight vector as δ_{g_i} w_i/‖w_i‖₂ in the standard GLM model:
½ vec([δ_{g_i} w_iᵀ/‖w_i‖₂, 0, δ_{g_j} w_jᵀ/‖w_j‖₂, 0]ᵀ)ᵀ F([w_iᵀ, b_i, w_jᵀ, b_j]ᵀ) vec([δ_{g_i} w_iᵀ/‖w_i‖₂, 0, δ_{g_j} w_jᵀ/‖w_j‖₂, 0]ᵀ) = E_{x∼P(x)}[Cov(y_i, y_j | x) a_i a_j / (‖w_i‖₂ ‖w_j‖₂)] δ_{g_i} δ_{g_j} (35)
The batch normalized and layer normalized models are therefore more robust to the scaling of the input and its parameters than the standard model. | 1607.06450#46 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive.
One way to reduce the training time is to normalize the activities of the
neurons. A recently introduced technique called batch normalization uses the
distribution of the summed input to a neuron over a mini-batch of training
cases to compute a mean and variance which are then used to normalize the
summed input to that neuron on each training case. This significantly reduces
the training time in feed-forward neural networks. However, the effect of batch
normalization is dependent on the mini-batch size and it is not obvious how to
apply it to recurrent neural networks. In this paper, we transpose batch
normalization into layer normalization by computing the mean and variance used
for normalization from all of the summed inputs to the neurons in a layer on a
single training case. Like batch normalization, we also give each neuron its
own adaptive bias and gain which are applied after the normalization but before
the non-linearity. Unlike batch normalization, layer normalization performs
exactly the same computation at training and test times. It is also
straightforward to apply to recurrent neural networks by computing the
normalization statistics separately at each time step. Layer normalization is
very effective at stabilizing the hidden state dynamics in recurrent networks.
Empirically, we show that layer normalization can substantially reduce the
training time compared with previously published techniques. | http://arxiv.org/pdf/1607.06450 | Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton | stat.ML, cs.LG | null | null | stat.ML | 20160721 | 20160721 | [
{
"id": "1605.02688"
},
{
"id": "1502.04623"
},
{
"id": "1603.09025"
},
{
"id": "1602.07868"
},
{
"id": "1510.01378"
},
{
"id": "1512.02595"
}
] |
1607.01759 | 0 | 2016
arXiv:1607.01759v3 [cs.CL] 9 Aug 2016
# Bag of Tricks for Efficient Text Classification
# Armand Joulin Edouard Grave Piotr Bojanowski Tomas Mikolov
Facebook AI Research {ajoulin,egrave,bojanowski,tmikolov}@fb.com
# Abstract
This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute. | 1607.01759#0 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 1 | In this work, we explore ways to scale these baselines to very large corpus with a large output space, in the context of text classification. Inspired by the recent work in efficient word representation learning (Mikolov et al., 2013; Levy et al., 2015), we show that linear models with a rank constraint and a fast loss approximation can train on a billion words within ten minutes, while achieving performance on par with the state-of-the-art. We evaluate the quality of our approach fastText1 on two different tasks, namely tag prediction and sentiment analysis.
# 1 Introduction
# 2 Model architecture
Text classification is an important task in Natural Language Processing with many applications, such as web search, information retrieval, ranking and document classification (Deerwester et al., 1990; Pang and Lee, 2008). Recently, models based on neural networks have become increasingly popular (Zhang and LeCun, 2015; Conneau et al., 2016). While these models achieve very good performance in practice, they tend to be relatively slow both at train and test time, limiting their use on very large datasets. | 1607.01759#1 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 2 | Linear classifiers are often considered as strong baselines for text classification problems (Joachims, 1998; McCallum and Nigam, 1998; Fan et al., 2008). Despite their simplicity, they often obtain state-of-the-art performances if the right features are used (Wang and Manning, 2012). They also have the potential to scale to very large corpus (Agarwal et al., 2014).
A simple and efficient baseline for sentence classification is to represent sentences as bag of words (BoW) and train a linear classifier, e.g., a logistic regression or an SVM (Joachims, 1998; Fan et al., 2008). However, linear classifiers do not share parameters among features and classes. This possibly limits their generalization in the context of large output space where some classes have very few examples. Common solutions to this problem are to factorize the linear classifier (Schutze, 1992; Mikolov et al., 2013) or to use multilayer neural networks (Collobert and Weston, 2008; Zhang et al., 2015). | 1607.01759#2 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 3 | Figure 1 shows a simple linear model with rank constraint. The first weight matrix A is a look-up table over the words. The word representations are then averaged into a text representation, which is in turn fed to a linear classifier. The text representa
# 1https://github.com/facebookresearch/fastText
Figure 1: Model architecture of fastText for a sentence with N ngram features x1, . . . , xN . The features are embedded and averaged to form the hidden variable.
tion is a hidden variable which can potentially be reused. This architecture is similar to the cbow model of Mikolov et al. (2013), where the middle word is replaced by a label. We use the softmax function f to compute the probability distribution over the predefined classes. For a set of N documents, this leads to minimizing the negative log-likelihood over the classes:
− (1/N) Σ_{n=1}^{N} y_n log(f(B A x_n)), | 1607.01759#3 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 4 | where xn is the normalized bag of features of the n-th document, yn the label, A and B the weight matrices. This model is trained asynchronously on multiple CPUs using stochastic gradient descent and a linearly decaying learning rate.
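A minimal single-threaded sketch of this architecture is given below: embedded features (rows of A) are averaged into the hidden representation, mapped by B, and trained with SGD on the negative log-likelihood under a linearly decaying learning rate. The class name, initialization, and a plain (non-hierarchical) softmax are my own simplifications of what the paper describes, not its actual implementation.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

class BowClassifier:
    """Rank-constrained linear text classifier: averaged feature embeddings
    followed by a linear layer and a (dense) softmax."""
    def __init__(self, vocab_size, dim, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.uniform(-1.0 / dim, 1.0 / dim, (vocab_size, dim))  # feature embeddings
        self.B = np.zeros((n_classes, dim))                             # output weights

    def hidden(self, feature_ids):
        return self.A[feature_ids].mean(axis=0)       # averaged text representation

    def predict_proba(self, feature_ids):
        return softmax(self.B @ self.hidden(feature_ids))

    def sgd_step(self, feature_ids, label, lr):
        h = self.hidden(feature_ids)
        p = softmax(self.B @ h)
        loss = -np.log(p[label] + 1e-12)
        d = p.copy()
        d[label] -= 1.0                               # gradient of the NLL w.r.t. the scores
        grad_h = self.B.T @ d
        self.B -= lr * np.outer(d, h)
        np.add.at(self.A, feature_ids, -lr * grad_h / len(feature_ids))
        return loss

# toy usage with a linearly decaying learning rate
model = BowClassifier(vocab_size=100, dim=10, n_classes=3)
data = [([1, 5, 7], 0), ([2, 2, 9], 1), ([4, 8], 2)]
n_updates, lr0 = 300, 0.1
for t in range(n_updates):
    ids, y = data[t % len(data)]
    model.sgd_step(ids, y, lr0 * (1.0 - t / n_updates))
```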
# 2.1 Hierarchical softmax
When the number of classes is large, computing the linear classifier is computationally expensive. More precisely, the computational complexity is O(kh) where k is the number of classes and h the dimension of the text representation. In order to improve our running time, we use a hierarchical softmax (Goodman, 2001) based on the Huffman coding tree (Mikolov et al., 2013). During training, the computational complexity drops to O(h log2(k)).
The hierarchical softmax is also advantageous at test time when searching for the most likely class. Each node is associated with a probability that is the probability of the path from the root to that node. If the node is at depth l + 1 with parents n1, . . . , nl, its probability is | 1607.01759#4 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
l P (nl+1) = Y i=1 P (ni). | 1607.01759#4 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
P(n_{l+1}) = ∏_{i=1}^{l} P(n_i).
This means that the probability of a node is always lower than the one of its parent. Exploring the tree with a depth first search and tracking the maximum probability among the leaves allows us to discard any branch associated with a small probability. In practice, we observe a reduction of the complexity to O(h log2(k)) at test time. This approach is further extended to compute the T-top targets at the cost of O(log(T)), using a binary heap.
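The pruned depth-first search described above can be sketched as follows. This is an illustration of the idea only: the tree layout, the sigmoid branch scores, and all names are assumptions of mine rather than the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Node:
    """Inner node of a binary (Huffman-style) tree; `w` scores the left branch.
    Leaves carry a class label instead of children."""
    def __init__(self, w=None, left=None, right=None, label=None):
        self.w, self.left, self.right, self.label = w, left, right, label

def best_class(node, h, prob=1.0, best=(None, 0.0)):
    """DFS that tracks the best leaf probability found so far and prunes any
    branch whose path probability already falls below it (a node's probability
    never exceeds its parent's)."""
    if prob <= best[1]:
        return best
    if node.label is not None:
        return (node.label, prob)
    p_left = sigmoid(node.w @ h)
    best = best_class(node.left, h, prob * p_left, best)
    best = best_class(node.right, h, prob * (1.0 - p_left), best)
    return best

rng = np.random.default_rng(0)
h = rng.standard_normal(5)                     # hidden text representation
leaf = lambda c: Node(label=c)
tree = Node(rng.standard_normal(5),
            Node(rng.standard_normal(5), leaf(0), leaf(1)),
            Node(rng.standard_normal(5), leaf(2), leaf(3)))
print(best_class(tree, h))                     # (predicted class, its probability)
```

Extending this to the T most likely classes amounts to keeping the running candidates in a bounded binary heap instead of a single best pair.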
# 2.2 N-gram features
Bag of words is invariant to word order but taking explicitly this order into account is often computationally very expensive. Instead, we use a bag of n-grams as additional features to capture some partial information about the local word order. This is very efficient in practice while achieving comparable results to methods that explicitly use the order (Wang and Manning, 2012).
We maintain a fast and memory efficient mapping of the n-grams by using the hashing trick (Weinberger et al., 2009) with the same hashing function as in Mikolov et al. (2011) and 10M bins if we only used bigrams, and 100M otherwise. | 1607.01759#5 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
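The hashing trick for n-gram features can be illustrated with a few lines of Python. The FNV-1a-style hash and the function name below are stand-ins of mine, not the actual hash of Mikolov et al. (2011); the point is only that n-grams are mapped to a fixed number of buckets whose ids index extra embedding rows.

```python
def ngram_hash_ids(tokens, n_buckets=10_000_000, max_n=2):
    """Map word n-grams (bigrams by default) to bucket ids that can be used
    as additional feature ids for the embedding lookup."""
    ids = []
    for n in range(2, max_n + 1):
        for i in range(len(tokens) - n + 1):
            gram = " ".join(tokens[i:i + n])
            h = 2166136261
            for byte in gram.encode("utf-8"):
                h = ((h ^ byte) * 16777619) & 0xFFFFFFFF   # FNV-1a over the n-gram bytes
            ids.append(h % n_buckets)
    return ids

print(ngram_hash_ids("the cat sat on the mat".split()))
```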
# 3 Experiments | 1607.01759#5 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 6 | # 3 Experiments
We evaluate fastText on two different tasks. First, we compare it to existing text classifiers on the problem of sentiment analysis. Then, we evaluate its capacity to scale to large output space on a tag prediction dataset. Note that our model could be implemented with the Vowpal Wabbit library,2 but we observe in practice, that our tailored implementation is at least 2-5× faster.
# 3.1 Sentiment analysis
the Datasets protocol same 8 the n-grams of Zhang et al. (2015). We report from Zhang et al. (2015), and TFIDF baselines level convolutional as well as model (char-CNN) of Zhang and LeCun (2015), the character based convolution recurrent net- work (char-CRNN) of (Xiao and Cho, 2016) and the very deep convolutional network (VDCNN) We also compare of Conneau et al. (2016). | 1607.01759#6 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 8 | Model AG Sogou DBP Yelp P. Yelp F. Yah. A. Amz. F. Amz. P. BoW (Zhang et al., 2015) ngrams (Zhang et al., 2015) ngrams TFIDF (Zhang et al., 2015) char-CNN (Zhang and LeCun, 2015) char-CRNN (Xiao and Cho, 2016) VDCNN (Conneau et al., 2016) 88.8 92.0 92.4 87.2 91.4 91.3 92.9 97.1 97.2 95.1 95.2 96.8 96.6 98.6 98.7 98.3 98.6 98.7 92.2 95.6 95.4 94.7 94.5 95.7 58.0 56.3 54.8 62.0 61.8 64.7 68.9 68.5 68.5 71.2 71.7 73.4 54.6 54.3 52.4 59.5 59.2 63.0 90.4 92.0 91.5 94.5 94.1 95.7 fastText, h = 10 fastText, h = 10, bigram 91.5 92.5 93.9 96.8 98.1 98.6 93.8 95.7 60.4 63.9 72.0 72.3 55.8 | 1607.01759#8 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 10 | Table 1: Test accuracy [%] on sentiment datasets. FastText has been run with the same parameters for all the datasets. It has 10 hidden units and we evaluate it with and without bigrams. For char-CNN, we show the best reported numbers without data augmentation.
Zhang and LeCun (2015) Conneau et al. (2016) fastText small char-CNN big char-CNN depth=9 depth=17 depth=29 AG Sogou DBpedia Yelp P. Yelp F. Yah. A. Amz. F. Amz. P. 1h - 2h - - 8h 2d 2d 3h - 5h - - 1d 5d 5d 24m 25m 27m 28m 29m 1h 2h45 2h45 37m 41m 44m 43m 45m 1h33 4h20 4h25 51m 56m 1h 1h09 1h12 2h 7h 7h 1s 7s 2s 3s 4s 5s 9s 10s | 1607.01759#10 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 11 | Table 2: Training time for a single epoch on sentiment analysis datasets compared to char-CNN and VDCNN.
We also compare to Tang et al. (2015), following their evaluation protocol. We report their main baselines as well as their two approaches based on recurrent networks (Conv-GRNN and LSTM-GRNN).
Results. We present the results in Figure 1. We use 10 hidden units and run fastText for 5 epochs with a learning rate selected on a valida- tion set from {0.05, 0.1, 0.25, 0.5}. On this task, adding bigram information improves the perfor- mance by 1-4%. Overall our accuracy is slightly better than char-CNN and char-CRNN and, a bit worse than VDCNN. Note that we can increase the accuracy slightly by using more n-grams, for example with trigrams, the performance on Sogou goes up to 97.1%. Finally, Figure 3 shows that our method is competitive with the methods pre- sented in Tang et al. (2015). We tune the hyper- parameters on the validation set and observe that using n-grams up to 5 leads to the best perfor- mance. Unlike Tang et al. (2015), fastText does not use pre-trained word embeddings, which can be explained the 1% difference in accuracy. | 1607.01759#11 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 13 | Training time. Both char-CNN and VDCNN are trained on a NVIDIA Tesla K40 GPU, while our models are trained on a CPU using 20 threads. Ta- ble 2 shows that methods using convolutions are sev- eral orders of magnitude slower than fastText. While it is possible to have a 10Ã speed up for char-CNN by using more recent CUDA implemen- tations of convolutions, fastText takes less than a minute to train on these datasets. The GRNNs method of Tang et al. (2015) takes around 12 hours per epoch on CPU with a single thread. Our speedInput Prediction Tags taiyoucon 2011 digitals: individuals digital pho- tos from the anime convention taiyoucon 2011 in mesa, arizona. if you know the model and/or the character, please comment. #cosplay #24mm #anime #animeconvention #arizona #canon #con #convention #cos #cosplay #costume #mesa #play #taiyou #taiyoucon 2012 twin cities pride 2012 twin cities pride pa- rade #minneapolis #2012twincitiesprideparade neapolis #mn #usa #min- beagle enjoys the snowfall #snow #2007 #beagle #hillsboro | 1607.01759#13 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 15 | Table 4: Examples from the validation set of YFCC100M dataset obtained with fastText with 200 hidden units and bigrams. We show a few correct and incorrect tag predictions.
up compared to neural network based methods increases with the size of the dataset, going up to at least a 15,000× speed-up.
# 3.2 Tag prediction
Dataset and baselines. To test scalability of our approach, further evaluation is carried on (Thomee et al., 2016) the YFCC100M dataset which consists of almost 100M images with cap- tions, titles and tags. We focus on predicting the tags according to the title and caption (we do not use the images). We remove the words and tags occurring less than 100 times and split the data into a train, validation and test set. The train set contains 91,188,648 examples (1.5B tokens). The validation has 930,497 examples and the test set 543,424. The vocabulary size is 297,141 and there are 312,116 unique tags. We will release a script that recreates this dataset so that our numbers could be reproduced. We report precision at 1. | 1607.01759#15 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
We consider a frequency-based baseline which predicts the most frequent tag. We also compare with Tagspace (Weston et al., 2014), which is a tag prediction model similar to ours, but based on the Wsabie model of Weston et al. (2011). While the Tagspace model is described using convolutions, we consider the linear version, which achieves comparable performance but is much faster.
Model prec@1 Running time Train Test Freq. baseline Tagspace, h = 50 Tagspace, h = 200 2.2 30.1 35.6 - 3h8 5h32 - 6h 15h fastText, h = 50 31.2 fastText, h = 50, bigram 36.7 fastText, h = 200 41.1 fastText, h = 200, bigram 46.1 6m40 7m47 10m34 13m38 48s 50s 1m29 1m37
Table 5: Prec@1 on the test set for tag prediction on YFCC100M. We also report the training time and test time. Test time is reported for a single thread, while training uses 20 threads for both models. | 1607.01759#16 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
and 200. Both models achieve a similar performance with a small hidden layer, but adding bigrams gives us a significant boost in accuracy. At test time, Tagspace needs to compute the scores for all the classes which makes it relatively slow, while our fast inference gives a significant speed-up when the number of classes is large (more than 300K here). Overall, we are more than an order of magnitude faster to obtain a model with a better quality. The speedup of the test phase is even more significant (a 600× speedup). Table 4 shows some qualitative examples.
Results and training time. Table 5 presents a comparison of fastText and the baselines. We run fastText for 5 epochs and compare it to Tagspace for two sizes of the hidden layer, i.e., 50
# 4 Discussion and conclusion
In this work, we propose a simple baseline method for text classiï¬cation. Unlike unsupervisedly trained word vectors from word2vec, our word features can | 1607.01759#17 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 18 | In this work, we propose a simple baseline method for text classiï¬cation. Unlike unsupervisedly trained word vectors from word2vec, our word features can
be averaged together to form good sentence representations. In several tasks, fastText obtains performance on par with recently proposed methods inspired by deep learning, while being much faster. Although deep neural networks have in theory much higher representational power than shallow models, it is not clear if simple text classification problems such as sentiment analysis are the right ones to evaluate them. We will publish our code so that the research community can easily build on top of our work.
Acknowledgement. We thank Gabriel Synnaeve, Hervé Jégou, Jason Weston and Léon Bottou for their help and comments. We also thank Alexis Conneau, Duyu Tang and Zichao Zhang for providing us with information about their methods.
# References | 1607.01759#18 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 19 | # References
[Agarwal et al.2014] Alekh Agarwal, Olivier Chapelle, Miroslav Dudík, and John Langford. 2014. A reliable effective terascale linear learning system. JMLR. [Collobert and Weston2008] Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML.
[Conneau et al.2016] Alexis Conneau, Holger Schwenk, Loïc Barrault, and Yann Lecun. 2016. Very deep convolutional networks for natural language processing. arXiv preprint arXiv:1606.01781.
[Deerwester et al.1990] Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American society for information science.
[Fan et al.2008] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. Li- blinear: A library for large linear classiï¬cation. JMLR. [Goodman2001] Joshua Goodman. 2001. Classes for fast | 1607.01759#19 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 20 | maximum entropy training. In ICASSP.
[Joachims1998] Thorsten Joachims. 1998. Text catego- rization with support vector machines: Learning with many relevant features. Springer.
[Kim2014] Yoon Kim. 2014. Convolutional neural net- works for sentence classiï¬cation. In EMNLP.
[Levy et al.2015] Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. TACL.
[McCallum and Nigam1998] Andrew McCallum and Ka- mal Nigam. 1998. A comparison of event models for
naive bayes text classiï¬cation. In AAAI workshop on learning for text categorization.
[Mikolov et al.2011] Tom´aËs Mikolov, Anoop Deoras, Daniel Povey, Luk´aËs Burget, and Jan ËCernock`y. 2011. Strategies for training large scale neural network lan- guage models. In Workshop on Automatic Speech Recognition and Understanding. IEEE. | 1607.01759#20 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
Our experiments show that our fast text classifier fastText is often on par
with deep learning classifiers in terms of accuracy, and many orders of
magnitude faster for training and evaluation. We can train fastText on more
than one billion words in less than ten minutes using a standard multicore~CPU,
and classify half a million sentences among~312K classes in less than a minute. | http://arxiv.org/pdf/1607.01759 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov | cs.CL | null | null | cs.CL | 20160706 | 20160809 | [
{
"id": "1606.01781"
},
{
"id": "1607.01759"
},
{
"id": "1502.01710"
},
{
"id": "1602.00367"
}
] |
1607.01759 | 21 | [Mikolov et al.2013] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efï¬cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval.
[Schutze1992] Hinrich Schutze. 1992. Dimensions of meaning. In Supercomputing.
[Tang et al.2015] Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classiï¬cation. In EMNLP. [Thomee et al.2016] Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Dou- 2016. glas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. vol- ume 59, pages 64â73. ACM.
[Wang and Manning2012] Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classiï¬cation. In ACL. | 1607.01759#21 | Bag of Tricks for Efficient Text Classification | This paper explores a simple and efficient baseline for text classification.
1607.00036 | 0 |
# Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes
Caglar Gulcehre^1, Sarath Chandar^1, Kyunghyun Cho^2, Yoshua Bengio^1
^1 University of Montreal, [email protected]
^2 New York University, [email protected]
Keywords: neural networks, memory, neural Turing machines, natural language processing
# Abstract
1607.00036 | 1 |
We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors: a content vector and an address vector. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on the Facebook bAbI tasks using both a feedforward and a GRU controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We have done extensive analysis of our model and different variations of the NTM on the bAbI tasks. We also provide further experimental results on sequential pMNIST, Stanford Natural Language Inference, associative recall and copy tasks.
# 1 Introduction
1607.00036 | 2 |
Designing general-purpose learning algorithms is one of the long-standing goals of artificial intelligence. Despite the success of deep learning in this area (see, e.g., (Goodfellow et al., 2016)), there is still a set of complex tasks that are not well addressed by conventional neural network based models. Those tasks often require a neural network to be equipped with an explicit, external memory in which a larger, potentially unbounded, set of facts needs to be stored. They include, but are not limited to, episodic question-answering (Weston et al., 2015b; Hermann et al., 2015; Hill et al., 2015), compact algorithms (Zaremba et al., 2015), dialogue (Serban et al., 2016; Vinyals and Le, 2015) and video caption generation (Yao et al., 2015).
1607.00036 | 3 |
Recently, two promising neural-network-based approaches have been proposed for this type of task. Memory networks (Weston et al., 2015b) explicitly store all the facts, or information, available for each episode in an external memory (as continuous vectors) and use an attention-based mechanism to index them when returning an output. On the other hand, neural Turing machines (NTM, (Graves et al., 2014)) read each fact in an episode and decide whether to read the fact, write it to the external, differentiable memory, or do both.
A crucial difference between these two models is that the memory network does not have a mechanism to modify the content of the external memory, while the NTM does. In practice, this leads to easier learning in the memory network, which in turn has resulted in it being used more in realistic tasks (Bordes et al., 2015; Dodge et al., 2015). On the contrary, the NTM has mainly been tested on a series of small-scale, carefully-crafted tasks such as copy and associative recall. However, the NTM is more expressive, precisely because it can store and modify the internal state of the network as it processes an episode, and we were able to use it without any modifications to the model for different tasks.
1607.00036 | 4 | The original NTM supports two modes of addressing (which can be used simultaneously): content-based and location-based addressing. We notice that the location-based strategy is based on linear addressing: the distance between each pair of consecutive memory cells is fixed to a constant. We address this limitation in this paper by introducing a learnable address vector for each memory cell of the NTM together with a least recently used memory addressing mechanism, and we call this variant a dynamic neural Turing machine (D-NTM).
We evaluate the proposed D-NTM on the full set of Facebook bAbI tasks (Weston et al., 2015b) using either continuous, differentiable attention or discrete, non-differentiable attention (Zaremba and Sutskever, 2015) as the addressing strategy. Our experiments reveal that it is possible to use the discrete, non-differentiable attention mechanism, and in fact, the D-NTM with the discrete attention and a GRU controller outperforms the one with the continuous attention. We also provide results on sequential pMNIST, the Stanford Natural Language Inference (SNLI) task, and the algorithmic tasks proposed by (Graves et al., 2014) in order to investigate the ability of our model when dealing with long-term dependencies.
We summarize our contributions in this paper as follows:
1607.00036 | 5 |
⢠We propose a variation of neural Turing machine called a dynamic neural Turing machine (D-NTM) which employs a learnable and location-based addressing.
⢠We demonstrate the application of neural Turing machines on more natural and less toyish tasks, episodic question-answering, natural language entailment, digit classiï¬cation from the pixes besides the toy tasks. We provide a detailed analysis of our model on the bAbI task.
⢠We propose to use the discrete attention mechanism and empirically show that, it can outperform the continuous attention based addressing for episodic QA task.
⢠We propose a curriculum strategy for our model with the feedforward controller and discrete attention that improves our results signiï¬cantly.
In this paper, we avoid doing architecture engineering for each task we work on and focus on the model's overall performance on each task without task-specific modifications to the model. In that respect, we mainly compare our model against similar models such as the NTM and LSTM without task-specific modifications. This helps us to better understand the model's failures.
1607.00036 | 6 | The remainder of this article is organized as follows. In Section 2, we describe the architecture of the dynamic neural Turing machine (D-NTM). In Section 3, we describe the proposed addressing mechanism for the D-NTM. Section 4 explains the training procedure. In Section 5, we briefly discuss some related models. In Section 6, we report results on the episodic question-answering task. In Sections 7, 8, and 9 we discuss the results on sequential MNIST, SNLI, and algorithmic toy tasks, respectively. Section 10 concludes the article.
# 2 Dynamic Neural Turing Machine
The proposed dynamic neural Turing machine (D-NTM) extends the neural Turing machine (NTM, (Graves et al., 2014)) which has a modular design. The D-NTM consists of two main modules: a controller, and a memory. The controller, which is often implemented as a recurrent neural network, issues a command to the memory so as to read, write to and erase a subset of memory cells.
# 2.1 Memory
D-NTM consists of an external memory M_t, where each memory cell i in M_t[i] is partitioned into two parts: a trainable address vector A_t[i] ∈ R^{1×d_a} and a content vector C_t[i] ∈ R^{1×d_c}:
1607.00036 | 7 | M_t[i] = [A_t[i]; C_t[i]].
The memory M_t consists of N such memory cells and is hence represented by a rectangular matrix M_t ∈ R^{N×(d_c+d_a)}:
M_t = [A_t; C_t].
The first part A_t ∈ R^{N×d_a} is a learnable address matrix, and the second part C_t ∈ R^{N×d_c} is a content matrix. The address part A_t is considered a model parameter that is updated during training. During inference, the address part is not overwritten by the controller and remains constant. On the other hand, the content part C_t is both read and written by the controller, both during training and inference. At the beginning of each episode, the content part of the memory is refreshed to be an all-zero matrix, C_0 = 0. This introduction of a learnable address portion for each memory cell allows the model to learn sophisticated location-based addressing strategies.
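To make the memory layout concrete, the following is a minimal NumPy sketch of this split into a trainable address part and an episode-specific content part. The class name `DNTMMemory`, the method names, and the dimensions are our own illustrative choices, not from the paper; in a real implementation the address matrix would be a parameter updated by the optimizer.

```python
import numpy as np

class DNTMMemory:
    """Minimal sketch of the D-NTM memory: N cells, each cell is [address; content]."""

    def __init__(self, n_cells=128, d_address=16, d_content=32, seed=0):
        rng = np.random.default_rng(seed)
        # Learnable address part A: stays fixed within an episode, trained by SGD.
        self.A = 0.1 * rng.standard_normal((n_cells, d_address))
        # Content part C: read and written by the controller, reset every episode.
        self.C = np.zeros((n_cells, d_content))

    def reset_content(self):
        # C_0 = 0 at the beginning of each episode; A is left untouched.
        self.C = np.zeros_like(self.C)

    def full(self):
        # M_t = [A_t; C_t], an N x (d_a + d_c) matrix.
        return np.concatenate([self.A, self.C], axis=1)

mem = DNTMMemory()
print(mem.full().shape)  # (128, 48)
```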
# 2.2 Controller
At each timestep t, the controller (1) receives an input value x_t, (2) addresses and reads the memory and creates the content vector r_t, (3) erases/writes a portion of the memory, (4) updates its own hidden state h_t, and (5) outputs a value y_t (if needed). In this
1607.00036 | 8 |
paper, we use both a gated recurrent unit (GRU, (Cho et al., 2014)) and a feedforward controller to implement the controller, such that for a GRU controller
h_t = GRU(x_t, h_{t-1}, r_t)    (1)
and for a feedforward controller
h_t = σ(x_t, r_t).    (2)
# 2.3 Model Operation
At each timestep t, the controller receives an input value x_t. Then it generates the read weights w_t^r. Using the read weights, the content vector read from the memory, r_t ∈ R^{(d_a+d_c)×1}, is computed as
r_t = (M_t)^T w_t^r.    (3)
The hidden state of the controller, h_t, is conditioned on the memory content vector r_t, and based on this current hidden state of the controller the model predicts the output label y_t for the input.
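Concretely, the read step in Eq. (3) is a weighted sum of memory rows. Below is a small NumPy sketch with our own helper names; the softmax over arbitrary scores stands in for the full addressing mechanism of Section 3.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def read_memory(M, read_logits):
    """r_t = M_t^T w_t^r for a single read head.

    M:           (N, d_a + d_c) memory matrix [A; C]
    read_logits: (N,) unnormalized addressing logits z_t
    """
    w_r = softmax(read_logits)   # (N,) attention weights over memory cells
    r_t = M.T @ w_r              # (d_a + d_c,) read vector fed back to the controller
    return r_t, w_r

rng = np.random.default_rng(0)
r_t, w_r = read_memory(rng.standard_normal((128, 48)), rng.standard_normal(128))
print(r_t.shape, round(float(w_r.sum()), 6))  # (48,) 1.0
```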
1607.00036 | 9 | The controller also updates the memory by erasing the old content and writing new content into the memory. The controller computes three vectors: an erase vector e_t ∈ R^{d_c×1}, write weights w_t^w ∈ R^{N×1}, and a candidate memory content vector c̄_t ∈ R^{d_c×1}. These vectors are used to modify the memory. The erase vector is computed by a simple MLP which is conditioned on the hidden state of the controller h_t. The candidate memory content vector c̄_t is computed based on the current hidden state of the controller h_t ∈ R^{d_h×1} and the input of the controller, which is scaled by a scalar gate α_t. The α_t is a function of the hidden state and the input of the controller:
α_t = f(h_t, x_t),    (4)
c̄_t = ReLU(W_m h_t + α_t W_x x_t),    (5)
where W_m and W_x are trainable matrices and ReLU is the rectified linear activation function (Nair and Hinton, 2010). Given the erase, write and candidate memory content vectors (e_t, w_t^w, and c̄_t respectively), the memory matrix is updated by
1607.00036 | 10 | C_t[j] = (1 - e_t w_t^w[j]) ⊙ C_{t-1}[j] + w_t^w[j] c̄_t,    (6)
where the index j in C_t[j] denotes the j-th row of the content matrix C_t of the memory matrix M_t.
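A minimal sketch of the erase/write update in Eq. (6), again in NumPy with our own function name; the write weights, erase vector and candidate content would come from the controller as described above.

```python
import numpy as np

def write_memory(C_prev, w_w, e_t, c_bar):
    """C_t[j] = (1 - e_t * w_w[j]) ⊙ C_{t-1}[j] + w_w[j] * c_bar, applied to every row j.

    C_prev: (N, d_c) previous content matrix C_{t-1}
    w_w:    (N,)     write weights over memory cells
    e_t:    (d_c,)   erase vector with entries in [0, 1]
    c_bar:  (d_c,)   candidate content vector
    """
    erase_term = 1.0 - np.outer(w_w, e_t)   # (N, d_c): how much of each row survives
    write_term = np.outer(w_w, c_bar)       # (N, d_c): new content scaled by the weights
    return erase_term * C_prev + write_term

rng = np.random.default_rng(0)
C = np.zeros((128, 32))
e_t = 1.0 / (1.0 + np.exp(-rng.standard_normal(32)))   # a sigmoid-like erase vector
C = write_memory(C, np.eye(128)[3], e_t, rng.standard_normal(32))
print(bool(np.abs(C[3]).sum() > 0), bool(np.abs(C[0]).sum() == 0))  # True True
```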
No Operation (NOP) As found in (Joulin and Mikolov, 2015), an additional NOP operation can be useful, since the controller may not need to access the memory at every timestep. We model this situation by designating one memory cell as a NOP cell, which the controller accesses when it does not need to read from or write into the memory; reading from or writing into this memory cell is completely ignored.
We illustrate and elaborate more on the read and write operations of the D-NTM in Figure 1.
The read weights w_t^r and the write weights w_t^w are the most crucial parts of the model, since the controller decides where to read from and where to write into the memory by using them. We elaborate on this in the next section.
[Figure 1 illustration: story facts and the question are encoded by the controller, which computes read and write weights over memory cells that each pair an address with a content vector; see the caption below.]
1607.00036 | 11 | Figure 1: A graphical illustration of the proposed dynamic neural Turing machine with the recurrent controller. The controller receives the fact as a continuous vector encoded by a recurrent neural network and computes the read and write weights for addressing the memory. If the D-NTM automatically detects that a query has been received, it returns an answer and terminates.
# 3 Addressing Mechanism
Each of the address vectors (both read and write) is computed in a similar way. First, the controller computes a key vector:
k_t = W^k h_t + b^k.
Both for the read and the write operations, k_t ∈ R^{(d_a+d_c)×1}. W^k ∈ R^{(d_a+d_c)×N} and b^k ∈ R^{(d_a+d_c)×1} are the learnable weight matrix and bias, respectively, of k_t. Also, the sharpening factor β_t ∈ R, β_t ≥ 1, is computed as follows:
β_t = softplus(u_β^T h_t + b_β) + 1,    (7)
where u_β and b_β are the parameters of the sharpening factor β_t and softplus is defined as follows:
softplus(x) = log(exp(x) + 1).    (8)
Given the key k_t and the sharpening factor β_t, the logits for the address weights are then computed by
z_t[i] = β_t S(k_t, M_t[i]),    (9)
1607.00036 | 12 | where the similarity function is basically the cosine distance, defined such that S(x, y) ∈ R and 1 ≥ S(x, y) ≥ -1:
S(x, y) = (x · y) / (||x|| ||y|| + ε).
ε is a small positive value to avoid division by zero. We have used ε = 1e-7 in all our experiments. The address weight generation described in this section is the same as the content-based addressing mechanism proposed in (Graves et al., 2014).
# 3.1 Dynamic Least Recently Used Addressing
We introduce a memory addressing operation that can learn to put more emphasis on the least recently used (LRU) memory locations. As observed in (Santoro et al., 2016; Rae et al., 2016), we find it easier to learn the write operations with the use of LRU addressing.
1607.00036 | 13 | To learn an LRU based addressing, first we compute the exponentially moving averages of the logits (z_t) as v_t, computed as v_t = 0.1 v_{t-1} + 0.9 z_t. We rescale the accumulated v_t with γ_t, such that the controller adjusts how much the previously written memory locations should affect the attention weights of a particular time-step. Next, we subtract v_t from z_t in order to reduce the weights of previously read or written memory locations. γ_t is a shallow MLP with a scalar output and it is conditioned on the hidden state of the controller; it is parametrized with the parameters u_γ and b_γ:
γ_t = sigmoid(u_γ^T h_t + b_γ),    (10)
w_t = softmax(z_t - γ_t v_{t-1}).    (11)
1607.00036 | 14 |
This addressing method increases the weights of the least recently used rows of the memory. The magnitude of the influence of the least-recently used memory locations is learned and adjusted with γ_t. Our LRU addressing is dynamic due to the model's ability to switch between pure content-based addressing and LRU. During training, we do not backpropagate through v_t. Due to the dynamic nature of this addressing mechanism, it can be used for both read and write operations. If needed, the model will automatically learn to disable LRU while reading from the memory.
The address vector defined in Equation (11) is a continuous vector. This makes the addressing operation differentiable and we refer to such a D-NTM as continuous D-NTM.
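The dynamic LRU mechanism of Eqs. (10)-(11) is a small stateful loop around the content-based logits. The snippet below is our own minimal illustration: the gamma values would normally come from the sigmoid MLP on h_t, and v_t is treated as a constant with respect to the gradient.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def lru_addressing(logits_seq, gamma_seq):
    """Apply dynamic LRU addressing to a sequence of addressing logits.

    logits_seq: list of (N,) arrays z_t from content-based addressing
    gamma_seq:  list of scalars gamma_t in [0, 1] produced by the controller
    Returns the list of address weight vectors w_t.
    """
    v_prev = np.zeros_like(logits_seq[0])
    weights = []
    for z_t, gamma_t in zip(logits_seq, gamma_seq):
        w_t = softmax(z_t - gamma_t * v_prev)   # Eq. (11): discount recently used cells
        v_prev = 0.1 * v_prev + 0.9 * z_t       # exponential moving average of the logits
        weights.append(w_t)
    return weights

rng = np.random.default_rng(0)
ws = lru_addressing([rng.standard_normal(8) for _ in range(3)], [0.0, 0.5, 1.0])
print([round(float(w.sum()), 6) for w in ws])   # [1.0, 1.0, 1.0]
```

When gamma_t is close to zero the mechanism reduces to pure content-based addressing, which is how the model can learn to disable LRU for reads.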
# 3.2 Discrete Addressing
By definition in Eq. (11), every element in the address vector w_t is positive and the elements sum up to one. In other words, we can treat this vector as the probabilities of a categorical distribution C(w_t) with dim(w_t) choices:
p[j] = w_t[j],
1607.00036 | 15 |
where w_t[j] is the j-th element of w_t. We can readily sample from this categorical distribution and form a one-hot vector w̃_t such that
w̃_t[k] = I(k = j),
where j ~ C(w_t) and I is an indicator function. If we use w̃_t instead of w_t, then we will read and write from only one memory cell at a time. This makes the addressing operation non-differentiable, and we refer to such a D-NTM as the discrete D-NTM. In the discrete D-NTM we sample the one-hot vector during training. Once training is over, we switch to a deterministic strategy: we simply choose the element of w_t with the largest value to be the index of the target memory cell, such that
w̃_t[k] = I(k = argmax(w_t)).
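A minimal sketch of discrete addressing, with stochastic sampling at training time and the argmax rule at test time (our own helper; it assumes w_t is already a valid probability vector):

```python
import numpy as np

def discrete_address(w_t, training=True, rng=None):
    """Turn continuous address weights w_t into a one-hot address vector."""
    rng = rng or np.random.default_rng()
    n = w_t.shape[0]
    if training:
        j = rng.choice(n, p=w_t)        # sample j ~ C(w_t)
    else:
        j = int(np.argmax(w_t))         # deterministic choice once training is over
    one_hot = np.zeros(n)
    one_hot[j] = 1.0
    return one_hot, j

w_t = np.array([0.1, 0.7, 0.2])
print(discrete_address(w_t, training=False)[1])   # 1
```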
# 3.3 Multi-step Addressing
At each time-step, the controller may require more than one step for accessing the memory. The original NTM addresses this by implementing multiple sets of read, erase and write heads. In this paper, we explore the option of allowing each head to operate more than once at each timestep, similar to the multi-hop mechanism of the end-to-end memory network (Sukhbaatar et al., 2015).
1607.00036 | 16 | # 4 Training D-NTM
Once the proposed D-NTM is executed, it returns the output distribution p(y^(n) | x_1^(n), ..., x_T^(n); θ) for the n-th example, parameterized with θ. We define our cost function as the negative log-likelihood:
C(θ) = -(1/N) Σ_{n=1}^{N} log p(y^(n) | x_1^(n), ..., x_T^(n); θ),    (12)
where θ is the set of all the parameters of the model.
The continuous D-NTM, just like the original NTM, is fully end-to-end differentiable, and hence we can compute the gradient of this cost function by using backpropagation and learn the parameters of the model with a gradient-based optimization algorithm, such as stochastic gradient descent, to train it end-to-end. However, in the discrete D-NTM we use a sampling-based strategy for all the heads during training. This clearly makes the use of backpropagation infeasible to compute the gradient, as the sampling procedure is not differentiable.
# 4.1 Training discrete D-NTM
To train the discrete D-NTM, we use REINFORCE (Williams, 1992) together with the three variance reduction techniques suggested in (Mnih and Gregor, 2014): a global baseline, an input-dependent baseline and variance normalization.
1607.00036 | 17 | Let us define R(x) = log p(y | x_1, . . . , x_T; θ) as a reward. We first center and re-scale the reward by
R̃(x) = (R(x) - b) / sqrt(σ² + ε),
where b and σ are the running average and standard deviation of R. We can further center it for each input x separately, i.e.,
R̄(x) = R̃(x) - b(x),
where b(x) is computed by a baseline network which takes x as input and predicts its estimated reward. The baseline network is trained to minimize the Huber loss (Huber, 1964) between the true reward R̃(x) and the predicted reward b(x). This is also called an input-based baseline (IBB), which was introduced in (Mnih and Gregor, 2014).
We use the Huber loss to learn the baseline b(x), which is defined by
H_δ(z) = z² for |z| ≤ δ, and δ(2|z| - δ) otherwise,
1607.00036 | 18 |
due to its robustness, where z would be R̄(x) in this case. As a further measure to reduce the variance, we regularize the negative entropy of all those categorical distributions to facilitate better exploration during training (Xu et al., 2015).
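As a sketch, the reward centering and the Huber objective for the input-dependent baseline might look as follows in NumPy. The running statistics, the baseline prediction and delta = 1 are our own simplifications; in practice b(x) comes from a separate baseline network.

```python
import numpy as np

def huber(z, delta=1.0):
    """Huber loss H_delta(z): quadratic near zero, linear in the tails."""
    return np.where(np.abs(z) <= delta, z ** 2, delta * (2.0 * np.abs(z) - delta))

def centered_reward(R, running_mean, running_var, baseline_pred, eps=1e-8):
    """R_tilde = (R - b) / sqrt(var + eps);  R_bar = R_tilde - b(x)."""
    R_tilde = (R - running_mean) / np.sqrt(running_var + eps)
    R_bar = R_tilde - baseline_pred
    baseline_loss = huber(R_tilde - baseline_pred)   # trains b(x) to predict R_tilde
    return R_bar, baseline_loss

R_bar, loss = centered_reward(R=-2.3, running_mean=-3.0, running_var=1.5,
                              baseline_pred=0.4)
print(round(float(R_bar), 3), round(float(loss), 3))
```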
Then, the cost function for each training example is approximated as in Equation (13). In this equation, we write the terms related to computing the REINFORCE gradients: the likelihood-ratio terms for both the read and the write heads, and the entropy regularization on the action space.
C^n(θ) = -log p(y | x_{1:T}, w_{1:J}^r, w_{1:J}^w) - Σ_{j=1}^{J} R̄(x) (log p(w_j^r | x_{1:T}) + log p(w_j^w | x_{1:T})) - λ_H Σ_{j=1}^{J} (H(w_j^r | x_{1:T}) + H(w_j^w | x_{1:T})),    (13)
where J is the number of addressing steps, λ_H is the entropy regularization coefficient, and H denotes the entropy.
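Under the usual surrogate-loss view of REINFORCE, Eq. (13) can be sketched for a single example as below. This is our own simplified NumPy illustration: the address distributions and sampled indices would come from the controller, and gradients would be taken by an autodiff framework rather than computed here.

```python
import numpy as np

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps))

def reinforce_cost(nll, R_bar, read_dists, write_dists, read_idx, write_idx,
                   lambda_H=0.01):
    """Per-example surrogate cost mirroring Eq. (13).

    nll:         -log p(y | x, sampled addresses)
    R_bar:       centered reward for this example
    read_dists:  list of (N,) read address distributions, one per addressing step j
    read_idx:    list of sampled read cell indices (write_* analogous)
    """
    cost = nll
    for p_r, p_w, j_r, j_w in zip(read_dists, write_dists, read_idx, write_idx):
        # likelihood-ratio (REINFORCE) terms for the sampled addresses
        cost -= R_bar * (np.log(p_r[j_r]) + np.log(p_w[j_w]))
        # entropy regularization encourages exploration over memory cells
        cost -= lambda_H * (entropy(p_r) + entropy(p_w))
    return cost

p = np.array([0.2, 0.5, 0.3])
print(round(float(reinforce_cost(1.7, 0.4, [p], [p], [1], [2])), 4))
```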
# 4.2 Curriculum Learning for the Discrete Attention
1607.00036 | 19 |
Training the discrete attention with a feedforward controller and REINFORCE is challenging. We propose to use a curriculum strategy for training with the discrete attention in order to tackle this problem. For each minibatch, the controller stochastically decides to use either the discrete or the continuous weights based on the random variable π_n with probability p_n, where n stands for the number of k-minibatch updates, such that we only update p_n every k minibatch updates. π_n is a Bernoulli random variable sampled with probability p_n, π_n ~ Bernoulli(p_n). The model will use either the discrete or the continuous attention based on π_n. We start the training procedure with p_0 = 1, and during training p_n is annealed to 0 by setting p_n = p_0 …
We can rewrite the weights w_t as in Equation (14), where they are expressed as the combination of the continuous attention weights w̄_t and the discrete attention weights w̃_t, with π_n being a binary variable that chooses which one to use:
w_t = π_n w̄_t + (1 - π_n) w̃_t.    (14)
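A small sketch of this curriculum step, with our own helper name; the annealing of p_n itself is left out since the exact schedule is not fully recoverable from this text.

```python
import numpy as np

def curriculum_weights(w_bar, p_n, rng=None):
    """Mix continuous and discrete attention per Eq. (14).

    w_bar: (N,) continuous attention weights
    p_n:   probability of using the continuous weights for this minibatch
    """
    rng = rng or np.random.default_rng()
    pi_n = rng.random() < p_n                  # pi_n ~ Bernoulli(p_n)
    if pi_n:
        return w_bar                           # continuous branch
    one_hot = np.zeros_like(w_bar)
    one_hot[rng.choice(w_bar.shape[0], p=w_bar)] = 1.0
    return one_hot                             # discrete branch

w_bar = np.array([0.1, 0.6, 0.3])
print(curriculum_weights(w_bar, p_n=1.0))                                # continuous
print(curriculum_weights(w_bar, p_n=0.0, rng=np.random.default_rng(0)))  # one-hot
```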
1607.00036 | 20 |
By using this curriculum learning strategy, at the beginning of training the model learns to use the memory mainly with the continuous attention. As we anneal p_n, the model relies more and more on the discrete attention.
# 4.3 Regularizing D-NTM
If the controller of D-NTM is a recurrent neural network, we find it to be important to regularize the training of the D-NTM so as to avoid suboptimal solutions in which the D-NTM ignores the memory and works as a simple recurrent neural network.
Read-Write Consistency Regularizer One such suboptimal solution we have observed in our preliminary experiments with the proposed D-NTM is that the D-NTM uses the address part A of the memory matrix simply as an additional weight matrix, rather than as a means of accessing the content part C. We found that this pathological case can be effectively avoided by encouraging the read head to point to a memory cell which has also been pointed to by the write head. This can be implemented as the following regularization term:
R_rw(w^r, w^w) = Σ_{t'=1}^{T} ||1 - ((1/t') Σ_{t=1}^{t'} w_t^w)^T w_{t'}^r||_2^2.    (15)
1607.00036 | 21 | T t y 1 WwW T Rrw(w",w") = SOIL = (Dwi wi? (15) © t= t=1
In the equation above, w^w_t denotes the write weights and w^r_t the read weights at timestep t.
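A small numerical sketch of this regularizer, under our reading of Eq. (15): the running average of the write weights up to each step is compared with the read weights of that step. The coefficient `lam` and the exact normalization are assumptions on our part.

```python
import numpy as np

def read_write_consistency(w_write, w_read, lam=1.0):
    """Read-write consistency penalty for one episode.

    w_write, w_read: arrays of shape (T, n_cells) holding the write and read
    attention weights of every timestep.
    """
    T = w_write.shape[0]
    penalty = 0.0
    for t in range(1, T + 1):
        mean_write = w_write[:t].mean(axis=0)   # (1/t) * sum of w^w up to step t
        overlap = mean_write @ w_read[t - 1]    # dot product with w^r at step t
        penalty += (1.0 - overlap) ** 2
    return lam * penalty

# toy check with random attention weights that sum to one per step
T, n_cells = 6, 10
w_w = np.random.dirichlet(np.ones(n_cells), size=T)
w_r = np.random.dirichlet(np.ones(n_cells), size=T)
print(read_write_consistency(w_w, w_r))
```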
Next Input Prediction as Regularization Temporal structure is a strong signal that should be exploited by the controller based on a recurrent neural network. We exploit this structure by letting the controller predict the input in the future. We maximize the predictability of the next input by the controller during training. This is equivalent to minimizing the following regularizer:
R_{pred}(W) = - \sum_{t=0}^{T} \log p(x_{t+1} \mid x_t, w^r_t, w^w_t, e_t, M_t; \theta)
where x_t is the current input and x_{t+1} is the input at the next timestep. We find this regularizer to be effective in our preliminary experiments and use it for the bAbI tasks.
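In training-loop terms, this amounts to adding a next-input prediction head to the controller and an auxiliary negative log-likelihood term to the loss. The sketch below shows only the bookkeeping; the weighting coefficient `alpha` and the helper names are ours.

```python
def next_input_prediction_penalty(log_probs):
    """log_probs[t] is assumed to hold log p(x_{t+1} | x_t, w^r_t, w^w_t, e_t, M_t; theta),
    produced by an extra prediction head on the controller."""
    return -sum(log_probs)

# hypothetical use inside a training step (alpha is an assumed weighting coefficient):
# total_loss = task_loss + alpha * next_input_prediction_penalty(log_probs)
```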
# 5 Related Work
A recurrent neural network (RNN), which is used as a controller in the proposed D-NTM, has an implicit memory in the form of recurring hidden states. Even with this implicit memory, a vanilla RNN is however known to have difficulties in storing information for long time-spans (Bengio et al., 1994; Hochreiter, 1991). Long short-term memory (LSTM, (Hochreiter and Schmidhuber, 1997)) and gated recurrent units (GRU, (Cho et al., 2014)) have been found to address this issue. However, all these models based solely on RNNs have been found to be limited when they are used to solve, e.g., algorithmic tasks and episodic question-answering.
In addition to the finite random access memory of the neural Turing machine, based on which the D-NTM is designed, other data structures have been proposed as external memory for neural networks. In (Sun et al., 1997; Grefenstette et al., 2015; Joulin and Mikolov, 2015), a continuous, differentiable stack was proposed. In (Zaremba et al.,
2015; Zaremba and Sutskever, 2015), grid and tape storage are used. These approaches differ from the NTM in that their memory is unbounded and can grow indefinitely. On the other hand, they are often not randomly accessible. Zhang et al. (2015) proposed a variation of the NTM that has a structured memory, and they have shown experiments on copy and associative recall tasks with this model.
In parallel to our work, Yang (2016) and Graves et al. (2016) proposed new memory access mechanisms to improve NTM-type models; Graves et al. (2016) reported superior results on a diverse set of algorithmic learning tasks.
Another related family of models is the attention-based neural networks. Neural networks with continuous or discrete attention over an input have shown promising results on a variety of challenging tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chorowski et al., 2015), machine reading comprehension (Hermann et al., 2015) and image caption generation (Xu et al., 2015). The latter two, the memory network and attention-based networks, are however clearly distinguishable from the D-NTM by the fact that they do not modify the content of the memory. | 1607.00036#24 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | We extend neural Turing machine (NTM) model into a dynamic neural Turing
# 6 Experiments on Episodic Question-Answering
In this section, we evaluate the proposed D-NTM on the synthetic episodic question-answering task called Facebook bAbI (Weston et al., 2015a). We use the version of the dataset that contains 10k training examples per sub-task provided by Facebook.[1] For each episode, the D-NTM reads a sequence of factual sentences followed by a question, all of which are given as natural language sentences. The D-NTM is expected to store and retrieve relevant information in the memory in order to answer the question based on the presented facts.
# 6.1 Model and Training Details
We use the same hyperparameters for all the tasks for a given model. We use a recurrent neural network with GRU units to encode a variable-length fact into a fixed-size vector representation. This allows the D-NTM to exploit the word ordering in each fact, unlike when facts are encoded as bag-of-words vectors. We experiment with both a recurrent and a feedforward neural network as the controller that generates the read and write weights.

[1] https://research.facebook.com/researchers/1543934539189348
The controller has 180 units. We train our feedforward controller using the noisy tanh activation function (Gulcehre et al., 2016), since we were experiencing training difficulties with the sigmoid and tanh activation functions. We use both single-step and three-step addressing with our GRU controller. The memory contains 120 memory cells. Each memory cell consists of a 16-dimensional address part and a 28-dimensional content part.
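For concreteness, the memory shape described above can be sketched as follows; only the sizes come from the text, while the initialization scheme is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes taken from the text; the 0.1-scaled Gaussian initialization is our assumption.
n_cells, addr_dim, content_dim, controller_units = 120, 16, 28, 180

A = 0.1 * rng.standard_normal((n_cells, addr_dim))   # learnable address part of each cell
C = np.zeros((n_cells, content_dim))                 # content part, written/erased during an episode
M = np.concatenate([A, C], axis=1)                   # memory matrix: 120 cells of 16 + 28 = 44 dims
```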
We set aside a random 10% of the training examples as a validation set for each sub-task and use it for early-stopping and hyperparameter search. We train one D-NTM for each sub-task, using Adam (Kingma and Ba, 2014) with its learning rate set to 0.003 and 0.007 respectively for GRU and feedforward controller. The size of each minibatch is 160, and each minibatch is constructed uniform-randomly from the training set.
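Collected as a single configuration sketch (the dictionary layout is ours; the values are the ones reported above):

```python
# The values are the ones reported above; the dictionary layout itself is ours.
babi_training_config = {
    "optimizer": "adam",
    "learning_rate": {"gru_controller": 0.003, "feedforward_controller": 0.007},
    "minibatch_size": 160,
    "validation_fraction": 0.10,   # random held-out split per sub-task, used for early stopping
}
```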
# 6.2 Goals
The goal of this experiment is three-fold. First, we present for the first time the performance of a memory-based network that can both read and write dynamically on the Facebook bAbI tasks.[2] We aim to understand whether a model that has to learn to write an incoming fact to the memory, rather than storing it as it is, is able to work well, and to do so, we compare both the original NTM and the proposed D-NTM against an LSTM-RNN.
Second, we investigate the effect of having to learn how to write. The fact that the NTM needs to learn to write likely has an adverse effect on the overall performance when compared to, for instance, end-to-end memory networks (MemN2N, (Sukhbaatar et al., 2015)) and the dynamic memory network (DMN+, (Xiong et al., 2016)), both of which simply store the incoming facts as they are. We quantify this effect in this experiment. Lastly, we show the effect of the proposed learnable addressing scheme.
We further explore the effect of using a feedforward controller instead of the GRU controller. In addition to the explicit memory, the GRU controller can use its own internal hidden state as the memory. On the other hand, the feedforward controller must solely rely on the explicit memory, as it is the only memory available.
# 6.3 Results and Analysis
In Table 1, we first observe that the NTMs are indeed capable of solving this type of episodic question-answering better than the vanilla LSTM-RNN. Although the availability of explicit memory in the NTM has already suggested this result, we note that this is the first time neural Turing machines have been used in this specific task.
All the variants of NTM with the GRU controller outperform the vanilla LSTM-RNN. However, not all of them perform equally well. First, it is clear that the proposed dynamic NTM (D-NTM) using the GRU controller outperforms the original NTM with the GRU controller (NTM, CBA only NTM vs. continuous D-NTM, Discrete D-NTM). As discussed earlier, the learnable addressing scheme of the D-NTM allows the controller to access the memory slots by location in a potentially nonlinear way. We expect
[2] Similar experiments were done in the recently published (Graves et al., 2016), but D-NTM results for the bAbI tasks were already available on arXiv by that time.
Table 1: Test error rates (%) on the 20 Facebook bAbI tasks (10k training examples per task); the last row gives the average error over all tasks.

| Task | LSTM | MemN2N | DMN+ | 1-step LBA* NTM | 1-step CBA NTM | 1-step Soft D-NTM | 1-step Discrete D-NTM | 3-step LBA* NTM | 3-step CBA NTM | 3-step Soft D-NTM | 3-step Discrete D-NTM |
|------|------|--------|------|-----------------|----------------|-------------------|-----------------------|-----------------|----------------|-------------------|-----------------------|
| 1 | 0.00 | 0.00 | 0.00 | 16.30 | 16.88 | 5.41 | 6.66 | 0.00 | 0.00 | 0.00 | 0.00 |
| 2 | 81.90 | 0.30 | 0.30 | 57.08 | 55.70 | 58.54 | 56.04 | 61.67 | 59.38 | 46.66 | 62.29 |
| 3 | 83.10 | 2.10 | 1.10 | 74.16 | 55.00 | 74.58 | 72.08 | 83.54 | 65.21 | 47.08 | 41.45 |
| 4 | 0.20 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 5 | 1.20 | 0.80 | 0.50 | 1.46 | 20.41 | 1.66 | 1.04 | 0.83 | 1.46 | 1.25 | 1.45 |
| 6 | 51.80 | 0.10 | 0.00 | 23.33 | 21.04 | 40.20 | 44.79 | 48.13 | 54.80 | 20.62 | 11.04 |
| 7 | 24.90 | 2.00 | 2.40 | 21.67 | 21.67 | 19.16 | 19.58 | 7.92 | 37.70 | 7.29 | 5.62 |
| 8 | 34.10 | 0.90 | 0.00 | 25.76 | 21.05 | 12.58 | 18.46 | 25.38 | 8.82 | 11.02 | 0.74 |
| 9 | 20.20 | 0.30 | 0.00 | 24.79 | 24.17 | 36.66 | 34.37 | 37.80 | 0.00 | 39.37 | 32.50 |
| 10 | 30.10 | 0.00 | 0.00 | 41.46 | 33.13 | 52.29 | 50.83 | 56.25 | 23.75 | 20.00 | 20.83 |
| 11 | 10.30 | 0.10 | 0.00 | 18.96 | 31.88 | 31.45 | 4.16 | 3.96 | 0.28 | 30.62 | 16.87 |
| 12 | 23.40 | 0.00 | 0.00 | 25.83 | 30.00 | 7.70 | 6.66 | 28.75 | 23.75 | 5.41 | 4.58 |
| 13 | 6.10 | 0.00 | 0.00 | 6.67 | 5.63 | 5.62 | 2.29 | 5.83 | 83.13 | 7.91 | 5.00 |
| 14 | 81.00 | 0.10 | 0.20 | 58.54 | 59.17 | 60.00 | 63.75 | 61.88 | 57.71 | 58.12 | 60.20 |
| 15 | 78.70 | 0.00 | 0.00 | 36.46 | 42.30 | 36.87 | 39.27 | 35.62 | 21.88 | 36.04 | 40.26 |
| 16 | 51.90 | 51.80 | 45.30 | 71.15 | 71.15 | 49.16 | 51.35 | 46.15 | 50.00 | 46.04 | 45.41 |
| 17 | 50.10 | 18.60 | 4.20 | 43.75 | 43.75 | 17.91 | 16.04 | 43.75 | 56.25 | 21.25 | 9.16 |
| 18 | 6.80 | 5.30 | 2.10 | 3.96 | 47.50 | 3.95 | 3.54 | 47.50 | 47.50 | 6.87 | 1.66 |
| 19 | 90.30 | 2.30 | 0.00 | 75.89 | 71.51 | 73.74 | 64.63 | 61.56 | 63.65 | 75.88 | 76.66 |
| 20 | 2.10 | 0.00 | 0.00 | 1.25 | 0.00 | 2.70 | 3.12 | 0.40 | 0.00 | 3.33 | 0.00 |
| Avg.Err. | 36.41 | 4.24 | 2.81 | 31.42 | 33.60 | 29.51 | 27.93 | 32.85 | 32.76 | 24.24 | 21.79 |