doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, len 31) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1606.04460 | 23 | Figure 2: High-resolution screenshots of the Labyrinth environments. (a) Forage and Avoid showing the apples (positive rewards) and lemons (negative rewards). (b) Double T-maze showing cues at the turning points. (c) Top view of a Double T-maze configuration. The cues indicate the reward is located at the top left.
state was discarded. The k-nearest-neighbour lookups used k = 50. The discount rate was set to γ = 0.99. Exploration is achieved by using an ε-greedy policy with ε = 0.005. As a baseline, we used A3C [22]. Labyrinth levels have deterministic transitions and rewards, but the initial location and facing direction are randomised, and the environment is much richer, being 3-dimensional. For this reason, unlike Atari, experiments on Labyrinth encounter very few exact matches in the buffers of Q^EC-values; less than 0.1% in all three levels. (A minimal sketch of this buffer lookup follows this record.) | 1606.04460#23 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
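The record above describes the episodic controller's value estimation: a per-action buffer of Q^EC values queried by k-nearest-neighbour lookup (k = 50), with exact matches returned directly and an ε-greedy policy (ε = 0.005) on top. The sketch below is a minimal illustration under those assumptions; the class name, the Euclidean distance metric, and the method names are ours, not the authors' code.

```python
import numpy as np

class QECBuffer:
    """Hypothetical per-action buffer of (state embedding, Q^EC value) pairs."""

    def __init__(self, k=50):
        self.k = k          # number of nearest neighbours used in the lookup
        self.keys = []      # state embeddings (e.g. random projections or VAE features)
        self.values = []    # highest discounted return observed for this state-action

    def add(self, key, value):
        self.keys.append(np.asarray(key, dtype=np.float32))
        self.values.append(float(value))

    def estimate(self, key):
        """Return the stored value on an exact match, else the mean over the k nearest keys."""
        keys = np.stack(self.keys)
        dists = np.linalg.norm(keys - np.asarray(key, dtype=np.float32), axis=1)
        nearest = int(np.argmin(dists))
        if dists[nearest] == 0.0:             # exact match (rare in Labyrinth, common in Atari)
            return self.values[nearest]
        idx = np.argsort(dists)[: self.k]     # k-nearest-neighbour average otherwise
        return float(np.mean([self.values[i] for i in idx]))
```

Action selection would then be ε-greedy over per-action buffers: with probability ε = 0.005 pick a random action, otherwise take the argmax of `estimate` across actions.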
1606.04474 | 23 | In all these examples we can see that the LSTM optimizer learns much more quickly than the baseline optimizers, with significant boosts in performance for the CIFAR-5 and especially CIFAR-2 datasets. We also see that the optimizer trained only on a disjoint subset of the data is hardly affected by this difference and transfers well to the additional dataset.
# 3.4 Neural Art
The recent work on artistic style transfer using convolutional networks, or Neural Art [Gatys et al., 2015], gives a natural testbed for our method, since each content and style image pair gives rise to a different optimization problem. Each Neural Art problem starts from a content image, c, and a style image, s, and is given by
f(θ) = α·L_content(c, θ) + β·L_style(s, θ) + γ·L_reg(θ)
The minimizer of f is the styled image. The first two terms try to match the content and style of the styled image to that of their first argument, and the third term is a regularizer that encourages smoothness in the styled image. Details can be found in [Gatys et al., 2015]. (A sketch of assembling this objective appears after this record.)
| 1606.04474#23 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
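The Neural Art objective quoted above is a weighted sum of a content term, a style term, and a smoothness regularizer. The sketch below only illustrates how such an objective is assembled; the placeholder losses (simple pixel and channel-covariance distances, total variation) and all function names are our assumptions, not the convolutional-feature losses of Gatys et al. [2015].

```python
import numpy as np

def content_loss(theta, c):
    # Placeholder: Gatys et al. compare conv-net features of theta and the content image c.
    return float(np.mean((theta - c) ** 2))

def style_loss(theta, s):
    # Placeholder: Gatys et al. compare Gram matrices of conv-net features; here we use
    # channel covariances of the raw pixels purely for illustration.
    cov = lambda img: np.cov(img.reshape(-1, 3).T)
    return float(np.mean((cov(theta) - cov(s)) ** 2))

def smoothness_loss(theta):
    # A total-variation-style term; the text only says "encourages smoothness".
    return float(np.mean(np.diff(theta, axis=0) ** 2) + np.mean(np.diff(theta, axis=1) ** 2))

def neural_art_objective(theta, c, s, alpha=1.0, beta=1e3, gamma=1e-2):
    """f(theta) = alpha*L_content + beta*L_style + gamma*L_reg (weights illustrative)."""
    return alpha * content_loss(theta, c) + beta * style_loss(theta, s) + gamma * smoothness_loss(theta)

# Example on random 64x64 RGB arrays (the training resolution mentioned later in this dataset).
theta = np.random.rand(64, 64, 3); c = np.random.rand(64, 64, 3); s = np.random.rand(64, 64, 3)
print(neural_art_objective(theta, c, s))
```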
1606.04199 | 24 | the concatenated vector e_t to a vector with 1/4 dimension size, denoted by the (fully connected) block "fc" in Fig. 2. Decoder: The decoder follows Eq. 5 and Eq. 6 with fixed direction term −1. At the first layer, we use the following x_t:
x_t = [c_t, y_{t−1}]   (10)
y_{t−1} is the target word embedding at the previous time step and y_0 is zero. There is a single column of n_d stacked LSTM layers. We also use the F-F connections like those in the encoder, and all layers are in the forward direction. Note that at the last LSTM layer, we only use h_t to make the prediction with a softmax layer. (A small sketch of this input construction follows this record.)
Although the network is deep, the training technique is straightforward. We will describe this in the next part.
# 3.2 Training technique
We take the parallel data as the only input without using any monolingual data for either word representation pre-training or language modeling. Because of the deep bi-directional structure, we do not need to reverse the sequence order as Sutskever et al. (2014) did. | 1606.04199#24 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
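Eq. (10) in the record above simply concatenates the context vector c_t with the previous target-word embedding (a zero vector at t = 0) to form the first decoder layer's input. A minimal sketch follows; the function name is ours, and the dimensions (1280-d c_t for Deep-Att, 256-d embeddings) are taken from the model settings quoted later in this dataset.

```python
import numpy as np

def first_decoder_input(c_t, y_prev_embedding=None, embed_dim=256):
    """x_t = [c_t, y_{t-1}]; y_0 is a zero vector before any target word exists."""
    if y_prev_embedding is None:                 # t = 0 case
        y_prev_embedding = np.zeros(embed_dim, dtype=np.float32)
    return np.concatenate([c_t, y_prev_embedding])

# Example: a Deep-Att-sized context vector concatenated with a word embedding.
c_t = np.zeros(1280, dtype=np.float32)
x_0 = first_decoder_input(c_t)                   # shape (1536,)
```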
1606.04460 | 24 | Each level is progressively more difficult. The first level, called Forage, requires the agent to collect apples as quickly as possible by walking through them. Each apple provides a reward of 1. A simple policy of turning until an apple is seen and then moving towards it suffices here. Figure 1 shows that the episodic controller found an apple-seeking policy very quickly. Eventually A3C caught up, and finally outperforms the episodic controller with a more efficient strategy for picking up apples.
The second level, called Forage and Avoid, involves collecting apples, which provide a reward of 1, while avoiding lemons, which incur a reward of −1. The level is depicted in Figure 2(a). This level requires only a slightly more complicated policy than Forage (the same policy plus avoiding lemons), yet A3C took over 40 million steps to reach the same performance that episodic control attained in fewer than 3 million frames. | 1606.04460#24 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 24 |
[Figure 8 plots: "Neural art, training resolution" and "Double resolution" panels; legend: ADAM, RMSprop, SGD, NAG, LSTM; x-axis: Step.]
Figure 8: Optimization curves for Neural Art. Content images come from the test set, which was not used during the LSTM optimizer training. Note: the y-axis is in log scale and we zoom in on the interesting portion of this plot. Left: Applying the training style at the training resolution. Right: Applying the test style at double the training resolution. | 1606.04474#24 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 25 | The deep topology brings difficulties for the model training, especially when first-order methods such as stochastic gradient descent (SGD) (LeCun et al., 1998) are used. The parameters should be properly initialized and the converging process can be slow. We tried several optimization techniques such as AdaDelta (Zeiler, 2012), RMSProp (Tieleman and Hinton, 2012) and Adam (Kingma and Ba, 2015). We found that all of them were able to speed up the process a lot compared to simple SGD, while no significant performance difference was observed among them. In this work, we chose Adam for model training and do not present a detailed comparison with other optimization methods.
Dropout (Hinton et al., 2012) is also used to avoid over-fitting. It is utilized on the LSTM nodes h_t^k (see Eq. 5) with a ratio of p_d for both the encoder and decoder. (A dropout sketch follows this record.)
During the whole model training process, we keep all hyper-parameters fixed without any intermediate interruption. The hyper-parameters are selected according to the performance on the development set.
For such a deep and large network, it is not easy to determine the tuning strategy, and this will be considered in future work.
# 3.3 Generation | 1606.04199#25 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
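The dropout described in the record above is applied to the LSTM node outputs h_t^k with ratio p_d = 0.1. The sketch below uses the common inverted-dropout scaling; the paper does not state which scaling convention it uses, so that detail and the function name are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_output_dropout(h_t, p_d=0.1, train=True):
    """Drop units of an LSTM layer output h_t^k (shape [batch, hidden]) with ratio p_d."""
    if not train or p_d == 0.0:
        return h_t
    mask = (rng.random(h_t.shape) >= p_d).astype(h_t.dtype)
    return h_t * mask / (1.0 - p_d)      # inverted-dropout rescaling (assumption)

# Example with the paper's 512-cell layers and dropout ratio 0.1.
h_t = rng.standard_normal((4, 512)).astype(np.float32)
h_t = lstm_output_dropout(h_t, p_d=0.1)
```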
1606.04460 | 25 | The third level, called Double-T-Maze, requires the agent to walk in a maze with four ends (a map is shown in Figure 2(c)); one of the ends contains an apple, while the other three contain lemons. At each intersection the agent is presented with a colour cue that indicates the direction in which the apple is located (see Figure 2(b)): left, if red, or right, if green. If the agent walks through a lemon it incurs a reward of −1. However, if it walks through the apple, it receives a reward of 1, is teleported back to the starting position and the location of the apple is resampled. The duration of an episode is limited to 1 minute, in which it can reach the apple multiple times if it solves the task fast enough. Double-T-Maze is a difficult RL problem: rewards are sparse. In fact, A3C never achieved an expected reward above zero. Due to the sparse-reward nature of the Double T-Maze level, A3C did not update the policy strongly enough in the few instances in which a reward is encountered through random diffusion in the state space. In contrast, the episodic controller exhibited behaviour akin to one-shot learning on these instances, and was | 1606.04460#25 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 25 | Figure 9: Examples of images styled using the LSTM optimizer. Each triple consists of the content image (left), style (right) and image generated by the LSTM optimizer (center). Left: The result of applying the training style at the training resolution to a test image. Right: The result of applying a new style to a test image at double the resolution on which the optimizer was trained.
We train optimizers using only 1 style and 1800 content images taken from ImageNet [Deng et al., 2009]. We randomly select 100 content images for testing and 20 content images for validation of trained optimizers. We train the optimizer on 64x64 content images from ImageNet and one fixed style image. We then test how well it generalizes to a different style image and higher resolution (128x128). Each image was optimized for 128 steps and trained optimizers were unrolled for 32 steps. Figure 9 shows the result of styling two different images using the LSTM optimizer. The LSTM optimizer uses the input preprocessing described in Appendix A and no postprocessing. See Appendix C for additional images. | 1606.04474#25 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 26 | For such a deep and large network, it is not easy to determine the tuning strategy, and this will be considered in future work.
# 3.3 Generation
We use the common left-to-right beam-search method for sequence generation. At each time step t, the word y_t can be predicted by:
ŷ_t = argmax_y P(y | ŷ_{0:t−1}, x; θ)   (11)
where ŷ_t is the predicted target word and ŷ_{0:t−1} is the generated sequence from time step 0 to t − 1. We keep the n_b best candidates according to Eq. 11 at each time step, until the end-of-sentence mark is generated. The hypotheses are ranked by the total likelihood of the generated sequence, although normalized likelihood is used in some works (Jean et al., 2015). (A minimal beam-search sketch follows this record.)
# 4 Experiments
We evaluate our method mainly on the widely used WMT'14 English-to-French translation task. In order to validate our model on more difficult language pairs, we also provide results on the WMT'14 English-to-German translation task. Our models are implemented in the PADDLE (PArallel Distributed Deep LEarning) platform.
# 4.1 Data sets | 1606.04199#26 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
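The generation procedure quoted above (Eq. 11, keeping the n_b best candidates at each step, and ranking finished hypotheses by total likelihood) corresponds to a standard beam search. Below is a minimal sketch; the `log_probs` callback, its signature, and the fixed length cap are illustrative assumptions, not the paper's interface.

```python
import numpy as np

def beam_search(log_probs, bos, eos, n_b=5, max_len=50):
    """log_probs(prefix) -> vector of log P(y | prefix, x; theta) over the target vocabulary."""
    beams = [([bos], 0.0)]                       # (token sequence, total log-likelihood)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            lp = log_probs(seq)
            for y in np.argsort(lp)[-n_b:]:      # top-n_b extensions of this hypothesis
                candidates.append((seq + [int(y)], score + float(lp[y])))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:n_b]:      # keep the n_b best candidates overall
            (finished if seq[-1] == eos else beams).append((seq, score))
        if not beams:                            # every surviving hypothesis has ended
            break
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])[0]  # rank by total likelihood of the sequence
```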
1606.04460 | 26 | is encountered through random diffusion in the state space. In contrast, the episodic controller exhibited behaviour akin to one-shot learning on these instances, and was able to learn from the very few episodes that contain any rewards different from zero. This allowed the episodic controller, after observing between 20 and 30 million frames, to learn a policy with positive expected reward, while the parametric policies never learnt a policy with expected reward higher than zero. In this case, episodic control thrived in a sparse-reward environment as it rapidly latched onto an effective strategy. | 1606.04460#26 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 26 | Figure 8 compares the performance of the LSTM optimizer to standard optimization algorithms. The LSTM optimizer outperforms all standard optimizers if the resolution and style image are the same as the ones on which it was trained. Moreover, it continues to perform very well when both the resolution and style are changed at test time.
Finally, in Appendix B we qualitatively examine the behavior of the step directions generated by the learned optimizer.
# 4 Conclusion
We have shown how to cast the design of optimization algorithms as a learning problem, which enables us to train optimizers that are specialized to particular classes of functions. Our experiments have confirmed that learned neural optimizers compare favorably against state-of-the-art optimization methods used in deep learning. We witnessed a remarkable degree of transfer, with, for example, the LSTM optimizer trained on 12,288-parameter neural art tasks being able to generalize to tasks with 49,152 parameters, different styles, and different content images all at the same time. We observed similar impressive results when transferring to different architectures in the MNIST task.
The results on the CIFAR image labeling task show that the LSTM optimizers outperform hand-engineered optimizers when transferring to datasets drawn from the same data distribution.
# References | 1606.04474#26 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 27 | # 4.1 Data sets
For both tasks, we use the full WMT'14 parallel corpus as our training data. The detailed data sets are listed below:
• English-to-French: Europarl v7, Common Crawl, UN, News Commentary, Gigaword
• English-to-German: Europarl v7, Common Crawl, News Commentary
In total, the English-to-French corpus includes 36 million sentence pairs, and the English-to-German corpus includes 4.5 million sentence pairs. The news-test-2012 and news-test-2013 are concatenated as our development set, and the news-test-2014 is the test set. Our data partition is consistent with previous works on NMT (Luong et al., 2015; Jean et al., 2015) to ensure fair comparison. | 1606.04199#27 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 27 | # 4.3 Effect of number of nearest neighbours on final score
Finally, we compared the effect of varying k (the number of nearest neighbours) on both Labyrinth and Atari tasks using VAE features. In our experiments above, we noticed that on Atari re-visiting the same state was common, and that random projections typically performed the same or better than VAE features. One further interesting feature is that the learnt VAEs on Atari games do not yield a higher score as the number of neighbours increases, except on one game, Q*bert, where VAEs perform reasonably well (see Figure 3a). On Labyrinth levels, we observed that the VAEs outperformed random projections and the agent rarely encountered the same state more than once. Interestingly for this case, Figure 3b shows that increasing the number of nearest neighbours has a
(a) Atari games. (b) Labyrinth levels.
Figure 3: Effect of the number of neighbours, k, on final score (y-axis).
significant effect on the final performance of the agent in Labyrinth levels. This strongly suggests that VAE features provide the episodic control agent with generalisation in Labyrinth.
# 5 Discussion | 1606.04460#27 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 27 | The results on the CIFAR image labeling task show that the LSTM optimizers outperform hand-engineered optimizers when transferring to datasets drawn from the same data distribution.
# References
F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2012.
S. Bengio, Y. Bengio, and J. Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters, 2(4):26–30, 1995.
Y. Bengio, S. Bengio, and J. Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.
Y. Bengio, N. Boulanger-Lewandowski, and R. Pascanu. Advances in optimizing recurrent networks. International Conference on Acoustics, Speech and Signal Processing, pages 8624–8628. IEEE, 2013.
F. Bobolas. brain-neurons, 2009. URL https://www.flickr.com/photos/fbobolas/3822222947. Creative Commons Attribution-ShareAlike 2.0 Generic. | 1606.04474#27 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 28 | For the source language, we select the most frequent 200K words as the input vocabulary. For the target language we select the most frequent 80K French words and the most frequent 160K German words as the output vocabulary. The full vocabulary of the German corpus is larger (Jean et al., 2015), so we select more German words to build the target vocabulary. Out-of-vocabulary words are replaced with the unknown symbol (unk). For complete comparison to previous work on the English-to-French task, we also show the results with a smaller vocabulary of 30K input words and 30K output words on the sub train set with selected 12M parallel sequences (Schwenk, 2014; Sutskever et al., 2014; Cho et al., 2014). (A small vocabulary-truncation sketch follows this record.)
# 4.2 Model settings
We have two models as described above, named Deep-ED and Deep-Att. Both models have exactly the same configuration and layer size except the interface part P-I. | 1606.04199#28 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
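The vocabulary handling quoted above keeps only the most frequent words and maps everything else to an unknown symbol. A small self-contained sketch follows; the function names and the `<unk>` spelling are illustrative assumptions.

```python
from collections import Counter

def build_vocab(corpus_tokens, size):
    """Keep the `size` most frequent words (e.g. 200K source, 80K or 160K target)."""
    return {w for w, _ in Counter(corpus_tokens).most_common(size)}

def replace_oov(sentence_tokens, vocab, unk="<unk>"):
    """Out-of-vocabulary words are replaced with the unknown symbol."""
    return [w if w in vocab else unk for w in sentence_tokens]

# Example: 'dog' never appears in the toy corpus, so it becomes '<unk>'.
vocab = build_vocab("the cat sat on the mat the end".split(), size=5)
print(replace_oov("the dog sat".split(), vocab))
```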
1606.04460 | 28 | # 5 Discussion
This work tackles a critical deficiency in current reinforcement learning systems, namely their inability to learn in a one-shot fashion. We have presented a fast-learning system based on non-parametric memorisation of experience. We showed that it can learn good policies faster than parametric function approximators. However, it may be overtaken by them at later stages of training. It is our hope that these ideas will find application in practical systems, and result in data-efficient model-free methods. These results also provide support for the hypothesis that episodic control could be used by the brain, especially in the early stages of learning in a new environment. Note also that there are situations in which the episodic controller is always expected to outperform. For example, when hiding food for later consumption, some birds (e.g., scrub jays) are better off remembering their hiding spot exactly than searching according to a distribution of likely locations [4]. These considerations support models in which the brain uses multiple control systems and an arbitration mechanism to determine which to act according to at each point in time [5, 16]. | 1606.04460#28 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 28 | N. E. Cotter and P. R. Conwell. Fixed-weight networks can learn. In International Joint Conference on Neural Networks, pages 553–559, 1990.
C. Daniel, J. Taylor, and S. Nowozin. Learning step size controllers for robust neural network training. In Association for the Advancement of Artificial Intelligence, 2016.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
D. L. Donoho. Compressed sensing. Transactions on Information Theory, 52(4):1289–1306, 2006. J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization.
Journal of Machine Learning Research, 12:2121–2159, 2011.
L. A. Feldkamp and G. V. Puskorius. A signal processing framework based on dynamic neural networks with application to problems in adaptation, filtering, and classification. Proceedings of the IEEE, 86(11):2259–2277, 1998. | 1606.04474#28 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 29 | We use 256-dimensional word embeddings for both the source and target languages. All LSTM layers, including the 2×n_e layers in the encoder and the n_d layers in the decoder, have 512 memory cells. The output layer size is the same as the size of the target vocabulary. The dimension of c_t is 5120 and 1280 for Deep-ED and Deep-Att respectively. For each LSTM layer, the activation functions for gates, inputs and outputs are sigmoid, tanh, and tanh respectively.
Our network is narrow on word embeddings and LSTM layers. Note that in previous work (Sutskever et al., 2014; Bahdanau et al., 2015), 1000-dimensional word embeddings and 1000-dimensional LSTM layers are used. We also tried larger scale models but did not obtain further improvements.
# 4.3 Optimization
Note that each LSTM layer includes two parts as described in Eq. 3, feed-forward computation and recurrent computation. Since there are non-linear activations in the recurrent computation, a larger learning rate l_r = 5 × 10^−4 is used, while for the feed-forward computation a smaller learning rate l_f = 4 × 10^−5 is used. Word embeddings and the softmax layer also use this learning rate l_f. We refer | 1606.04199#29 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 29 | We have referred to this approach as model-free episodic control to distinguish it from model-based episodic planning. We conjecture that both such strategies may be used by the brain in addition to the better-known habitual and goal-directed systems associated with dorsolateral striatum and prefrontal cortex respectively [5]. The tentative picture to emerge from this work is one in which the amount of time and working memory resources available for decision making is a key determiner of which control strategies are available. When decisions must be made quickly, planning-based approaches are simply not an option. In such cases, the only choice is between the habitual model-free system and the episodic model-free system. When decisions are not so rushed, the planning-based approaches become available and the brain must then arbitrate between planning using semantic (neocortical) information or episodic (hippocampal) information. In both timing regimes, the key determiner of whether to use episodic information or not is how much uncertainty remains in the estimates provided by the slower-to-learn system. This prediction agrees with those of [5, 16] with respect to the statistical trade-offs between systems. It builds | 1606.04460#29 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 29 | L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv Report 1508.06576, 2015. A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. arXiv Report 1410.5401, 2014. S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In International
Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009. B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like
people. arXiv Report 1604.00289, 2016. | 1606.04474#29 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 30 | all the parameters not used for recurrent computation as the non-recurrent part of the model.
Because of the large model size, we use strong L2 regularization to constrain the parameter matrix v in the following way:
v ← v − l · (g + r · v)   (12)
Here r is the regularization strength, l is the corresponding learning rate, and g stands for the gradient of v. The two embedding layers are not regularized. All the other layers have the same r = 2. (A one-line sketch of this update follows this record.)
The parameters of the recurrent computation part are initialized to zero. All non-recurrent parts are randomly initialized with zero mean and a standard deviation of 0.07. A detailed guide for setting hyper-parameters can be found in (Bengio, 2012).
The dropout ratio p_d is 0.1. In each batch, there are 500–800 sequences in our work. The exact number depends on the sequence lengths and model size. We also find that a larger batch size results in better convergence, although the improvement is not large. However, the largest batch size is constrained by the GPU memory. We use 4–8 GPU machines (each has 4 K40 GPU cards) running for 10 days to train the full model with parallelization at the data batch level. It takes nearly 1.5 days for each pass. | 1606.04199#30 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
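Eq. (12) in the record above is a plain gradient step with an L2 (weight-decay) term added to the gradient. A one-line sketch follows; variable names are ours, and the choice of learning rate would be l_r for recurrent parameters and l_f for the non-recurrent parts, as described in the previous record.

```python
import numpy as np

def regularized_update(v, g, lr, r=2.0):
    """v <- v - l * (g + r * v): gradient step with L2 regularization of strength r."""
    return v - lr * (g + r * v)

# Example step on a 512x512 recurrent weight matrix with the larger rate l_r = 5e-4.
v = np.random.randn(512, 512).astype(np.float32)
g = np.random.randn(512, 512).astype(np.float32)
v = regularized_update(v, g, lr=5e-4)
```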
1606.04460 | 30 | provided by the slower-to-learn system. This prediction agrees with those of [5, 16] with respect to the statistical trade-offs between systems. It builds on their work by highlighting the potential impact of rushed decisions and insufficient working memory resources in accord with [29]. These ideas could be tested experimentally by manipulations of decision timing or working memory, perhaps by orthogonal tasks, and fast measurements of coherence between medial temporal lobe and output structures under different statistical conditions. | 1606.04460#30 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 30 | people. arXiv Report 1604.00289, 2016.
T. Maley. neuron, 2011. URL https://www.flickr.com/photos/taylortotz101/6280077898. Creative Commons Attribution 2.0 Generic.
J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pages 2408–2417, 2015.
G. L. Nemhauser and L. A. Wolsey. Integer and combinatorial optimization. John Wiley & Sons, 1988. Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet
Mathematics Doklady, volume 27, pages 372–376, 1983.
J. Nocedal and S. Wright. Numerical optimization. Springer Science & Business Media, 2006. M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP
algorithm. In International Conference on Neural Networks, pages 586–591, 1993.
T. P. Runarsson and M. T. Jonsson. Evolution and design of distributed learning rules. In IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks, pages 59–63. IEEE, 2000. | 1606.04474#30 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 31 | One thing we want to emphasize here is that our deep model is not sensitive to these settings. Small variations do not affect the final performance.
# 4.4 Results
We evaluate in the same way as previous NMT works (Sutskever et al., 2014; Luong et al., 2015; Jean et al., 2015). All reported BLEU scores are computed with the multi-bleu.perl1 script, which is also used in the above works. The results are for tokenized and case-sensitive evaluation.
4.4.1 Single models English-to-French: First we list our single model results on the English-to-French task in Tab. 1. In the first block we show the results with the full corpus. The previous best single NMT encoder-decoder model (Enc-Dec) with six layers achieves BLEU=31.5 (Luong et al., 2015). From Deep-ED,
1 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl | 1606.04199#31 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 31 |
# Acknowledgements
We are grateful to Dharshan Kumaran and Koray Kavukcuoglu for their detailed feedback on this manuscript. We are indebted to Marcus Wainwright and Max Cant for generating the images in Figure 2. We would also like to thank Peter Dayan, Shane Legg, Ian Osband, Joel Veness, Tim Lillicrap, Theophane Weber, Remi Munos, Alvin Chua, Yori Zwols and many others at Google DeepMind for fruitful discussions.
# References
[1] Per Andersen, Richard Morris, David Amaral, Tim Bliss, and John O'Keefe. The hippocampus book. Oxford University Press, 2006.
[2] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 06 2013.
[3] Malcolm W Brown and John P Aggleton. Recognition memory: what are the roles of the perirhinal cortex and hippocampus? Nature Reviews Neuroscience, 2(1):51–61, 2001.
[4] Nicola S Clayton and Anthony Dickinson. Episodic-like memory during cache recovery by scrub jays. Nature, 395(6699):272–274, 1998. | 1606.04460#31 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 31 | A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, 2016.
J. Schmidhuber. Evolutionary principles in self-referential learning; On learning how to learn: The meta-meta-... hook. PhD thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.
J. Schmidhuber. A neural network that embeds its own meta-levels. In International Conference on Neural Networks, pages 407–412. IEEE, 1993.
J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 28(1):105–130, 1997.
N. N. Schraudolph. Local gain adaptation in stochastic gradient descent. In International Conference on Artificial Neural Networks, volume 2, pages 569–574, 1999. | 1606.04474#31 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 32 | 1 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl
we obtain the BLEU score of 36.3, which outperforms the Enc-Dec model by 4.8 BLEU points. This result is even better than the ensemble result of eight Enc-Dec models, which is 35.6 (Luong et al., 2015). This shows that, in addition to the convolutional layers for computer vision, deep topologies can also work for LSTM layers. For Deep-Att, the performance is further improved to 37.7. We also list the previous state-of-the-art performance from a conventional SMT system (Durrani et al., 2014) with the BLEU of 37.0. This is the first time that a single NMT model trained in an end-to-end form beats the best conventional system on this task. | 1606.04199#32 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 32 | [4] Nicola S Clayton and Anthony Dickinson. Episodic-like memory during cache recovery by scrub jays. Nature, 395(6699):272–274, 1998.
[5] Nathaniel D Daw, Yael Niv, and Peter Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature neuroscience, 8(12):1704–1711, 2005.
[6] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1538–1546, 2015.
[7] David J Foster and Matthew A Wilson. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature, 440(7084):680–683, 2006.
[8] Oliver Hardt, Karim Nader, and Lynn Nadel. Decay happens: the role of active forgetting in memory. Trends in cognitive sciences, 17(3):111–120, 2013.
[9] John J Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 79(8):2554–2558, 1982. | 1606.04460#32 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 32 | R. S. Sutton. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In Association for the Advancement of Artificial Intelligence, pages 171–176, 1992.
S. Thrun and L. Pratt. Learning to learn. Springer Science & Business Media, 1998.
T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
P. Tseng. An incremental gradient (-projection) method with momentum term and adaptive stepsize rule. Journal on Optimization, 8(2):506–531, 1998.
D. H. Wolpert and W. G. Macready. No free lunch theorems for optimization. Transactions on Evolutionary Computation, 1(1):67–82, 1997.
A. S. Younger, P. R. Conwell, and N. E. Cotter. Fixed-weight on-line learning. Transactions on Neural Networks, 10(2):272–283, 1999.
A. S. Younger, S. Hochreiter, and P. R. Conwell. Meta-learning with backpropagation. In International Joint Conference on Neural Networks, 2001.
| 1606.04474#32 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 33 | We also show the results on the smaller data set with 12M sentence pairs and 30K vocabulary in the second block. The two attention models, RNNsearch (Bahdanau et al., 2015) and RNNsearch-LV (Jean et al., 2015), achieve BLEU scores of 28.5 and 32.7 respectively. Note that RNNsearch-LV uses a large output vocabulary of 500K words based on the standard attention model RNNsearch. We obtain BLEU=35.9, which outperforms its corresponding shallow model RNNsearch by 7.4 BLEU points. The SMT result from (Schwenk, 2014) is also listed and falls behind our model by 2.6 BLEU points.
Methods                    Data  Voc   BLEU
Enc-Dec (Luong,2015)       36M   80K   31.5
SMT (Durrani,2014)         36M   Full  37.0
Deep-ED (Ours)             36M   80K   36.3
Deep-Att (Ours)            36M   80K   37.7
RNNsearch (Bahdanau,2014)  12M   30K   28.5
RNNsearch-LV (Jean,2015)   12M   500K  32.7
SMT (Schwenk,2014)         12M   Full  33.3
Deep-Att (Ours)            12M   30K   35.9 | 1606.04199#33 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 33 | [10] William B Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary mathematics, 26(189-206):1, 1984.
[11] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
[12] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. | 1606.04460#33 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 33 |
[Figure 10 plot area: curves for LSTM, ADAM, and SGD]
Figure 10: Updates proposed by different optimizers at different optimization steps for two different coordinates.
# A Gradient preprocessing
One potential challenge in training optimizers is that different input coordinates (i.e. the gradients w.r.t. different optimizee parameters) can have very different magnitudes. This is indeed the case e.g. when the optimizee is a neural network and different parameters correspond to weights in different layers. This can make training an optimizer difficult, because neural networks naturally disregard small variations in input signals and concentrate on bigger input values.
To this aim we propose to preprocess the optimizer's inputs. One solution would be to give the optimizer (log(|∇|), sgn(∇)) as an input, where ∇ is the gradient in the current timestep. This has a problem that log(|∇|) diverges for ∇ → 0. Therefore, we use the following preprocessing formula (a short NumPy sketch of this mapping is given at the end of this entry):
∇k → (log(|∇|)/p, sgn(∇))   if |∇| ≥ e^(−p)
∇k → (−1, e^p ∇)            otherwise
where p > 0 is a parameter controlling how small gradients are disregarded (we use p = 10 in all our experiments). | 1606.04474#33 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
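To make the preprocessing above concrete, here is a minimal NumPy sketch of the two-channel mapping; the function name, the array layout, and the tiny constant guarding the logarithm are our own choices, while p = 10 follows the text.

```python
import numpy as np

def preprocess_gradient(grad, p=10.0):
    """Map each gradient coordinate to the two-channel representation above.

    Coordinates with |g| >= exp(-p) become (log(|g|)/p, sign(g));
    smaller coordinates become (-1, exp(p) * g).
    """
    grad = np.asarray(grad, dtype=np.float64)
    large = np.abs(grad) >= np.exp(-p)
    # The small constant only guards log(0) for entries masked out by `large`.
    first = np.where(large, np.log(np.abs(grad) + 1e-300) / p, -1.0)
    second = np.where(large, np.sign(grad), np.exp(p) * grad)
    return np.stack([first, second], axis=-1)

# Gradients spanning many orders of magnitude end up with comparable scales.
print(preprocess_gradient([1e-8, 0.5, -3.0]))
```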
1606.04199 | 34 | Table 1: English-to-French task: BLEU scores of single neural models. We also list the conventional SMT system for comparison.
Moreover, during the generation process, we obtained the best BLEU score with beam size = 3 (when the beam size is 2, there is only a 0.1 difference in BLEU score). This is different from other works listed in Tab. 1, where the beam size is 12 (Jean et al., 2015; Sutskever et al., 2014). We attribute this difference to the improved model performance, where the ground truth generally exists in the top hypothesis. Consequently, with the much
smaller beam size, the generation efficiency is significantly improved. | 1606.04199#34 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 34 | [15] Joel Z. Leibo, Julien Cornebise, Sergio Gomez, and Demis Hassabis. Approximate Hubel-Wiesel modules and the data structures of neural computation. arXiv:1512.08457 [cs.NE], 2015.
[16] M. Lengyel and P. Dayan. Hippocampal contributions to control: The third way. In NIPS, volume 20, pages 889–896, 2007.
[17] David JC MacKay. Information theory, inference and learning algorithms. Cambridge university press, 2003.
[18] D Marr. Simple memory: A theory for archicortex. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, pages 23–81, 1971.
[19] James L McClelland and Nigel H Goddard. Considerations arising from a complementary learning systems perspective on hippocampus and neocortex. Hippocampus, 6(6):654–665, 1996.
[20] James L McClelland, Bruce L McNaughton, and Randall C O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review, 102(3):419, 1995. | 1606.04460#34 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 34 | where p > 0 is a parameter controlling how small gradients are disregarded (we use p = 10 in all our experiments).
We noticed that just rescaling all inputs by an appropriate constant instead also works fine, but the proposed preprocessing seems to be more robust and gives slightly better results on some problems.
# B Visualizations
Visualizing optimizers is inherently difficult because their proposed updates are functions of the full optimization trajectory. In this section we try to peek into the decisions made by the LSTM optimizer, trained on the neural art task.
Histories of updates We select a single optimizee parameter (one color channel of one pixel in the styled image) and trace the updates proposed to this coordinate by the LSTM optimizer over a single trajectory of optimization. We also record the updates that would have been proposed by both SGD and ADAM if they followed the same trajectory of iterates. Figure 10 shows the trajectory of updates for two different optimizee parameters. From the plots it is clear that the trained optimizer makes bigger updates than SGD and ADAM. It is also visible that it uses some kind of momentum, but its updates are more noisy than those proposed by ADAM which may be interpreted as having a shorter time-scale momentum. | 1606.04474#34 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 35 | smaller beam size, the generation efficiency is significantly improved.
Next we list the effect of the novel F-F connections in our Deep-Att model of shallow topology in Tab. 2. When ne = 1 and nd = 1, the BLEU scores are 31.2 without F-F and 32.3 with F-F. Note that the model without F-F is exactly the standard attention model (Bahdanau et al., 2015). Since there is only a single layer, the use of F-F connections means that at the interface part we include ft into the representation (see Eq. 7). We find that F-F connections bring an improvement of 1.1 in BLEU. After we increase our model depth to ne = 2 and nd = 2, the improvement is enlarged to 1.4. When the model is trained with larger depth without F-F connections, we find that the parameter exploding problem (Bengio et al., 1994) happens so frequently that we could not finish training. This suggests that F-F connections provide a fast way for gradient propagation. | 1606.04199#35 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 35 | [21] Bruce L McNaughton and Richard GM Morris. Hippocampal synaptic enhancement and information storage within a distributed memory system. Trends in neurosciences, 10(10):408–415, 1987.
[22] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. CoRR, abs/1602.01783, 2016.
[23] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[24] RGM Morris, P Garrud, and JNP Rawlins. Place navigation impaired in rats with hippocampal lesions. Nature, 297:681, 1982.
[25] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010. | 1606.04460#35 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 35 | Proposed update as a function of current gradient Another way to visualize the optimizer behavior is to look at the proposed update gt for a single coordinate as a function of the current gradient evaluation ∇t. We follow the same procedure as in the previous experiment, and visualize the proposed updates for a few selected time steps.
These results are shown in Figures 11–13. In these plots, the x-axis is the current value of the gradient for the chosen coordinate, and the y-axis shows the update that each optimizer would propose should the corresponding gradient value be observed. The history of gradient observations is the same for all methods and follows the trajectory of the LSTM optimizer.
The shape of this function for the LSTM optimizer is often step-like, which is also the case for ADAM. Surprisingly the step is sometimes in the opposite direction as for ADAM, i.e. the bigger the gradient, the bigger the update.
# C Neural Art
Shown below are additional examples of images styled using the LSTM optimizer. Each triple consists of the content image (left), style (right) and image generated by the LSTM optimizer (center). | 1606.04474#35 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 36 | Models    F-F  ne  nd  BLEU
Deep-Att  No   1   1   31.2
Deep-Att  Yes  1   1   32.3
Deep-Att  No   2   2   33.3
Deep-Att  Yes  2   2   34.7
Table 2: The effect of F-F. We list the BLEU scores of Deep-Att with and without F-F. Because of the parameter exploding problem, we cannot list the model performance of larger depth without F-F. For ne = 1 and nd = 1, F-F connections only contribute to the representation at the interface part (see Eq. 7).
Removing F-F connections also reduces the corresponding model size. In order to figure out the effect of F-F when comparing models with the same parameter size, we increase the LSTM layer width of Deep-Att without F-F. In Tab. 3 we show that, after using a two times larger LSTM layer width of 1024, we can only obtain a BLEU score of 33.8, which is still worse than the corresponding Deep-Att with F-F. | 1606.04199#36 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 36 | [26] Kazu Nakazawa, Michael C Quirk, Raymond A Chitwood, Masahiko Watanabe, Mark F Yeckel, Linus D Sun, Akira Kato, Candice A Carr, Daniel Johnston, Matthew A Wilson, et al. Requirement for hippocampal CA3 NMDA receptors in associative memory recall. Science, 297(5579):211–218, 2002.
[27] Kenneth A Norman and Randall C O'Reilly. Modeling hippocampal and neocortical contributions to recognition memory: a complementary-learning-systems approach. Psychological review, 110(4):611, 2003.
[28] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pages 2845–2853, 2015.
[29] A Ross Otto, Samuel J Gershman, Arthur B Markman, and Nathaniel D Daw. The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive. Psychological science, page 0956797612463080, 2013. | 1606.04460#36 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 36 | # D Information sharing between coordinates
In previous sections we considered a coordinatewise architecture, which corresponds by analogy to a learned version of RMSprop or ADAM. Although diagonal methods are quite effective in practice, we can also consider learning more sophisticated optimizers that take the correlations between coordinates into account. To this end, we introduce a mechanism allowing different LSTMs to communicate with each other.
# Global averaging cells
The simplest solution is to designate a subset of the cells in each LSTM layer for communication. These cells operate like normal LSTM cells, but their outgoing activations are averaged at each step across all coordinates. These global averaging cells (GACs) are sufficient to allow the networks to implement L2 gradient clipping [Bengio et al., 2013] assuming that each LSTM can compute the square of the gradient. This architecture is denoted as an LSTM+GAC optimizer. (A small sketch of the averaging step is given at the end of this entry.)
# NTM-BFGS optimizer | 1606.04474#36 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
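As a rough illustration of the GAC mechanism described above, the sketch below averages a designated block of hidden channels across all coordinates at one step; which channels act as GACs, and the array shapes, are our own assumptions rather than details from the paper.

```python
import numpy as np

def apply_global_averaging(hidden, num_gac):
    """Share information across coordinates through designated GAC channels.

    `hidden` has shape (num_coordinates, hidden_size). The last `num_gac`
    channels play the role of global averaging cells: their outgoing
    activations are replaced by the mean over all coordinates, while the
    remaining channels stay strictly coordinatewise.
    """
    out = hidden.copy()
    out[:, -num_gac:] = hidden[:, -num_gac:].mean(axis=0, keepdims=True)
    return out

h = np.random.randn(5, 8)  # 5 optimizee coordinates, 8 hidden units each
print(apply_global_averaging(h, num_gac=2)[:, -2:])  # identical rows in the GAC block
```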
1606.04199 | 37 | We also notice that the interleaved bi-directional encoder starts to work when the encoder depth is larger than 1. The effect of the interleaved bi-directional encoder is shown in Tab. 4. For our largest model with ne = 9 and nd = 7, we compared the BLEU scores of the interleaved bi-directional encoder and the uni-directional encoder (where all LSTM layers work in forward direction). We find
Models    F-F  ne  nd  width  BLEU
Deep-Att  No   2   2   512    33.3
Deep-Att  No   2   2   1024   33.8
Deep-Att  Yes  2   2   512    34.7
Table 3: BLEU scores with different LSTM layer width in Deep-Att. After using a two times larger LSTM layer width of 1024, we can only obtain a BLEU score of 33.8. It is still behind the corresponding Deep-Att with F-F.
there is a gap of about 1.5 points between these two encoders for both Deep-Att and Deep-ED.
Models    Encoder  ne  nd  BLEU
Deep-Att  Bi       9   7   37.7
Deep-Att  Uni      9   7   36.2
Deep-ED   Bi       9   7   36.3
Deep-ED   Uni      9   7   34.9 | 1606.04199#37 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 37 | [30] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pages 1278–1286, 2014.
[31] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. CoRR, abs/1511.05952, 2015.
[32] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[33] Larry R Squire. Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans. Psychological review, 99(2):195, 1992.
[34] Larry R Squire. Memory systems of the brain: a brief history and current perspective. Neurobiology of learning and memory, 82(3):171–177, 2004. | 1606.04460#37 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 37 | # NTM-BFGS optimizer
We also consider augmenting the LSTM+GAC architecture with an external memory that is shared between coordinates. Such a memory, if appropriately designed, could allow the optimizer to learn algorithms similar to (low-memory) approximations to Newton's method, e.g. (L-)BFGS [see Nocedal and Wright, 2006]. The reason for this interpretation is that such methods can be seen as a set of independent processes working coordinatewise, but communicating through the inverse Hessian approximation stored in the memory. We designed a memory architecture that, in theory, allows the
Figure 11: The proposed update direction for a single coordinate over 32 steps.
| 1606.04474#37 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 38 | Table 4: The effect of the interleaved bi-directional encoder. We list the BLEU scores of our largest Deep-Att and Deep-ED models. The encoder term Bi denotes that the interleaved bi-directional encoder is used. Uni denotes a model where all LSTM layers work in forward direction.
Next we look into the effect of model depth. In Tab. 5, starting from ne = 1 and nd = 1 and gradually increasing the model depth, we significantly increase BLEU scores. With ne = 9 and nd = 7, the best score for Deep-Att is 37.7. We tried to increase the LSTM width based on this, but obtained little improvement. As we stated in Sec. 2, the complexity of the encoder and decoder, which is related to the model depth, is more important than the model size. We also tried a larger depth, but the results started to get worse. With our topology and training technique, ne = 9 and nd = 7 is the best depth we can achieve. | 1606.04199#38 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 38 | [35] Robert J Sutherland and Jerry W Rudy. Configural association theory: The role of the hippocampal formation in learning, memory, and amnesia. Psychobiology, 17(2):129–144, 1989.
[36] Robert J Sutherland, Ian Q Whishaw, and Bob Kolb. A behavioural analysis of spatial localization following electrolytic, kainate- or colchicine-induced damage to the hippocampal formation in the rat. Behavioural brain research, 7(2):133–153, 1983.
[37] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1998.
[38] Wendy L Suzuki and David G Amaral. Perirhinal and parahippocampal cortices of the macaque monkey: cortical afferents. Journal of comparative neurology, 350(4):497–533, 1994.
[39] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
[40] Alessandro Treves and Edmund T Rolls. Computational analysis of the role of the hippocampus in memory. Hippocampus, 4(3):374–391, 1994. | 1606.04460#38 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 38 | Figure 11: The proposed update direction for a single coordinate over 32 steps.
Figure 12: The proposed update direction for a single coordinate over 32 steps.
Figure 13: The proposed update direction for a single coordinate over 32 steps.
| 1606.04474#38 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 39 | Models    F-F  ne  nd  Col  BLEU
Deep-Att  Yes  1   1   2    32.3
Deep-Att  Yes  2   2   2    34.7
Deep-Att  Yes  5   3   2    36.0
Deep-Att  Yes  9   7   2    37.7
Deep-Att  Yes  9   7   1    36.6
Table 5: BLEU score of Deep-Att with different model depth. With ne = 1 and nd = 1, F-F connections only contribute to the representation at interface part where ft is included (see Eq. 7).
The last line in Tab. 5 shows the BLEU score of
36.6 of our deepest model, where only one encoding column (Col = 1) is used. We find a 1.1 BLEU point degradation with a single encoding column. Note that the uni-directional models in Tab. 4 with uni-direction still have two encoding columns. In order to find out whether this is caused by the decreased parameter size, we test a wider model with 1024 memory blocks for the LSTM layers. It is shown in Tab. 6 that there is a minor improvement of only 0.1. We attribute this to the complementary information provided by the double encoding column. | 1606.04199#39 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 39 | [40] Alessandro Treves and Edmund T Rolls. Computational analysis of the role of the hippocampus in memory. Hippocampus, 4(3):374–391, 1994.
[41] Endel Tulving, CA Hayman, and Carol A Macdonald. Long-lasting perceptual priming and semantic learning in amnesia: a case experiment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(4):595, 1991.
# A Variational autoencoders for representation learning
Variational autoencoders (VAE; [12, 30]) are latent-variable probabilistic models inspired by compression theory. A VAE (shown in Figure 4) is composed of two artificial neural networks: the encoder, which takes observations and maps them into messages; and a decoder, that receives messages and approximately recovers the observations. VAEs are designed to minimise the cost of transmitting observations from the encoder to the decoder through the communication channel. In order to minimise the transmission cost, a VAE must learn to capture the statistics of the distribution of observations [e.g. 17]. For our representation learning purposes, we use the encoder network as our feature mapping, φ. For several data sets, representations learned by a VAE encoder have been shown to capture the independent factors of variation in the underlying generative process of the data [11]. | 1606.04460#39 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
1606.04474 | 39 | Figure 13: The proposed update direction for a single coordinate over 32 steps.
Figure 14: Left: NTM-BFGS read operation. Right: NTM-BFGS write operation.
network to simulate (L-)BFGS, motivated by the approximate Newton method BFGS, named for Broyden, Fletcher, Goldfarb, and Shanno. We call this architecture an NTM-BFGS optimizer, because its use of external memory is similar to the Neural Turing Machine [Graves et al., 2014]. The pivotal differences between our construction and the NTM are (1) our memory allows only low-rank updates; (2) the controller (including read/write heads) operates coordinatewise.
In BFGS an explicit estimate of the full (inverse) Hessian is built up from the sequence of observed gradients. We can write a skeletonized version of the BFGS algorithm, using Mt to represent the inverse Hessian approximation at iteration t, as follows
gt = read(Mt, θt)
θt+1 = θt + gt
Mt+1 = write(Mt, θt, gt)
(A small numerical sketch of this read/update/write loop, using the classical BFGS choices for read and write, is given at the end of this entry.) | 1606.04474#39 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
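To make the skeleton concrete, here is a minimal sketch that instantiates read and write with their classical BFGS forms (read(M, θ) = −M∇f(θ), and the standard rank-two inverse-Hessian update for write, as discussed later in this document). The toy quadratic objective and the guard against vanishing curvature are our own additions.

```python
import numpy as np

A = np.diag([0.9, 0.4, 0.1])          # toy quadratic f(theta) = 0.5 * theta^T A theta
grad_f = lambda theta: A @ theta

def read(M, theta):
    # Classical BFGS read: step direction g = -M * grad f(theta).
    return -M @ grad_f(theta)

def write(M, theta, g):
    # Classical BFGS write: low-rank update of the inverse Hessian estimate.
    s = g                                  # theta_{t+1} - theta_t
    y = grad_f(theta + g) - grad_f(theta)  # change in gradient
    if y @ s < 1e-12:                      # guard once the iterate has converged
        return M
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ M @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

theta, M = np.array([1.0, -2.0, 3.0]), np.eye(3)
for _ in range(8):
    g = read(M, theta)        # g_t = read(M_t, theta_t)
    M = write(M, theta, g)    # M_{t+1} = write(M_t, theta_t, g_t)
    theta = theta + g         # theta_{t+1} = theta_t + g_t
print(np.linalg.norm(grad_f(theta)))  # gradient norm shrinks toward zero
```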
1606.04199 | 40 | Models    F-F  ne  nd  Col  width  BLEU
Deep-Att  Yes  9   7   2    512    37.7
Deep-Att  Yes  9   7   1    512    36.6
Deep-Att  Yes  9   7   1    1024   36.7
Table 6: Comparison of encoders with different number of columns and LSTM layer width.
English-to-German: We also validate our deep topology on the English-to-German task. The English-to-German task is considered a relatively more difficult task because of the lower similarity between these two languages. Since the German vocabulary is much larger than the French vocabulary, we select the 160K most frequent words as the target vocabulary. All the other hyper parameters are exactly the same as those in the English-to-French task.
We list our single model Deep-Att performance in Tab. 7. Our single model result with BLEU=20.6 is similar to the conventional SMT result of 20.7 (Buck et al., 2014). We also outperform the shallow attention models as shown in the first two lines in Tab. 7. All the results are consistent with those in the English-to-French task. | 1606.04199#40 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04460 | 40 | In more detail, the encoder receives an observation, x, and outputs the parameter-values for a distribution of messages, q(z|x = x). The communication channel determines the cost of a message by a prior distribution over messages p(z). The decoder receives a message, z, drawn at random from q(z|x = x) and decodes it by outputting the parameters of a distribution over observations p(x|z = z). VAEs are trained to minimise the cost of exactly recovering the original observation, given by the sum of the expected communication cost KL(q(z|x) || p(z)) and the expected correction cost E[−log p(x = x|z)] (a short numerical sketch of this objective is given at the end of this entry). In all our experiments, x ∈ R^7056 (84 by 84 gray-scale pixels, with range [0, 1]), and z ∈ R^32. We chose distributions q(z|x), p(z), and p(x|z) to be Gaussians with diagonal covariance matrices. In all experiments the encoder network has four convolutional [14] layers using {32, 32, 64, 64} kernels respectively, kernel sizes {4, 5, 5, 4}, kernel strides {2, 2, | 1606.04460#40 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
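The chunk 1606.04460#40 above describes the VAE training objective in prose. Below is a minimal NumPy sketch of that objective under the standard reading, where the communication cost is KL(q(z|x) || p(z)) and the correction cost is the expected negative log-likelihood of the observation under p(x|z); `encode`, `decode`, and `rng` are hypothetical placeholders rather than the authors' networks.

```python
import numpy as np

def kl_to_standard_normal(mu, log_std):
    # Communication cost: KL( N(mu, diag(std^2)) || N(0, I) ), summed over the 32 latent dims.
    return 0.5 * np.sum(np.exp(2 * log_std) + mu ** 2 - 1.0 - 2 * log_std)

def gaussian_nll(x, mu, log_std):
    # Correction cost: -log p(x | z) for a diagonal Gaussian over the 7056 pixels.
    return np.sum(0.5 * ((x - mu) / np.exp(log_std)) ** 2 + log_std + 0.5 * np.log(2 * np.pi))

def vae_loss(x, encode, decode, rng):
    # encode(x) -> (mu_z, log_std_z) and decode(z) -> (mu_x, log_std_x) are placeholders.
    mu_z, log_std_z = encode(x)
    z = mu_z + np.exp(log_std_z) * rng.standard_normal(mu_z.shape)  # reparameterised sample from q(z|x)
    mu_x, log_std_x = decode(z)
    return kl_to_standard_normal(mu_z, log_std_z) + gaussian_nll(x, mu_x, log_std_x)
```

The reparameterised sample keeps the loss differentiable with respect to the encoder outputs, which is what allows both networks to be trained by gradient descent.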
1606.04474 | 40 | gt = read(Mt, θt),   θt+1 = θt + gt,   Mt+1 = write(Mt, θt, gt).
Here we have packed up all of the details of the BFGS algorithm into the suggestively named read and write operations, which operate on the inverse Hessian approximation Mt. In BFGS these operations have specific forms, for example read(Mt, θt) = −Mt ∇f(θt) is a specific matrix-vector multiplication and the BFGS write operation corresponds to a particular low-rank update of Mt.
In this work we preserve the structure of the BFGS updates, but discard their particular form. More speciï¬cally the read operation remains a matrix-vector multiplication but the form of the vector used is learned. Similarly, the write operation remains a low-rank update, but the vectors involved are also learned. Conveniently, this structure of interaction with a large dynamically updated state corresponds in a fairly direct way to the architecture of a Neural Turing Machine (NTM), where Mt corresponds to the NTM memory [Graves et al., 2014]. | 1606.04474#40 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
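For reference, the read/write abstraction in chunk 1606.04474#40 can be made concrete by writing classical BFGS in exactly that form: read is a matrix-vector product with the inverse-Hessian approximation and write is a low-rank update. This sketch shows the standard algorithm, not the learned optimizer; `grad_fn` and the fixed step size are assumptions (BFGS would normally use a line search).

```python
import numpy as np

def bfgs_read(M, grad):
    # read(M_t, theta_t): matrix-vector product with the inverse-Hessian approximation.
    return -M @ grad

def bfgs_write(M, s, y):
    # write(M_t, theta_t, g_t): the standard low-rank BFGS update of M, given the
    # parameter step s and gradient change y (assumes the curvature condition y @ s > 0).
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ M @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

def bfgs_step(theta, M, grad_fn, lr=1.0):
    g = grad_fn(theta)
    theta_new = theta + lr * bfgs_read(M, g)
    M_new = bfgs_write(M, theta_new - theta, grad_fn(theta_new) - g)
    return theta_new, M_new
```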
1606.04199 | 41 | Table 7 data (Methods / Data / Voc / BLEU):
RNNsearch (Jean, 2015), 4.5M, 50K, 16.5
RNNsearch-LV (Jean, 2015), 4.5M, 500K, 16.9
SMT (Buck, 2014), 4.5M, Full, 20.7
Deep-Att (Ours), 4.5M, 160K, 20.6
Table 7: English-to-German task: BLEU scores of single neural models. We also list the conventional SMT system for comparison.
# 4.4.2 Post processing
Two post-processing techniques are used to improve the performance further on the English-to-French task.
First, three Deep-Att models are built for ensemble results. They are initialized with different random parameters; in addition, the training corpus for these models is shuffled with different random seeds. We sum over the predicted probabilities of the target words and normalize the final distribution to generate the next word. It is shown in Tab. 8 that the model ensemble can improve the performance further to 38.9. In Luong et al. (2015) and Jean et al. (2015) there are eight models for the best scores, but we only use three models and we do not obtain further gain from more models. | 1606.04199#41 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
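Chunk 1606.04199#41 above describes ensembling by summing the predicted next-word probabilities of the three models and renormalising. A minimal sketch, assuming each model exposes a hypothetical `predict_next` method that returns a probability vector over the target vocabulary:

```python
import numpy as np

def ensemble_next_word_probs(models, source, target_prefix):
    # Sum the per-model distributions for the next target word, then renormalise.
    summed = np.sum([m.predict_next(source, target_prefix) for m in models], axis=0)
    return summed / summed.sum()
```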
1606.04460 | 41 | [14] layers using {32, 32, 64, 64} kernels respectively, kernel sizes {4, 5, 5, 4}, kernel strides {2, 2, 2, 2}, no padding, and ReLU [25] non-linearity. The convolutional layers are followed by a fully connected layer of 512 ReLU units, from which a linear layer outputs the means and log-standard-deviations of the approximate posterior q(z|x). The decoder is set up to mirror the encoder, with a fully connected layer of 512 ReLU units followed by four reverse convolutions [6] with {64, 64, 32, 32} kernels respectively, kernel sizes {4, 5, 5, 4}, kernel strides {2, 2, 2, 2}, no padding, followed by a reverse convolution with two output kernels, one for the mean and one for the log-standard-deviation of p(x|z). The standard deviation of each dimension in p(x|z) is floored at 0.05: it is set to 0.05 whenever the value output by the network is smaller. The VAEs were trained to model a million observations obtained by executing a random policy on each environment. The parameters of the VAEs were optimised by running 400,000 steps of stochastic-gradient descent using | 1606.04460#41 | Model-Free Episodic Control | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains. | http://arxiv.org/pdf/1606.04460 | Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis | stat.ML, cs.LG, q-bio.NC | null | null | stat.ML | 20160614 | 20160614 | [
{
"id": "1512.08457"
},
{
"id": "1604.00289"
}
] |
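One plausible PyTorch reading of the encoder described in chunk 1606.04460#41: four convolutions with {32, 32, 64, 64} kernels, sizes {4, 5, 5, 4}, stride 2, no padding, a 512-unit ReLU layer, and a linear layer producing the mean and log-standard-deviation of q(z|x). The 64 x 3 x 3 flattened size follows from applying those kernels and strides to an 84 x 84 grayscale input; this is an illustrative sketch, not the authors' code.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 3 * 3, 512), nn.ReLU())
        self.out = nn.Linear(512, 2 * latent_dim)  # means and log-standard-deviations of q(z|x)

    def forward(self, x):  # x: (batch, 1, 84, 84), pixel range [0, 1]
        mu, log_std = self.out(self.fc(self.conv(x))).chunk(2, dim=-1)
        return mu, log_std

mu, log_std = Encoder()(torch.zeros(1, 1, 84, 84))  # both have shape (1, 32)
```

The decoder would mirror this with four transposed convolutions, as the chunk describes.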
1606.04474 | 41 | Our NTM-BFGS optimizer uses an LSTM+GAC as a controller; however, instead of producing the update directly we attach one or more read and write heads to the controller. Each read head produces a read vector rt which is combined with the memory to produce a read result it which is fed back into the controller at the following time step. Each write head produces two outputs, a left write vector at and a right write vector bt. The two write vectors are used to update the memory state by accumulating their outer product. The read and write operation for a single head is diagrammed in Figure 14 and the way read and write heads are attached to the controller is depicted in Figure 15.
It can be shown that NTM-BFGS with one read head and 3 write heads can simulate inverse-Hessian BFGS, assuming that the controller can compute arbitrary (coordinatewise) functions and has access to 2 GACs.
# NTM-L-BFGS optimizer
In cases where memory is constrained we can follow the example of L-BFGS and maintain a low rank approximation of the full memory (vis. inverse Hessian). The simplest way to do this is to store a sliding history of the left and right write vectors, allowing us to form the matrix vector multiplication required by the read operation efï¬ciently.
% | 1606.04474#41 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
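A small illustrative sketch of the memory primitives described in chunks 1606.04474#41 and #42: a read head combines the memory with a learned read vector via a matrix-vector product, a write head accumulates an outer product of learned left and right write vectors, and the NTM-L-BFGS variant keeps only a sliding history of write-vector pairs instead of the dense memory. The LSTM controller that produces r, a, and b is omitted; none of this is the paper's implementation.

```python
import numpy as np

def ntm_read(M, r):
    # Read head: matrix-vector product of the memory with the learned read vector.
    return M @ r

def ntm_write(M, a, b):
    # Write head: low-rank update, accumulating the outer product of the write vectors.
    return M + np.outer(a, b)

def lbfgs_style_read(history, r):
    # NTM-L-BFGS: never materialise M; with M implicitly sum_i outer(a_i, b_i),
    # the read M @ r becomes sum_i a_i * (b_i @ r) over the stored history.
    return sum(a * (b @ r) for a, b in history)
```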
1606.04199 | 42 | Table 8 data (Methods / Model / Data / Voc / BLEU):
Deep-ED, Single, 36M, 80K, 36.3
Deep-Att, Single, 36M, 80K, 37.7
Deep-Att, Single+PosUnk, 36M, 80K, 39.2
Deep-Att, Ensemble, 36M, 80K, 38.9
Deep-Att, Ensemble+PosUnk, 36M, 80K, 40.4
SMT, Durrani 2014, 36M, Full, 37.0
Enc-Dec, Ensemble+PosUnk, 36M, 80K, 37.5
Table 8: BLEU scores of different models. The first two blocks are our results of two single models and models with post-processing. In the last block we list two baselines of the best conventional SMT system and NMT system. | 1606.04199#42 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04474 | 42 | [Figure 15 diagram omitted; caption follows.]
Figure 15: Left: Interaction between the controller and the external memory in NTM-BFGS. The controller is composed of replicated coordinatewise LSTMs (possibly with GACs), but the read and write operations are global across all coordinates. Right: A single LSTM for the kth coordinate in the NTM-BFGS controller. Note that here we have dropped the time index t to simplify notation.
17 | 1606.04474#42 | Learning to learn by gradient descent by gradient descent | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art. | http://arxiv.org/pdf/1606.04474 | Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas | cs.NE, cs.LG | null | null | cs.NE | 20160614 | 20161130 | [] |
1606.04199 | 43 | Second, we recover the unknown words in the generated sequences with the Positional Unknown (PosUnk) model introduced in (Luong et al., 2015). The full parallel corpus is used to obtain the word mappings (Liang et al., 2006). We ï¬nd this method provides an additional 1.5 BLEU points, which is consistent with the conclusion in Luong et al. (2015). We obtain the new BLEU score of 39.2 with a single Deep-Att model. For the ensemble models of Deep-Att, the BLEU score rises to 40.4. In the last two lines, we list the conventional SMT model (Durrani et al., 2014) and the previous best neural models based system Enc-Dec (Luong et al., 2015) for comparison. We ï¬nd our best score outperforms the previous best score by nearly 3 points.
# 4.5 Analysis
# 4.5.1 Length | 1606.04199#43 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
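Chunk 1606.04199#43 recovers unknown words with the PosUnk model of Luong et al. (2015). A hedged sketch of that post-processing step, assuming the aligned source position for each generated token is already available (in PosUnk it is predicted by the model) and that `word_map` is the word mapping obtained from the parallel corpus:

```python
def replace_unknowns(target_tokens, aligned_positions, source_tokens, word_map):
    # Replace each predicted <unk> by the dictionary translation of its aligned source
    # word, falling back to copying the source word itself.
    out = []
    for i, tok in enumerate(target_tokens):
        if tok == "<unk>":
            src = source_tokens[aligned_positions[i]]
            out.append(word_map.get(src, src))
        else:
            out.append(tok)
    return out
```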
1606.04199 | 44 | # 4.5 Analysis
# 4.5.1 Length
On the English-to-French task, we analyze the effect of the source sentence length on our mod- els as shown in Fig. 3. Here we show ï¬ve curves: our Deep-Att single model, our Deep-Att ensemble model, our Deep-ED model, a previously proposed Enc-Dec model with four layers (Sutskever et al., 2014) and an SMT model (Durrani et al., 2014). We ï¬nd our Deep-Att model works better than the
[Figure 3 plot omitted: BLEU score vs. source sentence length for the five systems listed in the caption below; x-axis: sentences grouped by length.]
Figure 3: BLEU scores vs. source sequence length. Five lines are our Deep-Att single model, Deep-Att ensemble model, our Deep-ED model, previous Enc-Dec model with four layers and SMT model. | 1606.04199#44 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
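Chunk 1606.04199#44 reports BLEU as a function of source sentence length (Figure 3). A sketch of that analysis, assuming a user-supplied corpus-level BLEU function with signature `corpus_bleu(references, hypotheses)`; the bucket edges are illustrative.

```python
def bleu_by_source_length(examples, corpus_bleu, edges=(10, 20, 30, 40, 50, 60, 70, 80)):
    # Group (source, reference, hypothesis) token-list triples by source length,
    # then score each bucket with the supplied corpus-level BLEU function.
    buckets = {}
    for src, ref, hyp in examples:
        key = next((e for e in edges if len(src) <= e), float("inf"))
        buckets.setdefault(key, []).append((ref, hyp))
    return {k: corpus_bleu([r for r, _ in v], [h for _, h in v])
            for k, v in sorted(buckets.items())}
```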
1606.04199 | 45 | previous two models (Enc-Dec and SMT) on nearly all sentence lengths. It is also shown that for very long sequences with length over 70 words, the performance of our Deep-Att does not degrade when compared to another NMT model, Enc-Dec. Our Deep-ED also has much better performance than the shallow Enc-Dec model on nearly all lengths, although for long sequences it degrades and starts to fall behind Deep-Att.
# 4.5.2 Unknown words
Next we look into the detail of the effect of un- known words on the English-to-French task. We select the subset without unknown words on target sentences from the original test set. There are 1705 such sentences (56.8%). We compute the BLEU scores on this subset and the results are shown in Tab. 9. We also list the results from SMT model (Durrani et al., 2014) as a comparison.
We ï¬nd that the BLEU score of Deep-Att on this subset rises to 40.3, which has a gap of 2.6 with | 1606.04199#45 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
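Chunk 1606.04199#45 above builds the test subset containing only sentences without unknown words (1705 sentences, 56.8% of the test set). A minimal sketch of that filtering step, assuming references are token lists and `target_vocab` is the 80K target vocabulary:

```python
def subset_without_unknowns(test_set, target_vocab):
    # Keep only test pairs whose reference translation is fully covered by the target
    # vocabulary, i.e. whose references would contain no unknown words.
    return [(src, ref) for src, ref in test_set if all(w in target_vocab for w in ref)]
```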
1606.04199 | 46 | We ï¬nd that the BLEU score of Deep-Att on this subset rises to 40.3, which has a gap of 2.6 with
Table 9 data (Model / Test set / Ratio(%) / BLEU):
Deep-Att, Full, 100.0, 37.7
Ensemble, Full, 100.0, 38.9
SMT (Durrani), Full, 100.0, 37.0
Deep-Att, Subset, 56.8, 40.3
Ensemble, Subset, 56.8, 41.4
SMT (Durrani), Subset, 56.8, 37.5
Table 9: BLEU scores of the subset of the test set without considering unknown words.
[Figure 4 plot omitted: training-set token error rate (x-axis) vs. test-set token error rate (y-axis) for the three model depths described in the caption below.]
Figure 4: Token error rate on train set vs. test set. Square: Deep-Att (ne = 9, nd = 7). Circle: Deep-Att (ne = 5, nd = 3). Triangle: Deep-Att (ne = 1, nd = 1). | 1606.04199#46 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04199 | 47 | the score 37.7 on the full test set. On this subset, the SMT model achieves 37.5, which is similar to its score 37.0 on the full test set. This suggests that the difficulty on this subset is not much different from that on the full set. We therefore attribute the larger gap for Deep-Att to the existence of unknown words. We also compute the BLEU score of the ensemble model on this subset and obtain 41.4. As a reference related to human performance, in Sutskever et al. (2014) it has been tested that the BLEU score of oracle re-scoring of the LIUM 1000-best results (Schwenk, 2014) is 45.
# 4.5.3 Over-ï¬tting
Deep models have more parameters, and thus have a stronger ability to ï¬t the large data set. However, our experimental results suggest that deep models are less prone to the problem of over-ï¬tting. In Fig. 4, we show three results from models with a different depth on the English-to-French task. These three models are evaluated by token error rate, which is deï¬ned as the ratio of incorrectly predicted | 1606.04199#47 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04199 | 48 | words in the whole target sequence with correct historical input. The curve with square marks corresponds to Deep-Att with ne = 9 and nd = 7. The curve with circle marks corresponds to ne = 5 and nd = 3. The curve with triangle marks corresponds to ne = 1 and nd = 1. We find that the deep model has better performance on the test set when the token error rate is the same as that of the shallow models on the training set. This shows that, with decreased token error rate, the deep model is more advantageous in avoiding the over-fitting phenomenon. We only plot the early training stage curves because, during the late training stage, the curves are not smooth.
# 5 Conclusion
With the introduction of fast-forward connections to the deep LSTM network, we build a fast path with neither non-linear transformations nor recurrent computation to propagate the gradients from the top to the deep bottom. On this path, gradients decay much more slowly than in the standard deep network. This enables us to build the deep topology of NMT models. | 1606.04199#48 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
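Chunks 1606.04199#47 and #48 evaluate models by token error rate: the fraction of target tokens predicted incorrectly when the correct history is fed in (teacher forcing). A sketch under that definition; `predict_next` is a hypothetical interface returning the argmax next token.

```python
def token_error_rate(model, source, reference):
    # Count positions where the greedy next-token prediction, given the correct
    # target history, differs from the reference token.
    errors = sum(model.predict_next(source, reference[:t]) != reference[t]
                 for t in range(len(reference)))
    return errors / len(reference)
```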
1606.04199 | 49 | We trained NMT models with a depth of 16, including 25 LSTM layers, and evaluated them mainly on the WMT'14 English-to-French translation task. This is the deepest topology that has been investigated in the NMT area on this task. We showed that our Deep-Att exhibits a 6.2 BLEU point improvement over the previous best single model, achieving a 37.7 BLEU score. This single end-to-end NMT model outperforms the best conventional SMT system (Durrani et al., 2014) and achieves state-of-the-art performance. After utilizing unknown word processing and a model ensemble of three models, we obtained a BLEU score of 40.4, an improvement of 2.9 BLEU points over the previous best result. When evaluated on the subset of the test corpus without unknown words, our model achieves 41.4. Our model is also validated on the more difficult English-to-German task.
Our model is also efï¬cient in sequence genera- tion. The best results from both a single model and model ensemble are obtained with a beam size of 3, much smaller than previous NMT systems where beam size is about 12 (Jean et al., 2015; Sutskever | 1606.04199#49 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04199 | 50 | et al., 2014). From our analysis, we ï¬nd that deep models are more advantageous for learning for long sequences and that the deep topology is resistant to the over-ï¬tting problem.
We tried deeper models and did not obtain further improvements with our current topology and training techniques. However, the depth of 16 is not very deep compared to the models in computer vision (He et al., 2016). We believe we can benefit from deeper models, with new designs of topologies and training techniques, which remain as our future work.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of International Conference on Learning Representations. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166.
Yoshua Bengio, 2012. Practical Recommendations for Gradient-Based Training of Deep Architectures, pages 437â478. Springer Berlin Heidelberg, Berlin, Heidel- berg. | 1606.04199#50 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04199 | 51 | Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram counts and language models from the common crawl. In Proceedings of the Language Resources and Evaluation Conference.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Empirical Methods in Natural Language Processing.
Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation.
Mikel L. Forcada and Ramón P. Ñeco. 1997. Recursive hetero-associative memories for translation. In Biological and Artificial Computation: From Neuroscience to Technology, Berlin, Heidelberg. Springer Berlin Heidelberg. | 1606.04199#51 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04199 | 52 | Alex Graves, Marcus Liwicki, Santiago Fernandez, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. 2009. A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5):855–868.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In IEEE Conference on Computer Vision and Pattern Recognition.
Karl Moritz Hermann, Tom´aËs KoËcisk´y, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems.
Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Im- proving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735â 1780. | 1606.04199#52 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04199 | 53 | Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vo- cabulary for neural machine translation. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the Empirical Methods in Natural Language Processing. Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2016. Grid long short-term memory. In Proceedings of International Conference on Learning Representa- tions.
Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representa- tions. | 1606.04199#53 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04199 | 54 | Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representa- tions.
P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to Proceedings of the IEEE, document recognition. 86(11):2278â2324.
Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- In Proceedings of the North ment by agreement. American Chapter of the Association of Computa- tional Linguistics on Human Language Technology. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Pro- ceedings of the 53rd Annual Meeting of the Associa- tion for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Language Pro- cessing. | 1606.04199#54 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04199 | 55 | Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan L. Yuille. 2015. Deep captioning with multimodal recurrent neural networks (m-RNN). In Proceedings of International Conference on Learn- ing Representations.
Holger Schwenk. 2014. http://www-lium.univ-lemans.fr/~schwenk/cslm joint paper [online; accessed 03-September-2014]. University Le Mans. Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. In Proceedings of the 32nd International Conference on Machine Learning, Deep Learning Workshop. | 1606.04199#55 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.04199 | 56 | Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Se- quence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Ser- manet, Scott Reed, Dragomir Anguelov, Dumitru Er- han, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In IEEE Con- ference on Computer Vision and Pattern Recognition. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running aver- age of its recent magnitude. COURSERA: Neural Net- works for Machine Learning, 4.
Oriol Vinyals and Quoc Le. 2015. A neural conver- In Proceedings of the 32nd Interna- sational model. tional Conference on Machine Learning, Deep Learn- ing Workshop.
Kaisheng Yao, Trevor Cohn, Katerina Vylomova, Kevin Duh, and Chris Dyer. 2015. Depth-gated LSTM. arXiv:1508.03790.
Yang Yu, Wei Zhang, Chung-Wei Hang, Bing Xiang, and Bowen Zhou. 2015. Empirical study on deep learning models for QA. arXiv:1510.07526. | 1606.04199#56 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task. | http://arxiv.org/pdf/1606.04199 | Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu | cs.CL, cs.LG | TACL 2016 | null | cs.CL | 20160614 | 20160723 | [
{
"id": "1508.03790"
},
{
"id": "1510.07526"
}
] |
1606.03622 | 0 | arXiv:1606.03622v1 [cs.CL] 11 Jun 2016
# Data Recombination for Neural Semantic Parsing
# Robin Jia Computer Science Department Stanford University [email protected]
Percy Liang Computer Science Department Stanford University [email protected]
# Abstract
Modeling crisp logical regularities is crucial in semantic parsing, making it difficult for neural models with no task-specific prior knowledge to achieve good results. In this paper, we introduce data recombination, a novel framework for injecting such prior knowledge into a model. From the training data, we induce a high-precision synchronous context-free grammar, which captures important conditional independence properties commonly found in semantic parsing. We then train a sequence-to-sequence recurrent network (RNN) model with a novel attention-based copying mechanism on datapoints sampled from this grammar, thereby teaching the model about these structural properties. Data recombination improves the accuracy of our RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.
Original Examples what are the major cities in utah ? what states border maine ? | 1606.03622#0 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 1 | Original Examples what are the major cities in utah ? what states border maine ?
# ↓ Induce Grammar Synchronous CFG ↓ Sample New Examples
# Recombinant Examples
what are the major cities in [states border [maine]] ? what are the major cities in [states border [utah]] ? what states border [states border [maine] ] ? what states border [states border [utah]] ?
# ↓ Train Model Sequence-to-sequence RNN
Figure 1: An overview of our system. Given a dataset, we induce a high-precision synchronous context-free grammar. We then sample from this grammar to generate new ârecombinantâ exam- ples, which we use to train a sequence-to-sequence RNN.
# Introduction | 1606.03622#1 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 2 | # Introduction
Semantic parsing, the precise translation of natural language utterances into logical forms, has many applications, including question answering (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Liang et al., 2011; Berant et al., 2013), instruction following (Artzi and Zettlemoyer, 2013b), and regular expression generation (Kushman and Barzilay, 2013). Modern semantic parsers (Artzi and Zettlemoyer, 2013a; Berant et al., 2013) are complex pieces of software, requiring hand-crafted features, lexicons, and grammars.
have made swift inroads into many structured pre- diction tasks in NLP, including machine trans- lation (Sutskever et al., 2014; Bahdanau et al., 2014) and syntactic parsing (Vinyals et al., 2015b; Dyer et al., 2015). Because RNNs make very few domain-speciï¬c assumptions, they have the poten- tial to succeed at a wide variety of tasks with min- imal feature engineering. However, this ï¬exibil- ity also puts RNNs at a disadvantage compared to standard semantic parsers, which can generalize naturally by leveraging their built-in awareness of logical compositionality. | 1606.03622#2 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 3 | Meanwhile, recurrent neural networks (RNNs)
In this paper, we introduce data recombination, a generic framework for declaratively inject- GEO x: "what is the population of iowa ?" y: _answer ( NV , ( _population ( NV , V1 ) , _const ( V0 , _stateid ( iowa ) ) ) ) ATIS x: "can you list all flights from chicago to milwaukee" y: ( _lambda $0 e ( _and ( _flight $0 ) ( _from $0 chicago : _ci ) ( _to $0 milwaukee : _ci ) ) ) Overnight x: "when is the weekly standup" y: ( call listValue ( call getProperty meeting.weekly_standup ( string start_time ) ) )
Figure 2: One example from each of our domains. We tokenize logical forms as shown, thereby cast- ing semantic parsing as a sequence-to-sequence task. | 1606.03622#3 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
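Chunks 1606.03622#3 and #4 note that logical forms are tokenized as shown in Figure 2, casting semantic parsing as a sequence-to-sequence task. One simple way to produce such a tokenization; the regular expression is illustrative, not the paper's preprocessing code.

```python
import re

def tokenize_logical_form(lf):
    # Give every parenthesis and comma its own token and split the rest on whitespace,
    # so the logical form can be predicted as a flat token sequence.
    return re.findall(r"[(),]|[^\s(),]+", lf)

tokenize_logical_form("_answer(NV,(_population(NV,V1),_const(V0,_stateid(iowa))))")
# -> ['_answer', '(', 'NV', ',', '(', '_population', '(', 'NV', ',', 'V1', ')', ...]
```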
1606.03622 | 4 | Figure 2: One example from each of our domains. We tokenize logical forms as shown, thereby cast- ing semantic parsing as a sequence-to-sequence task.
In this paper, we introduce data recombination, a generic framework for declaratively injecting prior knowledge into a domain-general structured prediction model. In data recombination, prior knowledge about a task is used to build a high-precision generative model that expands the empirical distribution by allowing fragments of different examples to be combined in particular ways. Samples from this generative model are then used to train a domain-general model. In the case of semantic parsing, we construct a generative model by inducing a synchronous context-free grammar (SCFG), creating new examples such as those shown in Figure 1; our domain-general model is a sequence-to-sequence RNN with a novel attention-based copying mechanism. Data recombination boosts the accuracy of our RNN model on three semantic parsing datasets. On the GEO dataset, data recombination improves test accuracy by 4.3 percentage points over our baseline RNN, leading to new state-of-the-art results for models that do not use a seed lexicon for predicates.
# 2 Problem statement
We cast semantic parsing as a sequence-to-sequence task. The input utterance x is a sequence of words x_1, . . . , x_m ∈ V^(in), the input vocabulary; similarly, the output logical form y is a sequence of tokens y_1, . . . , y_n ∈ V^(out), the output vocabulary. A linear sequence of tokens might appear to lose the hierarchical structure of a logical form, but there is precedent for this choice: Vinyals et al. (2015b) showed that an RNN can reliably predict tree-structured outputs in a linear fashion.
We evaluate our system on three existing semantic parsing datasets. Figure 2 shows sample input-output pairs from each of these datasets.
⢠GeoQuery (GEO) contains natural language questions about US geography paired with corresponding Prolog database queries. We use the standard split of 600 training exam- ples and 280 test examples introduced by Zettlemoyer and Collins (2005). We prepro- cess the logical forms to De Brujin index no- tation to standardize variable naming.
• ATIS (ATIS) contains natural language queries for a flights database paired with corresponding database queries written in lambda calculus. We train on 4473 examples and evaluate on the 448 test examples used by Zettlemoyer and Collins (2007).
• Overnight (OVERNIGHT) contains logical forms paired with natural language paraphrases across eight varied subdomains. Wang et al. (2015) constructed the dataset by generating all possible logical forms up to some depth threshold, then getting multiple natural language paraphrases for each logical form from workers on Amazon Mechanical Turk. We evaluate on the same train/test splits as Wang et al. (2015).
In this paper, we only explore learning from logical forms. In the last few years, there has been an emergence of semantic parsers learned from denotations (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013; Artzi and Zettlemoyer, 2013b). While our system cannot directly learn from denotations, it could be used to rerank candidate derivations generated by one of these other systems.
# 3 Sequence-to-sequence RNN Model
Our sequence-to-sequence RNN model is based on existing attention-based neural machine translation models (Bahdanau et al., 2014; Luong et al., 2015a), but also includes a novel attention-based copying mechanism. Similar copying mechanisms have been explored in parallel by Gu et al. (2016) and Gulcehre et al. (2016).
# 3.1 Basic Model
Encoder. The encoder converts the input sequence x_1, . . . , x_m into a sequence of context-sensitive embeddings b_1, . . . , b_m using a bidirectional RNN (Bahdanau et al., 2014). First, a word embedding function φ^(in) maps each word x_i to a fixed-dimensional vector. These vectors are fed as input to two RNNs: a forward RNN and a backward RNN. The forward RNN starts with an initial hidden state h^F_0, and generates a sequence of hidden states h^F_1, . . . , h^F_m by repeatedly applying the recurrence
h^F_i = LSTM(φ^(in)(x_i), h^F_{i−1}).   (1)
The recurrence takes the form of an LSTM (Hochreiter and Schmidhuber, 1997). The backward RNN similarly generates hidden states h^B_m, . . . , h^B_1 by processing the input sequence in reverse order. Finally, for each input position i, we define the context-sensitive embedding b_i to be the concatenation of h^F_i and h^B_i.
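As a concrete illustration, here is a minimal sketch of this bidirectional encoder in Python/NumPy. It is not the authors' implementation: the rnn_step function is a simple tanh recurrence standing in for the LSTM cell, and the weight matrices, dimensions, and embedding inputs are hypothetical.

```python
import numpy as np

H, E = 4, 3                      # hidden and embedding sizes (illustrative)
rng = np.random.default_rng(0)
W_x = rng.normal(size=(H, E))
W_h = rng.normal(size=(H, H))

def rnn_step(x_emb, h_prev):
    # Stand-in for LSTM(phi_in(x_i), h_{i-1}) in Equation (1).
    return np.tanh(W_x @ x_emb + W_h @ h_prev)

def encode(embeddings):
    """Return context-sensitive embeddings b_i = [h^F_i ; h^B_i]."""
    m = len(embeddings)
    forward, backward = [], [None] * m
    h = np.zeros(H)
    for i in range(m):                   # forward RNN, left to right
        h = rnn_step(embeddings[i], h)
        forward.append(h)
    h = np.zeros(H)
    for i in reversed(range(m)):         # backward RNN, right to left
        h = rnn_step(embeddings[i], h)
        backward[i] = h
    return [np.concatenate([f, b]) for f, b in zip(forward, backward)]

b = encode([rng.normal(size=E) for _ in range(5)])   # five input words
```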
Decoder. The decoder is an attention-based model (Bahdanau et al., 2014; Luong et al., 2015a) that generates the output sequence y_1, . . . , y_n one token at a time. At each time step j, it writes y_j based on the current hidden state s_j, then updates the hidden state to s_{j+1} based on s_j and y_j. Formally, the decoder is defined by the following equations:
s_1 = tanh(W^(s) [h^F_m, h^B_1]).   (2)
e_{ji} = s_j^T W^(a) b_i.   (3)
α_{ji} = exp(e_{ji}) / Σ_{i'=1}^m exp(e_{ji'}).   (4)
c_j = Σ_{i=1}^m α_{ji} b_i.   (5)
P(y_j = w | x, y_{1:j−1}) ∝ exp(U_w [s_j, c_j]).   (6)
s_{j+1} = LSTM([φ^(out)(y_j), c_j], s_j).   (7)
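A hedged sketch of one decoder step, mirroring Equations (3)-(6) above in Python/NumPy; the parameter matrices and the LSTM update of Equation (7) are illustrative stand-ins rather than the trained model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def decoder_step(s_j, b, W_a, U):
    """One attention step: returns P(y_j | x, y_{1:j-1}) and the context c_j."""
    e = np.array([s_j @ W_a @ b_i for b_i in b])        # Eq. (3): attention scores e_ji
    alpha = softmax(e)                                  # Eq. (4): attention weights
    c_j = sum(a * b_i for a, b_i in zip(alpha, b))      # Eq. (5): context vector
    p_word = softmax(U @ np.concatenate([s_j, c_j]))    # Eq. (6): output distribution
    return p_word, c_j, alpha
```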
When not specified, i ranges over {1, . . . , m} and j ranges over {1, . . . , n}. Intuitively, the α_{ji}'s define a probability distribution over the input words, describing what words in the input the decoder is focusing on at time j. They are computed from the unnormalized attention scores e_{ji}. The matrices W^(s), W^(a), and U, as well as the embedding function φ^(out), are parameters of the model.
# 3.2 Attention-based Copying
In the basic model of the previous section, the next output word yj is chosen via a simple softmax over all words in the output vocabulary. However, this
model has difficulty generalizing to the long tail of entity names commonly found in semantic parsing datasets. Conveniently, entity names in the input often correspond directly to tokens in the output (e.g., "iowa" becomes iowa in Figure 2).
To capture this intuition, we introduce a new attention-based copying mechanism. At each time step j, the decoder generates one of two types of actions. As before, it can write any word in the output vocabulary. In addition, it can copy any input word x_i directly to the output, where the probability with which we copy x_i is determined by the attention score on x_i. Formally, we define a latent action a_j that is either Write[w] for some w ∈ V^(out) or Copy[i] for some i ∈ {1, . . . , m}. We then have
P(a_j = Write[w] | x, y_{1:j−1}) ∝ exp(U_w [s_j, c_j]),   (8)
P(a_j = Copy[i] | x, y_{1:j−1}) ∝ exp(e_{ji}).   (9)
The decoder chooses a_j with a softmax over all these possible actions; y_j is then a deterministic function of a_j and x. During training, we maximize the log-likelihood of y, marginalizing out a. Attention-based copying can be seen as a combination of a standard softmax output layer of an attention-based model (Bahdanau et al., 2014) and a Pointer Network (Vinyals et al., 2015a); in a Pointer Network, the only way to generate output is to copy a symbol from the input.
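The combined action distribution can be sketched as a single softmax over write and copy scores. This is an illustrative reimplementation of Equations (8)-(9), not the released code; the second helper shows the marginalization used during training over all actions that produce the same target token.

```python
import numpy as np

def action_distribution(write_scores, copy_scores):
    """P(a_j) over [Write[w] for each output word] + [Copy[i] for each input position]
    (Equations 8-9): one softmax over the concatenated unnormalized scores."""
    scores = np.concatenate([write_scores, copy_scores])
    scores = scores - scores.max()
    p = np.exp(scores)
    return p / p.sum()

def token_probability(p_actions, target, out_vocab, input_words):
    """P(y_j = target): sum the probabilities of every action that emits the target."""
    n_out = len(out_vocab)
    prob = p_actions[out_vocab.index(target)] if target in out_vocab else 0.0
    for i, w in enumerate(input_words):
        if w == target:                      # Copy[i] also emits this token
            prob += p_actions[n_out + i]
    return prob
```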
# 4 Data Recombination
# 4.1 Motivation
The main contribution of this paper is a novel data recombination framework that injects important prior knowledge into our oblivious sequence-to-sequence RNN. In this framework, we induce a high-precision generative model from the training data, then sample from it to generate new training examples. The process of inducing this generative model can leverage any available prior knowledge, which is transmitted through the generated examples to the RNN model. A key advantage of our two-stage approach is that it allows us to declare desired properties of the task which might be hard to capture in the model architecture.
[Figure 3 contents: starting from the training examples "what states border texas ?" and "what is the highest mountain in ohio ?", the figure lists the grammar rules created by ABSENTITIES, ABSWHOLEPHRASES, and CONCAT-2; see the caption below.]
Figure 3: Various grammar induction strategies illustrated on GEO. Each strategy converts the rules of an input grammar into rules of an output grammar. This figure shows the base case where the input grammar has rules ROOT → ⟨x, y⟩ for each (x, y) pair in the training dataset.
Our approach generalizes data augmentation, which is commonly employed to inject prior knowledge into a model. Data augmentation techniques focus on modeling invariances: transformations like translating an image or adding noise that alter the inputs x, but do not change the output y. These techniques have proven effective in areas like computer vision (Krizhevsky et al., 2012) and speech recognition (Jaitly and Hinton, 2013).
In semantic parsing, however, we would like to capture more than just invariance properties. Consider an example with the utterance "what states border texas ?". Given this example, it should be easy to generalize to questions where "texas" is replaced by the name of any other state: simply replace the mention of Texas in the logical form with the name of the new state. Underlying this phenomenon is a strong conditional independence principle: the meaning of the rest of the sentence is independent of the name of the state in question. Standard data augmentation is not sufficient to model such phenomena: instead of holding y fixed, we would like to apply simultaneous transformations to x and y such that the new x still maps to the new y. Data recombination addresses
this need.
# 4.2 General Setting
In the general setting of data recombination, we start with a training set D of (x, y) pairs, which defines the empirical distribution p̂(x, y). We then fit a generative model p̃(x, y) to p̂ which generalizes beyond the support of p̂, for example by splicing together fragments of different examples. We refer to examples in the support of p̃ as recombinant examples. Finally, to train our actual model p_θ(y | x), we maximize the expected value of log p_θ(y | x), where (x, y) is drawn from p̃.
# 4.3 SCFGs for Semantic Parsing
For semantic parsing, we induce a synchronous context-free grammar (SCFG) to serve as the backbone of our generative model p̃. An SCFG consists of a set of production rules X → ⟨α, β⟩, where X is a category (non-terminal), and α and β are sequences of terminal and non-terminal symbols. Any non-terminal symbols in α must be aligned to the same non-terminal symbol in β, and vice versa. Therefore, an SCFG defines a set of joint derivations of aligned pairs of strings. In our case, we use an SCFG to represent joint derivations of utterances x and logical forms y (which for us is just a sequence of tokens). After we induce an SCFG G from D, the corresponding generative model p̃(x, y) is the distribution over pairs (x, y) defined by sampling from G, where we choose production rules to apply uniformly at random.
It is instructive to compare our SCFG-based data recombination with WASP (Wong and Mooney, 2006; Wong and Mooney, 2007), which uses an SCFG as the actual semantic parsing model. The grammar induced by WASP must have good coverage in order to generalize to new inputs at test time. WASP also requires the implementation of an efficient algorithm for computing the conditional probability p(y | x). In contrast, our SCFG is only used to convey prior knowledge about conditional independence structure, so it only needs to have high precision; our RNN model is responsible for boosting recall over the entire input space. We also only need to forward sample from the SCFG, which is considerably easier to implement than conditional inference.
Below, we examine various strategies for inducing a grammar G from a dataset D. We first encode D as an initial grammar with rules ROOT → ⟨x, y⟩ for each (x, y) ∈ D. Next, we will define each grammar induction strategy as a mapping from an input grammar G_in to a new grammar G_out. This formulation allows us to compose grammar induction strategies (Section 4.3.4).
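To make the grammar machinery concrete, here is a hedged sketch of the SCFG backbone in Python: rules are stored as aligned (utterance, logical form) token sequences, the initial grammar has one ROOT rule per training example, and generation is plain forward sampling with uniformly random rule choices. It assumes each non-terminal name occurs at most once per rule, which suffices for the strategies described below; the authors' actual representation may differ.

```python
import random

def initial_grammar(dataset):
    """One ROOT -> <x, y> rule per training example (x and y are token lists)."""
    return {"ROOT": [(list(x), list(y)) for x, y in dataset]}

def sample(grammar, category="ROOT"):
    """Forward-sample an (utterance, logical form) pair from the SCFG."""
    alpha, beta = random.choice(grammar[category])
    x, y, expansions = [], [], {}
    for tok in alpha:                        # expand the utterance side
        if tok in grammar:                   # aligned non-terminal
            expansions[tok] = sample(grammar, tok)
            x.extend(expansions[tok][0])
        else:
            x.append(tok)
    for tok in beta:                         # reuse the same expansions on the LF side
        if tok in grammar:
            y.extend(expansions[tok][1])
        else:
            y.append(tok)
    return x, y
```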
# 4.3.1 Abstracting Entities
Our first grammar induction strategy, ABSENTITIES, simply abstracts entities with their types. We assume that each entity e (e.g., texas) has a corresponding type e.t (e.g., state), which we infer based on the presence of certain predicates in the logical form (e.g., stateid). For each grammar rule X → ⟨α, β⟩ in G_in, where α contains a token (e.g., "texas") that string matches an entity (e.g., texas) in β, we add two rules to G_out: (i) a rule where both occurrences are replaced with the type of the entity (e.g., state), and (ii) a new rule that maps the type to the entity (e.g., STATEID → ⟨"texas", texas⟩; we reserve the category name STATE for the next section). Thus, G_out generates recombinant examples that fuse most of one example with an entity found in a second example. A concrete example from the GEO domain is given in Figure 3.
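A sketch of ABSENTITIES under the same toy representation; the entities lexicon mapping surface strings to (logical-form symbol, type category) pairs is an assumption made for illustration.

```python
def abstract_entities(grammar, entities):
    """entities: e.g. {"texas": ("texas", "STATEID"), "ohio": ("ohio", "STATEID")}."""
    out = {cat: list(rules) for cat, rules in grammar.items()}
    for cat, rules in grammar.items():
        for alpha, beta in rules:
            for word, (symbol, typ) in entities.items():
                if word in alpha and symbol in beta:
                    # (i) abstract both occurrences with the entity's type
                    out[cat].append(([typ if t == word else t for t in alpha],
                                     [typ if t == symbol else t for t in beta]))
                    # (ii) map the type back to this particular entity
                    out.setdefault(typ, []).append(([word], [symbol]))
    return out
```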
# 4.3.2 Abstracting Whole Phrases
Our second grammar induction strategy, ABSWHOLEPHRASES, abstracts both entities and whole phrases with their types. For each grammar rule X → ⟨α, β⟩ in G_in, we add up to two rules to G_out. First, if α contains tokens that string match an entity in β, we replace both occurrences with the type of the entity, similarly to rule (i) from ABSENTITIES. Second, if we can infer that the entire expression β evaluates to a set of a particular type (e.g., state), we create a rule that maps the type to ⟨α, β⟩. In practice, we also use some simple rules to strip question identifiers from α, so that the resulting examples are more natural. Again, refer to Figure 3 for a concrete example.
This strategy works because of a more general conditional independence property: the meaning of any semantically coherent phrase is conditionally independent of the rest of the sentence, the cornerstone of compositional semantics. Note that this assumption is not always correct in general: for example, phenomena like anaphora that involve long-range context dependence violate this assumption. However, this property holds in most existing semantic parsing datasets.
# 4.3.3 Concatenation
The final grammar induction strategy is a surprisingly simple approach we tried that turns out to work. For any k ≥ 2, we define the CONCAT-k strategy, which creates two types of rules. First, we create a single rule that has ROOT going to a sequence of k SENTs. Then, for each root-level rule ROOT → ⟨α, β⟩ in G_in, we add the rule SENT → ⟨α, β⟩ to G_out. See Figure 3 for an example.
Unlike ABSENTITIES and ABSWHOLEPHRASES, concatenation is very general, and can be applied to any sequence transduction problem. Of course, it also does not introduce additional information about compositionality or independence properties present in semantic parsing. However, it does generate harder examples for the attention-based RNN, since the model must learn to attend to the correct parts of the now-longer input sequence. Related work has shown that training a model on more difficult examples can improve generalization, the most canonical case being dropout (Hinton et al., 2012; Wager et al., 2013).
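Because CONCAT-k only glues root-level rules together, it can also be sketched as a direct sampler over training pairs; the </s> separator token and the interface below are illustrative.

```python
import random

def sample_concat_k(dataset, k=2, sep="</s>"):
    """Glue k randomly chosen training pairs end to end on both sides."""
    x, y = [], []
    for j, (xj, yj) in enumerate(random.choices(dataset, k=k)):
        if j > 0:
            x.append(sep)
            y.append(sep)
        x.extend(xj)
        y.extend(yj)
    return x, y
```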
function TRAIN(dataset D, number of epochs T, number of examples to sample n)
  Induce grammar G from D
  Initialize RNN parameters θ randomly
  for each iteration t = 1, . . . , T do
    Compute current learning rate η_t
    Initialize current dataset D_t to D
    for i = 1, . . . , n do
      Sample new example (x', y') from G
      Add (x', y') to D_t
    end for
    Shuffle D_t
    for each example (x, y) in D_t do
      θ ← θ + η_t ∇ log p_θ(y | x)
    end for
  end for
end function
Figure 4: The training procedure with data recombination. We first induce an SCFG, then sample new recombinant examples from it at each epoch.
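A Python rendering of this procedure, reusing the sample function sketched in Section 4.3; the model interface (grad_log_likelihood, params) and the exact learning-rate schedule are assumptions for illustration, not the paper's Theano implementation.

```python
import random

def train(dataset, grammar, model, num_epochs=30, num_samples=None, lr0=0.1):
    """Figure 4 as code: mix freshly sampled recombinant examples into the data
    at every epoch and take SGD steps on log p_theta(y | x)."""
    num_samples = num_samples or len(dataset)        # rule of thumb from Section 5
    for epoch in range(num_epochs):
        lr = lr0 * 0.5 ** max(0, (epoch - 10) // 5)  # halve every 5 epochs after epoch 15
        epoch_data = list(dataset)
        epoch_data += [sample(grammar) for _ in range(num_samples)]
        random.shuffle(epoch_data)
        for x, y in epoch_data:
            grads = model.grad_log_likelihood(x, y)  # assumed model interface
            for name, g in grads.items():
                model.params[name] += lr * g         # gradient ascent on the log-likelihood
```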
# 4.3.4 Composition
We note that grammar induction strategies can be composed, yielding more complex grammars. Given any two grammar induction strategies f_1 and f_2, the composition f_1 ∘ f_2 is the grammar induction strategy that takes in G_in and returns f_1(f_2(G_in)). For the strategies we have defined, we can perform this operation symbolically on the grammar rules, without having to sample from the intermediate grammar f_2(G_in).
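Under the same sketch, composing strategies is just function composition over grammars; the combinations reported later (e.g., AWP + AE) can be built this way, assuming an abstract_whole_phrases function implemented analogously to abstract_entities above.

```python
def compose(f1, f2):
    """(f1 o f2)(G_in) = f1(f2(G_in))."""
    return lambda grammar: f1(f2(grammar))

# e.g. awp_plus_ae = compose(abstract_whole_phrases, abstract_entities)
```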
# 5 Experiments
We evaluate our system on three domains: GEO, ATIS, and OVERNIGHT. For ATIS, we report logical form exact match accuracy. For GEO and OVERNIGHT, we determine correctness based on denotation match, as in Liang et al. (2011) and Wang et al. (2015), respectively.
# 5.1 Choice of Grammar Induction Strategy
We note that not all grammar induction strategies make sense for all domains. In particular, we only apply ABSWHOLEPHRASES to GEO and OVERNIGHT. We do not apply ABSWHOLEPHRASES to ATIS, as the dataset has little nesting structure.
# Implementation Details
We tokenize logical forms in a domain-specific manner, based on the syntax of the formal language being used. On GEO and ATIS, we disallow copying of predicate names to ensure a fair comparison to previous work, as string matching between input words and predicate names is not commonly used. We prevent copying by prepending underscores to predicate tokens; see Figure 2 for examples.
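For example, the copying guard can be sketched as a single pass over the logical-form tokens; the predicate list is an assumed domain-specific input.

```python
def guard_predicates(lf_tokens, predicates):
    """Prepend '_' to predicate names so they never string-match an input word
    and therefore cannot be produced by the Copy action."""
    return ["_" + t if t in predicates else t for t in lf_tokens]

# guard_predicates(["answer", "(", "NV", ",", "population", ")"],
#                  predicates={"answer", "population", "const", "stateid"})
```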
On ATIS alone, when doing attention-based copying and data recombination, we leverage an external lexicon that maps natural language phrases (e.g., "kennedy airport") to entities (e.g., jfk:ap). When we copy a word that is part of a phrase in the lexicon, we write the entity associated with that lexicon entry. When performing data recombination, we identify entity alignments based on matching phrases and entities from the lexicon.
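A sketch of how a copied word could be rewritten through such a lexicon; the greedy longest-phrase matching shown here is an assumption about the details, and the example entries are hypothetical.

```python
def rewrite_copy(input_words, i, lexicon, max_len=3):
    """If the copied word input_words[i] lies inside a lexicon phrase, emit the
    associated entity instead of the raw word."""
    for span in range(max_len, 0, -1):               # prefer longer phrases
        for start in range(max(0, i - span + 1), i + 1):
            phrase = " ".join(input_words[start:start + span])
            if phrase in lexicon:
                return lexicon[phrase]
    return input_words[i]

# rewrite_copy(["flights", "to", "kennedy", "airport"], 2,
#              {"kennedy airport": "jfk:ap"})        # -> "jfk:ap"
```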
We run all experiments with 200 hidden units and 100-dimensional word vectors. We initialize all parameters uniformly at random within the interval [−0.1, 0.1]. We maximize the log-likelihood of the correct logical form using stochastic gradient descent. We train the model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs, starting after epoch 15. We replace word vectors for words that occur only once in the training set with a universal <unk> word vector. Our model is implemented in Theano (Bergstra et al., 2010). When performing data recombination, we sample a new round of recombinant examples from our grammar at each epoch. We add these examples to the original training dataset, randomly shuffle all examples, and train the model for the epoch. Figure 4 gives pseudocode for this training procedure. One important hyperparameter is how many examples to sample at each epoch: we found that a good rule of thumb is to sample as many recombinant examples as there are examples in the training dataset, so that half of the examples the model sees at each epoch are recombinant.
At test time, we use beam search with beam size 5. We automatically balance missing right parentheses by adding them at the end. On GEO and OVERNIGHT, we then pick the highest-scoring logical form that does not yield an executor error when the corresponding denotation is computed. On ATIS, we just pick the top prediction on the beam.
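The test-time post-processing can be sketched as two small helpers; the executor interface is an assumption, and on ATIS only the parenthesis balancing would apply.

```python
def balance_parentheses(tokens):
    """Append any missing right parentheses at the end of a predicted logical form."""
    depth = 0
    for t in tokens:
        if t == "(":
            depth += 1
        elif t == ")":
            depth = max(0, depth - 1)
    return tokens + [")"] * depth

def pick_prediction(beam, execute):
    """Return the highest-scoring candidate that executes without error
    (beam is assumed sorted by model score, best first)."""
    for candidate in beam:
        try:
            execute(balance_parentheses(candidate))
            return candidate
        except Exception:
            continue
    return beam[0] if beam else []
```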
# Impact of the Copying Mechanism
First, we measure the contribution of the attention-based copying mechanism to the model's overall performance.
| | No Copying | With Copying |
|---|---|---|
| GEO | 74.6 | 85.0 |
| ATIS | 69.9 | 76.3 |
| OVERNIGHT | 76.7 | 75.8 |
Table 1: Test accuracy on GEO, ATIS, and OVERNIGHT, both with and without copying. On OVERNIGHT, we average across all eight domains.
| | GEO | ATIS |
|---|---|---|
| Previous Work | | |
| Zettlemoyer and Collins (2007) | – | 84.6 |
| Kwiatkowski et al. (2010) | 88.9 | – |
| Liang et al. (2011) | 91.1 | – |
| Kwiatkowski et al. (2011) | 88.6 | 82.8 |
| Poon (2013) | – | 83.5 |
| Zhao and Huang (2015) | 88.9 | 84.2 |
| Our Model | | |
| No Recombination | 85.0 | 76.3 |
| ABSENTITIES | 85.4 | 79.9 |
| ABSWHOLEPHRASES | 87.5 | – |
| CONCAT-2 | 84.6 | 79.0 |
| CONCAT-3 | – | 77.5 |
| AWP + AE | 88.9 | – |
| AE + C2 | – | 78.8 |
| AWP + AE + C2 | 89.3 | – |
| AE + C3 | – | 83.3 |
Table 2: Test accuracy using different data recombination strategies on GEO and ATIS. AE is ABSENTITIES, AWP is ABSWHOLEPHRASES, C2 is CONCAT-2, and C3 is CONCAT-3.
On each task, we train and evaluate two models: one with the copying mechanism, and one without. Training is done without data recombination. The results are shown in Table 1.