Dataset columns (type and observed min/max length or value):

- doi: string, lengths 10–10
- chunk-id: int64, values 0–936
- chunk: string, lengths 401–2.02k
- id: string, lengths 12–14
- title: string, lengths 8–162
- summary: string, lengths 228–1.92k
- source: string, lengths 31–31
- authors: string, lengths 7–6.97k
- categories: string, lengths 5–107
- comment: string, lengths 4–398
- journal_ref: string, lengths 8–194
- primary_category: string, lengths 5–17
- published: string, lengths 8–8
- updated: string, lengths 8–8
- references: list
1606.04671
15
Figure 3: Illustration of different baselines and architectures. Baseline 1 is a single column trained on the target task; baseline 2 is a single column, pretrained on a source task and finetuned on the target task (output layer only); baseline 3 is the same as baseline 2 but the whole model is finetuned; and baseline 4 is a 2-column progressive architecture, with previous column(s) initialized randomly and frozen.

# 5.2 Pong Soup

The first evaluation domain is a set of synthetic variants of the Atari game of Pong ("Pong Soup") where the visuals and gameplay have been altered, thus providing a setting where we can be confident that there are transferable aspects of the tasks. The variants are Noisy (frozen Gaussian noise is added to the inputs); Black (black background); White (white background); Zoom (input is scaled by 75% and translated); V-flip, H-flip, and VH-flip (input is horizontally and/or vertically flipped). Example frames are shown in Fig. 2. The results of training two columns on the Pong variants, including all relevant baselines, are shown in Figure 4. Transfer scores are summarized over all target tasks in Table 1.
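The variant descriptions above are concrete enough to illustrate in code. Below is a minimal numpy sketch of how such altered frames could be generated; the noise scale, zoom placement, and background handling are assumptions for illustration, not the paper's actual preprocessing.

```python
import numpy as np

def make_pong_variant(frame: np.ndarray, variant: str, rng: np.random.Generator) -> np.ndarray:
    """Apply one 'Pong Soup'-style alteration to a grayscale frame with values in [0, 1]."""
    if variant == "noisy":      # frozen Gaussian noise added to the input
        noise = rng.normal(0.0, 0.1, size=frame.shape)  # noise scale is an assumption
        return np.clip(frame + noise, 0.0, 1.0)
    if variant == "white":      # white background: recolour background pixels
        return np.where(frame == 0.0, 1.0, frame)
    if variant == "h-flip":
        return frame[:, ::-1]
    if variant == "v-flip":
        return frame[::-1, :]
    if variant == "vh-flip":
        return frame[::-1, ::-1]
    if variant == "zoom":       # scale by 75% and translate
        h, w = frame.shape
        sh, sw = int(h * 0.75), int(w * 0.75)
        rows = (np.arange(sh) / 0.75).astype(int)       # nearest-neighbour downscale
        cols = (np.arange(sw) / 0.75).astype(int)
        out = np.zeros_like(frame)
        r0, c0 = (h - sh) // 2, (w - sw) // 2           # translation offset chosen arbitrarily
        out[r0:r0 + sh, c0:c0 + sw] = frame[np.ix_(rows, cols)]
        return out
    return frame  # "black": background recolouring details are not specified in the text

# "Frozen" noise means the same pattern is reused for every frame, so the
# generator would be seeded once per task rather than resampled per frame.
rng = np.random.default_rng(0)
noisy_frame = make_pong_variant(np.zeros((84, 84)), "noisy", rng)
```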
1606.04671#15
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04648
16
Traditional models include:

- QL: The query likelihood model based on Dirichlet smoothing [11], one of the best-performing language models.
- BM25: Based on the BM25 formula [8], another highly effective retrieval model (see the scoring sketch after this list).

Representation-based deep matching models include:

- DSSM: The DSSM model [3] uses fully connected layers to encode the query and document into two fixed-length vectors, then uses cosine similarity to compute the matching score. Since DSSM needs large-scale training data due to its huge parameter size, we directly used the released model¹ (trained on a large click-through dataset) in our experiments.
- CDSSM: The CDSSM model [9] is similar to DSSM, but uses convolutional layers to encode the query and document. For the same reason as DSSM, we also used the released model directly.
- ARC-I: The ARC-I model [2] uses convolutional layers to encode the two texts, and fully connected layers to aggregate the matching score.

Interaction-based deep matching models include:

- ARC-II: ARC-II [2] constructs local interactions by adding up word embeddings in a small context window, then makes use of convolutional layers to extract features from the interactions.
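For reference alongside the traditional baselines listed above, here is a minimal sketch of standard Okapi BM25 scoring; the k1 and b values and the toy corpus are assumptions, not the configuration used in the paper.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, num_docs, avg_doc_len, k1=1.2, b=0.75):
    """Okapi BM25 score of one document for a query; doc_freq maps term -> document frequency."""
    tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        df = doc_freq.get(term, 0)
        idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
        denom = tf[term] + k1 * (1.0 - b + b * doc_len / avg_doc_len)
        score += idf * tf[term] * (k1 + 1.0) / denom
    return score

# Toy corpus, purely illustrative.
docs = [["deep", "matching", "models"], ["bm25", "retrieval", "models", "models"]]
doc_freq = Counter(t for d in docs for t in set(d))
avg_len = sum(len(d) for d in docs) / len(docs)
print(bm25_score(["matching", "models"], docs[0], doc_freq, len(docs), avg_len))
```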
1606.04648#16
A Study of MatchPyramid Models on Ad-hoc Retrieval
Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models.
http://arxiv.org/pdf/1606.04648
Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng
cs.IR
Neu-IR '16 SIGIR Workshop on Neural Information Retrieval
null
cs.IR
20160615
20160615
[]
1606.04648
17
up word embeddings in a small context window, then makes use of convolutional layers to extract features from the interactions.

Table 5: Comparison of different retrieval models over the TREC collection Robust04. † indicates models trained on a large click-through dataset.

| Type | Name | MAP | nDCG@20 | P@20 |
|---|---|---|---|---|
| Traditional Model | QL | 0.253 | 0.369 | 0.415 |
| Traditional Model | BM25 | 0.255 | 0.370 | 0.418 |
| Representation-Based Model | DSSM† | 0.095 | 0.171 | 0.201 |
| Representation-Based Model | CDSSM† | 0.067 | 0.125 | 0.146 |
| Representation-Based Model | ARC-I | 0.041 | 0.065 | 0.066 |
| Interaction-Based Model | ARC-II | 0.067 | 0.128 | 0.147 |
| Interaction-Based Model | MP-Gau | 0.232 | 0.327 | 0.411 |

The experimental results (see Table 5) show that MatchPyramid outperforms all the representation-based deep matching models. The major reason is that it can retain all the low-level matching signals which are important for ad-hoc retrieval. For ARC-II, using 1D convolution to generate interaction signals in a small context window seems not very

¹ http://research.microsoft.com/en-us/downloads/731572aa-98e4-4c50-b99d-ae3f0c9562b9/
1606.04648#17
A Study of MatchPyramid Models on Ad-hoc Retrieval
Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models.
http://arxiv.org/pdf/1606.04648
Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng
cs.IR
Neu-IR '16 SIGIR Workshop on Neural Information Retrieval
null
cs.IR
20160615
20160615
[]
1606.04671
17
We can make several observations from these results. Baseline 2 (single column, only output layer is finetuned; see Fig. 3) fails to learn the target task in most experiments and thus has negative transfer. This approach is quite standard in supervised learning settings, where features from ImageNet-trained nets are routinely repurposed for new domains. As expected, we observe high positive transfer with baseline 3 (single column, full finetuning), a well established paradigm for transfer. Progressive networks outperform this baseline however in terms of both median and mean score, with the difference being more pronounced for the latter. As the mean is more sensitive to outliers, this suggests that progressive networks are better able to exploit transfer when transfer is possible (i.e. when source and target domains are compatible). Fig. 4 (b) lends weight to this hypothesis, where progressive networks are shown to significantly outperform the baselines for particular game pairs. Progressive nets also compare favourably to baseline 4, confirming that progressive nets are indeed taking advantage of the features learned in previous columns.

# Detailed analysis

[Figure 5, panels (a)-(c): per-layer column sensitivity (AFS) for the transfers pong→h-flip, pong→zoom, pong→noisy, and noisy→pong; shading ranges from insensitive to sensitive.]
1606.04671#17
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04648
18
effective for matching. However, we find that the best-performing deep matching model, MP-Gau, still cannot compete with traditional retrieval models. The results indicate that the ad-hoc retrieval task may be quite different from other text matching tasks, such as paraphrase identification and question answering. We need further studies on the differences between these tasks to design better deep matching models.

# 4. CONCLUSIONS

In this paper, we apply the MatchPyramid model to the ad-hoc retrieval task and discuss the impact of different kernel sizes, pooling sizes and similarity functions. We find that pooling by paragraph length in the document, a good similarity function which can differentiate exact matching signals from semantic matching signals, and a relatively small kernel size are helpful for the ad-hoc retrieval task. Experiments show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models. These results encourage us to seek deeper understanding of the text matching task in ad-hoc retrieval and propose better models accordingly.
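The conclusions above stress the choice of similarity (interaction) function. The sketch below builds the word-by-word interaction matrix that MatchPyramid-style models convolve over, with a Gaussian-kernel option in the spirit of MP-Gau; the random embeddings and the exact kernel form are assumptions rather than the paper's setup.

```python
import numpy as np

def interaction_matrix(query_vecs: np.ndarray, doc_vecs: np.ndarray, kind: str = "gaussian") -> np.ndarray:
    """Interaction matrix between query words (len_q, dim) and document words (len_d, dim)."""
    if kind == "gaussian":
        # exp(-||q_i - d_j||^2) equals 1 only for identical vectors, so exact matches
        # stand out from merely similar (semantic) matches.
        diff = query_vecs[:, None, :] - doc_vecs[None, :, :]
        return np.exp(-np.sum(diff ** 2, axis=-1))
    if kind == "cosine":
        q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
        d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
        return q @ d.T
    if kind == "indicator":
        # 1 where the word vectors are identical (i.e. the same word), 0 otherwise.
        return np.all(query_vecs[:, None, :] == doc_vecs[None, :, :], axis=-1).astype(float)
    raise ValueError(f"unknown interaction kind: {kind}")

# Random embeddings standing in for trained word vectors.
rng = np.random.default_rng(0)
q, d = rng.normal(size=(3, 50)), rng.normal(size=(20, 50))
print(interaction_matrix(q, d).shape)  # (3, 20), fed to the convolutional layers
```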
1606.04648#18
A Study of MatchPyramid Models on Ad-hoc Retrieval
Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models.
http://arxiv.org/pdf/1606.04648
Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng
cs.IR
Neu-IR '16 SIGIR Workshop on Neural Information Retrieval
null
cs.IR
20160615
20160615
[]
1606.04671
18
Figure 5: (a) Transfer analysis for 2-column nets on Pong variants. The relative sensitivity of the network’s outputs on the columns within each layer (the AFS) is indicated by the darkness of shading. (b) AFS values for the 8 feature maps of conv. 1 of a 1-column Pong net. Only one feature map is effectively used by the net; the same map is also used by the 2-column versions. Below: spatial filter components (red = positive, blue = negative). (c) Activation maps of the filter in (b) from example states of the four games.
1606.04671#18
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04648
19
# 5. REFERENCES

[1] R. Caruana, S. Lawrence, and L. Giles. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Advances in Neural Information Processing Systems 13: Proceedings of the 2000 Conference, volume 13, page 402. MIT Press, 2001.

[2] B. Hu, Z. Lu, H. Li, and Q. Chen. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pages 2042–2050, 2014.

[3] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2333–2338. ACM, 2013.

[4] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[5] Z. Lu and H. Li. A deep architecture for matching short texts. In Advances in Neural Information Processing Systems, pages 1367–1375, 2013.

[6] L. Pang, Y. Lan, J. Guo, J. Xu, S. Wan, and X. Cheng. Text matching as image recognition. CoRR, abs/1602.06359, 2016.
1606.04648#19
A Study of MatchPyramid Models on Ad-hoc Retrieval
Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models.
http://arxiv.org/pdf/1606.04648
Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng
cs.IR
Neu-IR '16 SIGIR Workshop on Neural Information Retrieval
null
cs.IR
20160615
20160615
[]
1606.04671
19
We use the metric derived in Sec. 3 to analyse what features are being transferred between Pong variants. We see that when switching from Pong to H-Flip, the network reuses the same components of low and mid-level vision (the outputs of the two convolutional layers; Figure 5a). However, the fully connected layer must be largely re-learned, as the policy-relevant features of the task (the relative locations/velocities of the paddle and ball) are now in a new location. When switching from Pong to Zoom, on the other hand, low-level vision is reused for the new task, but new mid-level vision features are learned. Interestingly, only one low-level feature appears to be reused (see Fig. 5b): this is a spatio-temporal filter with a considerable temporal DC component. This appears sufficient for detecting both ball motion and paddle position in the original, flipped, and zoomed Pongs.
1606.04671#19
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04648
20
X. Cheng. Text matching as image recognition. CoRR, abs/1602.06359, 2016.

[7] X. Qiu and X. Huang. Convolutional neural tensor network architecture for community-based question answering. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pages 1305–1311, 2015.

[8] S. E. Robertson and S. Walker. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. In Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval, pages 232–241. Springer-Verlag New York, Inc., 1994.

[9] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 101–110. ACM, 2014.

[10] R. Socher, E. H. Huang, J. Pennin, C. D. Manning, and A. Y. Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801–809, 2011.
1606.04648#20
A Study of MatchPyramid Models on Ad-hoc Retrieval
Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models.
http://arxiv.org/pdf/1606.04648
Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng
cs.IR
Neu-IR '16 SIGIR Workshop on Neural Information Retrieval
null
cs.IR
20160615
20160615
[]
1606.04671
20
Finally, when switching from Pong to Noisy, some new low-level vision is relearned. This is likely because the first layer filter learned on the clean task is not sufficiently tolerant to the added noise. In contrast, this problem does not apply when moving from Noisy to Pong (Figure 5a, rightmost column), where all of vision transfers to the new task.

# 5.3 Atari Games

We next investigate feature transfer between randomly selected Atari games [3]. This is an interesting question, because the visuals of Atari games are quite different from each other, as are the controls and required strategy. Though games like Pong and Breakout are conceptually similar (both involve hitting a ball with a paddle), Pong is vertically aligned while Breakout is horizontal: a potentially insurmountable feature-level difference. Other Atari game pairs have no discernible overlap, even at a conceptual level. To this end we start by training single columns on three source games (Pong, River Raid, and Seaquest)³ and assess if the learned features transfer to a different subset of randomly selected target games (Alien, Asterix, Boxing, Centipede, Gopher, Hero, James Bond, Krull, Robotank, Road Runner, Star Gunner, and Wizard of Wor). We evaluate progressive networks with 2, 3 and 4 columns,
1606.04671#20
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
21
³Progressive columns having more than one "source" column are trained sequentially on these source games, i.e. Seaquest-River Raid-Pong means column 1 is first trained on Seaquest, column 2 is added afterwards and trained on River Raid, and then column 3 added and trained on Pong.

[Figure 6, panels (a)-(b): transfer score matrix for Baselines 1-4, Random, and 2/3/4-column progressive nets over the source combinations of pong, riverraid, and seaquest, plus example learning curves for the targets Boxing and Gopher.]

Figure 6: Transfer scores and example learning curves for Atari target games, as per Figure 4.

|  | Pong Soup Mean (%) | Pong Soup Median (%) | Atari Mean (%) | Atari Median (%) | Labyrinth Mean (%) | Labyrinth Median (%) |
|---|---|---|---|---|---|---|
| Baseline 1 | 100 | 100 | 100 | 100 | 100 | 100 |
| Baseline 2 | 35 | 7 | 41 | 21 | 88 | 85 |
| Baseline 3 | 181 | 160 | 133 | 110 | 235 | 112 |
| Baseline 4 | 134 | 131 | 96 | 95 | 185 | 108 |
| Progressive 2 col | 209 | 169 | 132 | 112 | 491 | 115 |
| Progressive 3 col | 222 | 183 | 140 | 111 | — | — |
| Progressive 4 col | — | — | 141 | 116 | — | — |
1606.04671#21
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
22
Table 1: Transfer percentages in three domains. Baselines are defined in Fig. 3.

comparing to the baselines of Figure 3). The transfer matrix and selected transfer curves are shown in Figure 6, and the results summarized in Table 1. Across all games, we observe from Fig. 6 that progressive nets result in positive transfer in 8 out of 12 target tasks, with only two cases of negative transfer. This compares favourably to baseline 3, which yields positive transfer in only 5 of 12 games. This trend is reflected in Table 1, where progressive networks convincingly outperform baseline 3 when using additional columns. This is especially promising as we show in the Appendix that progressive networks use a diminishing amount of capacity with each added column, pointing to a clear path towards online compression or pruning as a means to mitigate the growth in model size. Now consider the specific sequence Seaquest-to-Gopher, an example of two dissimilar games. Here, the pretrain/finetune paradigm (baseline 3) exhibits negative transfer, unlike progressive networks (see Fig. 6b, bottom), perhaps because they are more able to ignore the irrelevant features. For the sequence Seaquest[+River Raid][+Pong]-to-Boxing, using additional columns in the progressive networks can yield a significant increase in transfer (see Fig. 6b, top).
1606.04671#22
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
23
# Detailed Analysis

Figure 6 demonstrates that both positive and negative transfer are possible with progressive nets. To differentiate these cases, we consider the Average Fisher Sensitivity for the 3 column case (e.g., see Fig. 7a). A clear pattern emerges amongst these and other examples: the most negative transfer coincides with complete dependence on the convolutional layers of the previous columns, and no learning of new visual features in the new column. In contrast, the most positive transfer occurs when the features of the first two columns are augmented by new features. The statistics across all 3-column nets (Figure 7b) show that positive transfer in Atari occurs at a "sweet spot" between heavy reliance on features from the source task, and heavy reliance on all new features for the target task. At first glance, this result appears unintuitive: if a progressive net finds a valuable feature set from a source task, shouldn't we expect a high degree of transfer? We offer two hypotheses. First, this may simply reflect an optimization difficulty, where the source features offer fast convergence to a poor local minimum. This is a known challenge in transfer learning [20]: learned source tasks confer an inductive bias that can either help or hinder in different cases. Second, this may reflect a problem of
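The Average Fisher Sensitivity used here is defined in Section 3 of the paper, which is not included in these excerpts. The sketch below shows one plausible way a per-column, diagonal-Fisher sensitivity of this kind could be estimated at a single layer; the toy two-column layer and policy head are stand-ins, and the paper's additional details (normalized activations, averaging over many on-policy states) are only noted in comments.

```python
import torch

def per_column_fisher_sensitivity(log_prob: torch.Tensor, column_activations) -> torch.Tensor:
    """Fraction of a diagonal Fisher estimate attributable to each column at one layer.

    log_prob: log pi(a|s) for an action sampled from the current policy.
    column_activations: per-column activation tensors (requires_grad=True) at the layer
    of interest. The paper takes the Fisher w.r.t. normalized activations and averages
    over on-policy states; a single state is used here for brevity.
    """
    grads = torch.autograd.grad(log_prob.sum(), column_activations)
    fisher = torch.stack([(g ** 2).mean() for g in grads])  # diagonal Fisher per column
    return fisher / fisher.sum()                             # shares sum to 1 across columns

# Toy stand-in: a 2-column layer feeding a small linear policy head.
h1 = torch.randn(1, 8, requires_grad=True)   # activations from the frozen source column
h2 = torch.randn(1, 8, requires_grad=True)   # activations from the new target column
logits = torch.cat([h1, h2], dim=1) @ torch.randn(16, 4)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()
print(per_column_fisher_sensitivity(dist.log_prob(action), [h1, h2]))
```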
1606.04671#23
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
24
Figure 7: (a) AFS scores for 3-column nets with lowest (left) and highest (right) transfer scores on the 12 target Atari games. (b) Transfer statistics across 72 three-column nets, as a function of the mean AFS across the three convolutional layers of the new column (i.e. how much new vision is learned).

exploration, where the transferred representation is "good enough" for a functional, but sub-optimal policy.

# 5.4 Labyrinth
1606.04671#24
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
25
exploration, where the transferred representation is "good enough" for a functional, but sub-optimal policy.

# 5.4 Labyrinth

The final experimental setting for progressive networks is Labyrinth, a 3D maze environment where the inputs are rendered images granting partial observability and the agent outputs discrete actions, including looking up, down, left, or right and moving forward, backwards, left, or right. The tasks as well as the level maps are diverse and involve getting positive scores for 'eating' good items (apples, strawberries) and negative scores for eating bad items (mushrooms, lemons). Details can be found in the appendix. While there is conceptual and visual overlap between the different tasks, the tasks present a challenging set of diverse game elements (Figure 2).

[Figure 8, panels (a)-(b): transfer score grid over the Labyrinth levels (Track 1, Track 2, Track 4, Avoid 1, Avoid 2, Maze Y) and learning curves for Track 1 to Track 2 and Avoid 1 to Avoid 2.]

Figure 8: Transfer scores and example learning curves for Labyrinth tasks. Colours indicate transfer (clipped at 2). The learning curves show two examples of two-column progressive performance vs. baselines 1 and 3.
1606.04671#25
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
26
As in the other domains, the progressive approach yields more positive transfer than any of the baselines (see Fig. 8a and Table 1). We observe less transfer on the Seek Track levels, which have dense reward items throughout the maze and are easily learned. Note that even for these easy cases, baseline 2 shows negative transfer because it cannot learn new low-level visual features, which are important because the reward items change from task to task. The learning curves in Fig. 8b exemplify the typical results seen in this domain: on simpler games, such as Track 1 and 2, learning is rapid and stable for all agents. On more difficult games, with more complex game structure, the baselines struggle and progressive nets have an advantage.

# 6 Conclusion
1606.04671#26
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
27
# 6 Conclusion

Continual learning, the ability to accumulate and transfer knowledge to new domains, is a core characteristic of intelligent beings. Progressive neural networks are a stepping stone towards continual learning, and this work has demonstrated their potential through experiments and analysis across three RL domains, including Atari, which contains orthogonal or even adversarial tasks. We believe that we are the first to show positive transfer in deep RL agents within a continual learning framework. Moreover, we have shown that the progressive approach is able to effectively exploit transfer for compatible source and task domains; that the approach is robust to harmful features learned in incompatible tasks; and that positive transfer increases with the number of columns, thus corroborating the constructive, rather than destructive, nature of the progressive architecture.

# References

[1] Forest Agostinelli, Michael R Anderson, and Honglak Lee. Adaptive multi-column deep neural networks with application to robust image denoising. In Advances in Neural Information Processing Systems, 2013.

[2] Shun-ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 1998.

[3] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research (JAIR), 47:253–279, 2013.
1606.04671#27
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
28
[4] Yoshua Bengio. Deep learning of representations for unsupervised and transfer learning. In JMLR: Workshop on Unsupervised and Transfer Learning, 2012.

[5] Dan C. Ciresan, Ueli Meier, and Jürgen Schmidhuber. Multi-column deep neural networks for image classification. In Conf. on Computer Vision and Pattern Recognition, 2012.

[6] Scott E. Fahlman and Christian Lebiere. The cascade-correlation learning architecture. In Advances in Neural Information Processing Systems, 1990.

[7] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, July 2006.

[8] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015.

[9] Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, 1990.

[10] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In Proc. of Int'l Conference on Learning Representations (ICLR), 2013.
1606.04671#28
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
29
[10] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In Proc. of Int'l Conference on Learning Representations (ICLR), 2013.

[11] G. Mesnil, Y. Dauphin, X. Glorot, S. Rifai, Y. Bengio, I. Goodfellow, E. Lavoie, X. Muller, G. Desjardins, D. Warde-Farley, P. Vincent, A. Courville, and J. Bergstra. Unsupervised and transfer learning challenge: a deep learning approach. In JMLR W&CP: Proc. of the Unsupervised and Transfer Learning challenge and workshop, volume 27, 2012.

[12] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
1606.04671#29
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
30
[13] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Int'l Conf. on Machine Learning (ICML), 2016.

[14] Emilio Parisotto, Lei Jimmy Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. In Proc. of Int'l Conference on Learning Representations (ICLR), 2016.

[15] Mark B. Ring. Continual Learning in Reinforcement Environments. R. Oldenbourg Verlag, 1995.

[16] Artem Rozantsev, Mathieu Salzmann, and Pascal Fua. Beyond sharing weights for deep domain adaptation. CoRR, abs/1603.06432, 2016.

[17] A. Rusu, S. Colmenarejo, Ç. Gülçehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, and R. Hadsell. Policy distillation. CoRR, abs/1511.06295, 2016.
1606.04671#30
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
31
[18] Paul Ruvolo and Eric Eaton. ELLA: An efficient lifelong learning algorithm. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), June 2013.

[19] Daniel L. Silver, Qiang Yang, and Lianghao Li. Lifelong machine learning systems: Beyond learning algorithms. In AAAI Spring Symposium: Lifelong Machine Learning, 2013.

[20] Matthew E. Taylor and Peter Stone. An introduction to inter-task transfer for reinforcement learning. AI Magazine, 32(1):15–34, 2011.

[21] Alexander V. Terekhov, Guglielmo Montone, and J. Kevin O'Regan. Knowledge Transfer in Deep Block-Modular Neural Networks, pages 268–279. Springer International Publishing, Cham, 2015.

[22] C. Tessler, S. Givony, T. Zahavy, D. J. Mankowitz, and S. Mannor. A Deep Hierarchical Approach to Lifelong Learning in Minecraft. ArXiv e-prints, 2016.

[23] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320–3328, 2014.
1606.04671#31
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
32
[24] Guanyu Zhou, Kihyuk Sohn, and Honglak Lee. Online incremental feature learning with denoising autoencoders. In Proc. of Int'l Conf. on Artificial Intelligence and Statistics (AISTATS), pages 1453–1461, 2012.

# Supplementary Material

# A Perturbation Analysis

We explored two related methods for analysing transfer in progressive networks. One based on Fisher information yields the Average Fisher Sensitivity (AFS) and is described in Section 3 of the paper. We describe the second method based on perturbation analysis in this appendix, as it proved too slow to use at scale. Given its intuitive appeal however, we provide details of the method along with results on Pong Variants (see Section 5.2), as a means to corroborate the AFS score.
1606.04671#32
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
33
Our perturbation analysis aims to estimate which components of the source columns materially contribute to the performance of the final column on the target tasks. To this end, we injected Gaussian noise into each of the (post-ReLU) hidden representations, with a new sample on every forward pass, and calculated the average effect of these perturbations on the game score over 10 episodes. We did this at a coarse scale, by adding noise across all features of a given layer, though a fine scale analysis is also possible per feature (map). In order to be invariant to any arbitrary scale factors in the network weights, we scale the noise variance proportional to the variance of the activations in each feature map and fully-connected neuron. Scaling the variance in this manner is analogous to computing the Fisher w.r.t. normalized activations for the AFS score.

[Figure 9, panels (a)-(c): per-column score sensitivity to noise injected at conv2 (score vs. log noise magnitude), and per-layer APS vs. AFS maps for pong h-flip, pong zoom, pong noisy, and noisy pong; shading from insensitive to sensitive.]
1606.04671#33
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
34
Figure 9: (a) Perturbation analysis for the two second-layer convolutional representations in the two columns of the Pong/Pong-noise net. Blue: adding noise to second convolutional layer from column 1; green: from column 2. Grey line determines critical noise magnitude for each representation, $\sigma_i^2$. (b-c) Comparison of per-layer sensitivities obtained using the APS method (b) and the AFS method (c; as per main text). These are highly similar.

Define $\Lambda_i^{(k)} = 1/\sigma_i^{2\,(k)}$ as the precision of the noise injected at layer $i$ of column $k$ which results in a 50% drop in performance. The Average Perturbation Sensitivity (APS) for this layer is simply:

$$\mathrm{APS}(i, k) = \frac{\Lambda_i^{(k)}}{\sum_{k'} \Lambda_i^{(k')}} \qquad (3)$$

Note that this value is normalized across columns for a given layer. The APS score can thus be interpreted as the responsibility of each column in a given layer to final performance. The APS score of 2-column progressive networks trained on Pong Variants is shown in Fig. 9(b). These clearly corroborate the AFS shown in (c).

# B Compressibility of Progressive Networks
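Equation (3) is compact, so the sketch below spells out one way it could be computed for a single layer. The evaluate_score callable and the synthetic score-decay curve are hypothetical placeholders for an episode-evaluation routine with activation-scaled noise injected at one layer of one column; nothing here is the authors' code.

```python
import math
import numpy as np

def critical_precision(evaluate_score, clean_score, noise_sigmas):
    """Return Lambda = 1/sigma^2 for the smallest noise std that halves the clean score.

    evaluate_score(sigma) should run episodes with Gaussian noise of std `sigma`
    (scaled to the activation variance) injected at one layer of one column.
    """
    for sigma in sorted(noise_sigmas):
        if evaluate_score(sigma) <= 0.5 * clean_score:
            return 1.0 / sigma ** 2
    return 1.0 / max(noise_sigmas) ** 2  # score never dropped below 50% in the sweep

def average_perturbation_sensitivity(lambdas):
    """APS(i, k) = Lambda_i^(k) / sum_k' Lambda_i^(k'), i.e. Eq. (3)."""
    lambdas = np.asarray(lambdas, dtype=float)
    return lambdas / lambdas.sum()

# Synthetic demonstration: pretend performance decays as clean * exp(-sigma).
clean = 20.0
lam = critical_precision(lambda s: clean * math.exp(-s), clean, noise_sigmas=[0.1, 0.3, 1.0, 3.0])
print(average_perturbation_sensitivity([lam, 0.5 * lam]))  # two columns at one layer
```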
1606.04671#34
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
35
# B Compressibility of Progressive Networks

As described in the main text, one of the limitations of progressive networks is the growth in the size of the network with added tasks. In the basic approach we pursue in the main text, the number of hidden units and feature maps grows linearly with the number of columns, and the number of parameters grows quadratically. Here, we sought to determine the degree to which this full capacity is actually used by the network. We leveraged the Average Fisher Sensitivity measure to study how increasing the number of columns in the Atari task set changes the need for additional resources. In Figure 10a, we measure the average fractional use of existing feature maps in a given layer (here, layer 2). We do this for each network by concatenating the per-feature-map AFS values from all source columns in this layer, sorting the values to produce a spectrum, and then averaging across networks. We find that as the number of columns increases, the average spectrum becomes sparser: the network relies on a smaller proportion of features from the source columns. Similar results were found for all layers.
1606.04671#35
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
36
Similarly, in Figure 10b, we measure the capacity required in the final added column as a function of the total number of columns. Again, we measure the spectrum of AFS values in an example layer, but here from only the final column. As the progressive network grows, the new column's features are both less important overall (indicated by the declining area under the graph), and have a sparser AFS spectrum. Combined, these results suggest that significant pruning of lateral connections is possible, and the quadratic growth of parameters might be contained.

[Figure 10, panels (a) "layer 2: source columns" and (b) "layer 2: final column": AFS spectra plotted against map index for nets with different numbers of columns.]
1606.04671#36
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
37
Figure 10: (a) Spectra of AFS values (for layer 2) across all feature maps from source columns, for the Atari dataset. The spectra show the range of AFS values, and are averaged across networks. While the 2 column / 3 column / 4 column nets all have different values of Nmaps (here, 12, 24, and 36 respectively), these have been dilated to fit the same axis to show the proportional use of these maps. (b) Spectra of AFS values (for layer 2) for the feature maps from only the final column.

# C Setup Details

In our grid we sample hyper-parameters from categorical distributions (a sampling sketch follows after the list):

- Learning rate was sampled from {10⁻³, 5·10⁻⁴, 10⁻⁴}.
- Strength of the entropy regularization from {10⁻², 10⁻³, 10⁻⁴}.
- Gradient clipping cut-off from {20, 40}.
- Scalar multiplier on the lateral features, initialized randomly to one value from {1, 10⁻¹, 10⁻²}.
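Below is a minimal sketch of drawing one configuration from the grid above. Uniform sampling over the listed values and the dictionary key names are assumptions; the text only states that hyper-parameters are sampled from categorical distributions over these sets.

```python
import random

HYPERPARAMETER_GRID = {
    "learning_rate": [1e-3, 5e-4, 1e-4],
    "entropy_cost": [1e-2, 1e-3, 1e-4],       # strength of the entropy regularization
    "gradient_clip": [20, 40],                # gradient clipping cut-off
    "lateral_scale_init": [1.0, 1e-1, 1e-2],  # scalar multiplier on the lateral features
}

def sample_config(rng: random.Random) -> dict:
    """Draw one value per categorical hyper-parameter."""
    return {name: rng.choice(values) for name, values in HYPERPARAMETER_GRID.items()}

print(sample_config(random.Random(0)))
```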
1606.04671#37
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
38
For the Atari experiments we used a model with 3 convolutional layers followed by a fully connected layer, from which we predict the policy and value function. The convolutional layers are as follows. All have 12 feature maps. The first convolutional layer has a kernel of size 8x8 and a stride of 4x4. The second layer has a kernel of size 4 and a stride of 2. The last convolutional layer has a kernel of size 3x4 with a stride of 1. The fully connected layer has 256 hidden units. Learning follows closely the paradigm described in [13]. We use 16 workers and the same RMSProp algorithm without momentum or centring of the variance. The score for each point of a training curve is the average over all the episodes the model finishes within 25e4 environment steps. Each experiment is run for a maximum of 1.6e8 environment steps. The agent has an action repeat of 4 as in [13], which means that for 4 consecutive steps the agent will use the same action picked at the beginning of the series. For this reason, throughout the paper we report results in terms of agent-perceived steps rather than environment steps. That is, the maximal number of agent-perceived steps for any particular run is 4e7.

# D Learning curves
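Read literally, the architecture description above corresponds to a single column along the lines of the following sketch, written in PyTorch for concreteness. The framework, the 84x84 four-frame input, and the 3x3 final kernel (the text says "3x4") are assumptions, so this is an illustration rather than the authors' model.

```python
import torch
import torch.nn as nn

class AtariColumn(nn.Module):
    """One column: three 12-map conv layers, an FC-256 layer, and policy/value heads."""

    def __init__(self, num_actions: int, in_channels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 12, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(12, 12, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(12, 12, kernel_size=3, stride=1), nn.ReLU(),  # final kernel size assumed 3x3
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened size for an assumed 84x84 input
            n_flat = self.features(torch.zeros(1, in_channels, 84, 84)).shape[1]
        self.fc = nn.Sequential(nn.Linear(n_flat, 256), nn.ReLU())
        self.policy = nn.Linear(256, num_actions)  # actor head (logits)
        self.value = nn.Linear(256, 1)             # critic head

    def forward(self, x):
        h = self.fc(self.features(x))
        return self.policy(h), self.value(h)

# Example: an 18-action game, batch of two stacked-frame observations.
net = AtariColumn(num_actions=18)
logits, value = net(torch.zeros(2, 4, 84, 84))
print(logits.shape, value.shape)  # torch.Size([2, 18]) torch.Size([2, 1])
```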
1606.04671#38
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
39
# D Learning curves
Figure 11 shows training curves for all the target games in the Atari domain. We plot learning curves for two-column, three-column and four-column progressive networks alongside Baseline 3 (gray dashed line), a model pretrained on Seaquest and then finetuned on the particular target game, and Baseline 1 (gray dotted line), where a single column is trained on the source game Seaquest. We can see that overall baseline 3 performs well. However, there are situations where having features learned from more previous tasks actually helps with transfer (e.g. when the target game is Boxing).
1606.04671#39
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
40
[Figure 11 panels (plot residue removed): training curves for the target games Alien, Asterix, Boxing, Hero, James Bond, Krull, Road Runner, Star Gunner and Wizard of Wor; legend: Baseline 1, Baseline 2, Prog. 2 col (Seaquest), Prog. 3 col (Seaquest + River Raid), Prog. 4 col (Seaquest + River Raid + Pong).]
Figure 11: Training curves for transferring to the target games after seeing first Seaquest followed by River Raid and lastly Pong. For the baselines, the source game used for pretraining is Seaquest.
1606.04671#40
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
41
Figure 12 shows how two-column progressive networks perform as compared to Baseline 3 (gray dashed line), a model pretrained on the source game, here standard Pong, and then finetuned on a particular target game, and Baseline 1 (black dotted line), where a single column is trained on standard Pong. Figure 13 shows two-column progressive networks and baselines on Labyrinth tasks; the source game was Maze Y. # E Labyrinth Section 5.4 evaluates progressive networks on foraging tasks in complex 3D maze environments. Positive rewards are given to the agent for collecting apples and strawberries, and negative rewards for mushrooms and lemons. Episodes terminate when either all (positive) rewards are collected, or after a fixed time interval. Levels differ in their maze layout, the type of items present and the sparsity of the reward structure. The levels we employed can be characterized as follows:
1606.04671#41
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
42
Levels differ in their maze layout, the type of items present and the sparsity of the reward structure. The levels we employed can be characterized as follows:
• Seek Track 1: simple corridor with many apples
• Seek Track 2: U-shaped corridor with many strawberries
• Seek Track 3: Ω-shaped, with 90° turns, with few apples
• Seek Track 4: Ω-shaped, with 45° turns, with few apples
• Seek Avoid 1: large square room with apples and lemons
• Seek Avoid 2: large square room with apples and mushrooms
• Seek Maze M: M-shaped maze, with apples at dead-ends
• Seek Maze Y: Y-shaped maze, with apples at dead-ends
1606.04671#42
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04671
43
[Figure 12 panels (plot residue removed): training curves for the Pong variants Pong, Black, Noisy, V-flip, H-flip, HV-flip, White and Zoom; legend: Baseline 1, Baseline 3, Prog. 2 col.]
Figure 12: Training curves for transferring to 8 target games after learning standard Pong first.
[Figure 13 panels (plot residue removed): training curves for the Labyrinth levels Track 1-4, Avoid 1-2, Maze Y and Maze M; legend: Baseline 1, Baseline 3, Prog. 2 col.]
Figure 13: Training curves for transferring to 8 target games after learning Maze Y first.
1606.04671#43
Progressive Neural Networks
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
http://arxiv.org/pdf/1606.04671
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell
cs.LG
null
null
cs.LG
20160615
20221022
[]
1606.04460
0
# Model-Free Episodic Control Charles Blundell Google DeepMind [email protected] Benigno Uria Google DeepMind [email protected] Alexander Pritzel Google DeepMind [email protected] Yazhe Li Google DeepMind [email protected] Avraham Ruderman Google DeepMind [email protected] Joel Z Leibo Google DeepMind [email protected] Jack Rae Google DeepMind [email protected] Daan Wierstra Google DeepMind [email protected] Demis Hassabis Google DeepMind [email protected] # Abstract
1606.04460#0
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
0
# Learning to learn by gradient descent by gradient descent
# Marcin Andrychowicz1, Misha Denil1, Sergio Gómez Colmenarejo1, Matthew W. Hoffman1, David Pfau1, Tom Schaul1, Brendan Shillingford1,2, Nando de Freitas1,2,3
# 1Google DeepMind # 2University of Oxford # 3Canadian Institute for Advanced Research
[email protected] {mdenil,sergomez,mwhoffman,pfau,schaul}@google.com [email protected], [email protected]
# Abstract
1606.04474#0
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
1
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT’14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we
1606.04199#1
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
1
Daan Wierstra Google DeepMind [email protected] Demis Hassabis Google DeepMind [email protected] # Abstract State of the art deep reinforcement learning algorithms take many millions of inter- actions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision- making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains. # 1 Introduction
1606.04460#1
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
1
# Abstract The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art. # Introduction Frequently, tasks in machine learning can be expressed as the problem of optimizing an objective function f (θ) defined over some domain θ ∈ Θ. The goal in this case is to find the minimizer θ∗ = arg minθ∈Θ f (θ). While any method capable of minimizing this objective function can be applied, the standard approach for differentiable functions is some form of gradient descent, resulting in a sequence of updates θt+1 = θt − αt∇f (θt) .
1606.04474#1
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04460
2
# 1 Introduction Deep reinforcement learning has recently achieved notable successes in a variety of domains [23, 32]. However, it is very data inefficient. For example, in the domain of Atari games [2], deep Reinforcement Learning (RL) systems typically require tens of millions of interactions with the game emulator, amounting to hundreds of hours of game play, to achieve human-level performance. As pointed out by [13], humans learn to play these games much faster. This paper addresses the question of how to emulate such fast learning abilities in a machine—without any domain-specific prior knowledge. Current deep RL algorithms may happen upon, or be shown, highly rewarding sequences of actions. Unfortunately, due to their slow gradient-based updates of underlying policy or value functions, these algorithms require large numbers of steps to assimilate such information and translate it into policy improvement. Thus these algorithms lack the ability to rapidly latch onto successful strategies. Episodic control, introduced by [16], is a complementary approach that can rapidly re-enact observed, successful policies. Episodic control records highly rewarding experiences and follows a policy that replays sequences of actions that previously yielded high returns.
1606.04460#2
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
2
θt+1 = θt − αt∇f (θt) . The performance of vanilla gradient descent, however, is hampered by the fact that it only makes use of gradients and ignores second-order information. Classical optimization techniques correct this behavior by rescaling the gradient step using curvature information, typically via the Hessian matrix of second-order partial derivatives—although other choices such as the generalized Gauss-Newton matrix or Fisher information matrix are possible.
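To make the contrast concrete (this example is not from the paper), the snippet below compares a plain gradient step with a curvature-rescaled (Newton) step on a toy quadratic; the matrix, step size, and starting point are made up.

```python
# Illustrative sketch: on a quadratic f(x) = 0.5 x^T A x, a plain gradient step
# ignores curvature, while rescaling by the Hessian (here A itself) jumps
# straight to the minimiser at the origin.
import numpy as np

A = np.diag([1.0, 100.0])          # badly conditioned quadratic
grad = lambda x: A @ x

x = np.array([1.0, 1.0])
gd_step = x - 0.01 * grad(x)                    # vanilla gradient descent step
newton_step = x - np.linalg.solve(A, grad(x))   # curvature-rescaled (Newton) step

print(gd_step, newton_step)   # the Newton step lands at the optimum [0, 0]
```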
1606.04474#2
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
3
# Introduction Neural machine translation (NMT) has attracted a lot of interest in solving the machine translation (MT) problem in recent years (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). Unlike conventional statistical machine translation (SMT) systems (Koehn et al., 2003; Durrani et al., 2014) which consist of multiple separately tuned components, NMT models encode the source sequence into continuous representation space and generate the target sequence in an end-to-end fashion. Moreover, NMT models can also be easily adapted to other tasks such as dialog systems (Vinyals and Le, 2015), question answering systems (Yu et al., 2015) and image caption generation (Mao et al., 2015).
1606.04199#3
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
3
In the brain, this form of very fast learning is critically supported by the hippocampus and related medial temporal lobe structures [1, 34]. For example, a rat’s performance on a task requiring navigation to a hidden platform is impaired by lesions to these structures [24, 36]. Hippocampal learning is thought to be instance-based [18, 35], in contrast to the cortical system which represents generalised statistical summaries of the input distribution [20, 27, 41]. The hippocampal system may be used to guide sequential decision-making by co-representing environment states with the returns achieved from the various possible actions. After such encoding, at a given probe state, the return associated to each possible action could be retrieved by pattern completion in the CA3 subregion [9, 21, 26, 40]. The final value achieved by a sequence of actions could quickly become associated with each of its component state-action pairs by the reverse-ordered replay of hippocampal place cell activations that occurs after a rewarding event [7].
1606.04460#3
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
3
Much of the modern work in optimization is based around designing update rules tailored to specific classes of problems, with the types of problems of interest differing between different research communities. For example, in the deep learning community we have seen a proliferation of optimization methods specialized for high-dimensional, non-convex optimization problems. These include momentum [Nesterov, 1983, Tseng, 1998], Rprop [Riedmiller and Braun, 1993], Adagrad [Duchi et al., 2011], RMSprop [Tieleman and Hinton, 2012], and ADAM [Kingma and Ba, 2015]. More focused methods can also be applied when more structure of the optimization problem is known [Martens and Grosse, 2015]. In contrast, communities who focus on sparsity tend to favor very different approaches [Donoho, 2006, Bach et al., 2012]. This is even more the case for combinatorial optimization for which relaxations are often the norm [Nemhauser and Wolsey, 1988].
1606.04474#3
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
4
In general, there are two types of NMT topologies: the encoder-decoder network (Sutskever et al., 2014) and the attention network (Bahdanau et al., 2015). The encoder-decoder network represents the source sequence with a fixed dimensional vector and the target sequence is generated from this vector word by word. The attention network uses the representations from all time steps of the input sequence to build a detailed relationship between the target words and the input words. Recent results show that the systems based on these models can achieve similar performance to conventional SMT systems (Luong et al., 2015; Jean et al., 2015). However, a single neural model of either of the above types has not been competitive with the best conventional system (Durrani et al., 2014) when evaluated on the WMT’14 English-to-French task. The best BLEU score from a single model with six layers is only 31.5 (Luong et al., 2015) while the conventional method of (Durrani et al., 2014) achieves 37.0.
1606.04199#4
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
4
Humans and animals utilise multiple learning, memory, and decision systems each best suited to different settings [5, 33]. For example, when an accurate model of the environment is available, and there are sufficient time and working memory resources, the best strategy is model-based planning associated with prefrontal cortex [5]. However, when there is no time or no resources available for planning, the less compute-intensive immediate decision systems must be employed [29]. This presents a problem early on in the learning of a new environment as the model-free decision system will be even less accurate in this case since it has not yet had enough repeated experience to learn an accurate value function. In contrast, this is the situation where model-free episodic control may be most useful. Thus the argument for hippocampal involvement in model-free control parallels the argument for its involvement in model-based control. In both cases quick-to-learn instance-based control policies serve as a rough approximation while a slower more generalisable decision system is trained up [16].
1606.04460#4
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
4
This industry of optimizer design allows different communities to create optimization methods which exploit structure in their problems of interest at the expense of potentially poor performance on problems outside of that scope. Moreover the No Free Lunch Theorems for Optimization [Wolpert and Macready, 1997] show that in the setting of combinatorial optimization, no algorithm is able to do better than a random strategy in expectation. This suggests that specialization to a subclass of problems is in fact the only way that improved performance can be achieved in general. In this work we take a different tack and instead propose to replace hand-designed update rules with a learned update rule, which we call the optimizer g, specified by its own set of parameters φ. This results in updates to the optimizee f of the form given below.
Figure 1: The optimizer (left) is provided with performance of the optimizee (right) and proposes updates to increase the optimizee’s performance. [photos: Bobolas, 2009, Maley, 2011]
1606.04474#4
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
5
We focus on improving the single model performance by increasing the model depth. Deep topology has been proven to outperform the shallow architecture in computer vision. In the past two years the top positions of the ImageNet contest have always been occupied by systems with tens or even hundreds of layers (Szegedy et al., 2015; He et al., 2016). But in NMT, the biggest depth used successfully is only six (Luong et al., 2015). We attribute this problem to the properties of the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) which is widely used in NMT. In the LSTM, there are more non-linear activations than in convolution layers. These activations significantly decrease the magnitude of the gradient in the deep topology, especially when the gradient propagates in recurrent form. There are also many efforts to increase the depth of the LSTM such as the work by Kalchbrenner et al. (2016), where the shortcuts do not avoid the nonlinear and recurrent computation.
1606.04199#5
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
5
The domain of applicability of episodic control may be hopelessly limited by the complexity of the world. In real environments the same exact situation is rarely, if ever, revisited. In RL terms, repeated visits to exactly the same state are also extremely rare. Here we show that the commonly used Atari environments do not have this property. In fact, we show that the agents developed in this work re-encounter exactly the same Atari states between 10-60% of the time. As expected, the episodic controller works well in such a setting. The key test for this approach is whether it can also work in more realistic environments where states are never repeated and generalisation over similar states is essential. Critically, we also show that our episodic control model still performs well in such (3D) environments where the same state is essentially never re-visited. # 2 The episodic controller
1606.04460#5
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
5
θt+1 = θt + gt(∇f (θt), φ) . A high level view of this process is shown in Figure 1. In what follows we will explicitly model the update rule g using a recurrent neural network (RNN) which maintains its own state and hence dynamically updates as a function of its iterates. # 1.1 Transfer learning and generalization The goal of this work is to develop a procedure for constructing a learning algorithm which performs well on a particular class of optimization problems. Casting algorithm design as a learning problem allows us to specify the class of problems we are interested in through example problem instances. This is in contrast to the ordinary approach of characterizing properties of interesting problems analytically and using these analytical insights to design learning algorithms by hand.
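A hedged sketch of this update rule follows: an LSTM with its own state maps the optimizee's gradient to a proposed update. The coordinate-wise treatment, hidden size, and toy quadratic optimizee are illustrative assumptions, and training of the optimizer's parameters φ is omitted.

```python
# Sketch of the update theta_{t+1} = theta_t + g_t(grad f(theta_t), phi), with
# g implemented as an (untrained) LSTM cell that carries its own hidden state.
import torch
import torch.nn as nn

opt_net = nn.LSTMCell(input_size=1, hidden_size=20)  # the optimizer g (parameters phi)
readout = nn.Linear(20, 1)

def f(theta):                      # toy optimizee: a simple quadratic
    return (theta ** 2).sum()

theta = torch.randn(5, requires_grad=True)
state = None
for t in range(10):
    grad, = torch.autograd.grad(f(theta), theta)
    h, c = opt_net(grad.detach().unsqueeze(1), state)   # one coordinate per row
    state = (h, c)                                       # the optimizer keeps its own state
    update = readout(h).squeeze(1)
    theta = (theta + update).detach().requires_grad_()   # theta_{t+1} = theta_t + g_t
```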
1606.04474#5
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
6
In this work, we introduce a new type of linear connections for multi-layer recurrent networks. These connections, which are called fast-forward connections, play an essential role in building a deep topology with depth of 16. In addition, we introduce an interleaved bi-directional architecture to stack LSTM layers in the encoder. This topology can be used for both the encoder-decoder network and the attention network. On the WMT’14 English-to-French task, this is the deepest NMT topology that has ever been investigated. With our deep attention model, the BLEU score can be improved to 37.7, outperforming the shallow model which has six layers (Luong et al., 2015) by 6.2 BLEU points. This is also the first time on this task that a single NMT model achieves state-of-the-art performance and outperforms the best conventional SMT system (Durrani et al., 2014) with an improvement of 0.7. Even without using the attention mechanism, we can still achieve 36.3 with a single model. After model ensembling and unknown word processing, the BLEU score can be further improved to 40.4. When evaluated on the
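The excerpt above describes fast-forward connections only at a high level; their exact equations are not given here. Purely to illustrate the general idea of a linear, ungated path alongside an LSTM layer, a hypothetical PyTorch sketch follows; the wiring is an assumption for illustration, not the paper's formulation.

```python
# Illustrative (assumed) wiring: an LSTM layer plus a linear skip path that
# bypasses the gated, non-linear recurrent computation, giving gradients a
# linear route through the stack.
import torch
import torch.nn as nn

class LSTMWithLinearSkip(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.skip = nn.Linear(input_size, hidden_size, bias=False)  # linear, no gating

    def forward(self, x):
        h, _ = self.lstm(x)
        return h + self.skip(x)   # gradients also flow through the linear path

x = torch.randn(2, 7, 32)              # (batch, time, features)
out = LSTMWithLinearSkip(32, 64)(x)    # shape (2, 7, 64)
```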
1606.04199#6
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
6
# 2 The episodic controller In reinforcement learning [e.g. 37], an agent interacts with an environment through a sequence of states s_t ∈ S, actions a_t ∈ A, and rewards r_{t+1} ∈ R. Actions are determined by the agent’s policy π(a_t | s_t), a probability distribution over the action a_t. The goal of the agent is to learn a policy that maximises the expected discounted return R_t = Σ_{τ=1}^{T−t} γ^{τ−1} r_{t+τ}, where T is the time step at which each episode ends, and γ ∈ (0, 1] the discount rate. Upon executing an action a_t, the agent transitions from state s_t to state s_{t+1}.
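As a small worked example of the return defined above (not from the paper), the discounted sum can be computed by one backward pass over an episode's rewards; the reward values and discount below are made up.

```python
# Compute the discounted return at every step of an episode, where rewards[i]
# is the reward received after the action taken at step i.
def discounted_returns(rewards, gamma=0.99):
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

print(discounted_returns([0.0, 0.0, 1.0], gamma=0.9))  # [0.81, 0.9, 1.0]
```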
1606.04460#6
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
6
It is informative to consider the meaning of generalization in this framework. In ordinary statistical learning we have a particular function of interest, whose behavior is constrained through a data set of example function evaluations. In choosing a model we specify a set of inductive biases about how we think the function of interest should behave at points we have not observed, and generalization corresponds to the capacity to make predictions about the behavior of the target function at novel points. In our setting the examples are themselves problem instances, which means generalization corresponds to the ability to transfer knowledge between different problems. This reuse of problem structure is commonly known as transfer learning, and is often treated as a subject in its own right. However, by taking a meta-learning perspective, we can cast the problem of transfer learning as one of generalization, which is much better studied in the machine learning community. One of the great success stories of deep-learning is that we can rely on the ability of deep networks to generalize to new examples by learning interesting sub-structures. In this work we aim to leverage this generalization power, but also to lift it from simple supervised learning to the more general setting of optimization. # 1.2 A brief history and related work
1606.04474#6
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
7
achieve 36.3 with a single model. After model ensembling and unknown word processing, the BLEU score can be further improved to 40.4. When evaluated on the subset of the test corpus without unknown words, our model achieves 41.4. As a reference, previous work showed that oracle re-scoring of the 1000-best sequences generated by the SMT model can achieve the BLEU score of about 45 (Sutskever et al., 2014). Our models are also validated on the more difficult WMT’14 English-to-German task.
1606.04199#7
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
7
Environments with deterministic state transitions and rewards are common in daily experience. For example, in navigation, when you exit a room and then return back, you usually end up in the room where you started. This property of natural environments can be exploited by RL algorithms or brains. However, most existing scalable deep RL algorithms (such as DQN [23] and A3C [22]) do not do so. They were designed with more general environments in mind. Thus, in principle, they could operate in regimes with high degrees of stochasticity in both transitions and rewards. This generality comes at the cost of longer learning times. DQN and A3C both attempt to find a policy with maximal expected return. Evaluating the expected return requires many examples in order to get accurate estimates. Additionally, these algorithms are further slowed down by gradient descent learning, typically in lock-step with the rate at which actions are taken in the environment.
1606.04460#7
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
7
# 1.2 A brief history and related work The idea of using learning to learn or meta-learning to acquire knowledge or inductive biases has a long history [Thrun and Pratt, 1998]. More recently, Lake et al. [2016] have argued forcefully for its importance as a building block in artificial intelligence. Similarly, Santoro et al. [2016] frame multi-task learning as generalization, however unlike our approach they directly train a base learner rather than a training algorithm. In general these ideas involve learning which occurs at two different time scales: rapid learning within tasks and more gradual, meta learning across many different tasks. Perhaps the most general approach to meta-learning is that of Schmidhuber [1992, 1993], building on work from [Schmidhuber, 1987], which considers networks that are able to modify their own weights. Such a system is differentiable end-to-end, allowing both the network and the learning algorithm to be trained jointly by gradient descent with few restrictions. However this generality comes at the expense of making the learning rules very difficult to train. Alternatively, the work of Schmidhuber et al. [1997] uses the Success Story Algorithm to modify its search strategy rather than gradient descent; a similar approach has been recently taken in Daniel et al. [2016] which uses reinforcement learning to train a controller for selecting step-sizes.
1606.04474#7
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
8
# 2 Neural Machine Translation Neural machine translation aims at generating the target word sequence y = {y1, . . . , yn} given the source word sequence x = {x1, . . . , xm} with neural models. In this task, the likelihood p(y | x, θ) of the target sequence will be maximized (Forcada and Ñeco, 1997) with parameter θ to learn:
p(y | x, θ) = ∏_{j=1}^{m+1} p(y_j | y_{0:j−1}, x; θ)
where y_{0:j−1} is the sub-sequence from y_0 to y_{j−1}, and y_0 and y_{m+1} denote the start mark and end mark of the target sequence respectively. The process can be explicitly split into an encoding part, a decoding part and the interface between these two parts. In the encoding part, the source sequence is processed and transformed into a group of vectors e = {e1, · · · , em} for each time step. Further operations will be used at the interface part to extract the final representation c of the source sequence from e. At the decoding step, the target sequence is generated from the representation c.
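The factorised likelihood above can be illustrated with a short sketch (not the paper's model): the log-probability of the target sequence is the sum of per-step log-probabilities of each word given the preceding target words and the source. The step-probability function here is a stand-in callable, an assumption for illustration.

```python
# Sum of per-step log-probabilities for an autoregressive target sequence.
import math

def sequence_log_likelihood(step_prob, target_words):
    """step_prob(prefix, word) -> p(word | prefix, source); target_words should
    end with the end-of-sequence mark."""
    log_p, prefix = 0.0, ["<s>"]          # y_0 is the start mark
    for word in target_words:
        log_p += math.log(step_prob(prefix, word))
        prefix.append(word)
    return log_p

# Toy usage with a uniform stand-in distribution over a 5-word vocabulary.
print(sequence_log_likelihood(lambda prefix, w: 0.2, ["a", "b", "</s>"]))
```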
1606.04199#8
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
8
Given the ubiquity of such near-deterministic situations in the real world, it would be surprising if the brain did not employ specialised learning mechanisms to exploit this structure and thereby learn more quickly in such cases. The episodic controller model of hippocampal instance-based learning we propose here is just such a mechanism. It is a non-parametric model that rapidly records and replays the sequence of actions that so far yielded the highest return from a given start state. In its simplest form, it is a growing table, indexed by states and actions. By analogy with RL value functions, we denote this table QEC(s, a). Each entry contains the highest return ever obtained by taking action a from state s. The episodic control policy picks the action with the highest value in QEC for the given state. At the end of each episode, QEC is updated according to the return received as follows: QEC(st, at) ← Rt if (st, at) ∉ QEC, and QEC(st, at) ← max{QEC(st, at), Rt} otherwise, (1) where Rt is the discounted return received after taking action at in state st. Note that (1) is not a general purpose RL learning update: since the stored value can never decrease, it is not suited to rational action selection in stochastic environments.1
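The growing-table update of equation (1) is simple enough to sketch directly. This is a minimal Python illustration under the assumption that states and actions are hashable; `Q_EC` and `update_qec` are names chosen here, not from the paper.

```python
from collections import defaultdict

Q_EC = defaultdict(dict)  # Q_EC[state][action] -> highest return seen so far

def update_qec(state, action, discounted_return):
    """Update of equation (1): keep only the best return ever obtained
    by taking `action` from `state`."""
    if action not in Q_EC[state]:
        Q_EC[state][action] = discounted_return
    else:
        Q_EC[state][action] = max(Q_EC[state][action], discounted_return)

update_qec("s0", "right", 5.0)
update_qec("s0", "right", 3.0)   # a lower return never overwrites the entry
print(Q_EC["s0"]["right"])       # 5.0
```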
1606.04460#8
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
8
Bengio et al. [1990, 1995] propose to learn updates which avoid back-propagation by using simple parametric rules. In relation to the focus of this paper the work of Bengio et al. could be characterized as learning to learn without gradient descent by gradient descent. The work of Runarsson and Jonsson [2000] builds upon this work by replacing the simple rule with a neural network. Cotter and Conwell [1990], and later Younger et al. [1999], also show that fixed-weight recurrent neural networks can exhibit dynamic behavior without the need to modify their network weights. Similarly this has been shown in a filtering context [e.g. Feldkamp and Puskorius, 1998], which is directly related to simple multi-timescale optimizers [Sutton, 1992, Schraudolph, 1999]. Finally, the work of Younger et al. [2001] and Hochreiter et al. [2001] connects these different threads of research by allowing for the output of backpropagation from one network to feed into an additional learning network, with both networks trained jointly. Our approach to meta-learning builds on this work by modifying the network architecture of the optimizer in order to scale this approach to larger neural-network optimization problems. # 2 Learning to learn with recurrent neural networks
1606.04474#8
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
9
Recently, there have been two types of NMT models which are different in the interface part. In the encoder-decoder model (Sutskever et al., 2014), a single vector extracted from e is used as the representation. In the attention model (Bahdanau et al., 2015), c is dynamically obtained according to the relationship between the target sequence and the source sequence. The recurrent neural network (RNN), or its specific form the LSTM, is generally used as the basic unit of the encoding and decoding part. However, the topology of most of the existing models is shallow. In the attention network, the encoding part and the decoding part have only one LSTM layer respectively. In the encoder-decoder network, researchers have used at most six LSTM layers (Luong et al., 2015). Because machine translation is a difficult problem, we believe more complex encoding and decoding architecture is needed for modeling the relationship between the source sequence and the target sequence. In this work, we focus on enhancing the complexity of the encoding/decoding architecture by increasing the model depth.
1606.04199#9
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
9
Tabular RL methods suffer from two key deficiencies: firstly, for large problems they consume a large amount of memory, and secondly, they lack a way to generalise across similar states. To address the first problem, we limit the size of the table by removing the least recently updated entry once a maximum size has been reached. Such forgetting of older, less frequently accessed memories also occurs in the brain [8]. In large scale RL problems (such as real life) novel states are common; the real world, in general, also has this property. We address the problem of what to do in novel states and how to generalise values across common experiences by taking QEC to be a non-parametric nearest-neighbours model. Let us assume that the state space S is imbued with a metric distance. For states that have never been visited, QEC is approximated by averaging the value of the k nearest states. Thus if s is a novel state then QEC is estimated as Q̂EC(s, a) = (1/k) ∑_{i=1}^{k} QEC(s^(i), a) if (s, a) ∉ QEC, and Q̂EC(s, a) = QEC(s, a) otherwise, (2) where s^(i), i = 1, . . . , k are the k states with the smallest distance to state s.2
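A minimal Python sketch of the estimate in equation (2), using one buffer of stored (state, value) pairs per action as described in the footnote below; the function name, buffer layout and the default value of k are choices made here for illustration, not taken from the paper.

```python
import numpy as np

def qec_estimate(buffers, state, action, k=3):
    """Estimate of equation (2): exact lookup if (state, action) has been
    stored, otherwise the mean value of the k nearest stored states for
    that action. `buffers[action]` is a list of (state_vector, value) pairs."""
    entries = buffers[action]
    for stored_state, value in entries:
        if np.array_equal(stored_state, state):
            return value                       # (s, a) already in Q^EC
    dists = [np.linalg.norm(stored_state - state) for stored_state, _ in entries]
    nearest = np.argsort(dists)[:k]            # indices of the k closest states
    return float(np.mean([entries[i][1] for i in nearest]))

# toy usage with 1-D "states"
buffers = {0: [(np.array([0.0]), 1.0), (np.array([2.0]), 3.0),
               (np.array([5.0]), 0.0)]}
print(qec_estimate(buffers, np.array([1.9]), action=0, k=2))  # 2.0 = mean of 3.0 and 1.0
```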
1606.04460#9
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
9
# 2 Learning to learn with recurrent neural networks In this work we consider directly parameterizing the optimizer. As a result, in a slight abuse of notation we will write the final optimizee parameters θ∗(f, φ) as a function of the optimizer parameters φ and the function in question. We can then ask the question: What does it mean for an optimizer to be good? Given a distribution of functions f we will write the expected loss as L(φ) = E_f [ f(θ∗(f, φ)) ]. (2) As noted earlier, we will take the update steps gt to be the output of a recurrent neural network m, parameterized by φ, whose state we will denote explicitly with ht. Next, while the objective function in (2) depends only on the final parameter value, for training the optimizer it will be convenient to have an objective that depends on the entire trajectory of optimization, for some horizon T: L(φ) = E_f [ ∑_{t=1}^{T} w_t f(θ_t) ], where θ_{t+1} = θ_t + g_t and [g_t; h_{t+1}] = m(∇_t, h_t, φ). (3)
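Equation (3) can be read as an unrolled loop: repeatedly query the optimizer for an update, apply it, and accumulate the weighted losses along the way. The sketch below only evaluates that objective for a fixed optimizer; the fixed-step rule stands in for the learned network m, and all names are hypothetical.

```python
import numpy as np

def meta_objective(f, grad_f, optimizer_step, theta0, h0, T, weights):
    """Unrolled objective of equation (3): sum_t w_t * f(theta_t), where each
    update g_t and new optimizer state h_t come from the optimizer rule.
    `optimizer_step(grad, h)` stands in for m(grad_t, h_t, phi)."""
    theta, h, total = theta0, h0, 0.0
    for t in range(T):
        total += weights[t] * f(theta)
        g, h = optimizer_step(grad_f(theta), h)
        theta = theta + g
    return total

# toy usage on f(theta) = ||theta||^2 with a fixed-step stand-in "optimizer"
f = lambda th: float(th @ th)
grad_f = lambda th: 2.0 * th
step = lambda grad, h: (-0.1 * grad, h)
print(meta_objective(f, grad_f, step, np.ones(5), None, T=20,
                     weights=np.ones(20)))
```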
1606.04474#9
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
10
Deep neural models have been studied in a wide range of problems. In computer vision, models with more than ten convolution layers outperform shallow ones on a series of image tasks in recent years (Srivastava et al., 2015; He et al., 2016; Szegedy et al., 2015). Different kinds of shortcut connections are proposed to decrease the length of the gradient propagation path. Training networks based on LSTM layers, which are widely used in language problems, is a much more challenging task. Because of the existence of many more nonlinear activations and the recurrent computation, gradient values are not stable and are generally smaller. Following the same spirit for convolutional networks, a lot of effort has also been spent on training deep LSTM networks. Yao et al. (2015) introduced depth-gated shortcuts, connecting LSTM cells at adjacent layers, to provide a fast way to propagate the gradients. They validated the modification of these shortcuts on an MT task and a language modeling task. However, the best score was obtained using models with three layers. Similarly, Kalchbrenner et al. (2016) proposed a two dimensional structure for the
1606.04199#10
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
10
where s(i), i = 1, . . . , k are the k states with the smallest distance to state s.2 Algorithm 1 describes the most basic form of the model-free episodic control. The algorithm has two phases. First, the policy implied by QEC is executed for a full episode, recording the rewards received at each step. This is done by projecting each observation from the environment ot via an embedding function φ to a state in an appropriate state space: st = φ(ot), then selecting the action with the highest estimated return according to QEC. In the second phase, the rewards, actions and states from an episode are associated via a backward replay process into QEC to improve the policy. Interestingly, this backward replay process is a potential algorithmic instance of the awake reverse replay of hippocampal states shown by [7], although as yet, we are unaware of any experiments testing this interesting use of hippocampus. # Algorithm 1 Model-Free Episodic Control.
1606.04460#10
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
10
L(φ) = E_f [ ∑_{t=1}^{T} w_t f(θ_t) ], where θ_{t+1} = θ_t + g_t and [g_t; h_{t+1}] = m(∇_t, h_t, φ). (3) Here wt ∈ R≥0 are arbitrary weights associated with each time-step and we will also use the notation ∇t = ∇θf (θt). This formulation is equivalent to (2) when wt = 1[t = T ], but later we will describe why using different weights can prove useful. We can minimize the value of L(φ) using gradient descent on φ. The gradient estimate ∂L(φ)/∂φ can be computed by sampling a random function f and applying backpropagation to the computational graph in Figure 2. We allow gradients to flow along the solid edges in the graph, but gradients along the dashed edges are dropped. Ignoring gradients along the dashed edges amounts to making the assumption that the gradients of the optimizee do not depend on the optimizer parameters, i.e. ∂∇t/∂φ = 0.
1606.04474#10
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
11
task. However, the best score was obtained using models with three layers. Similarly, Kalchbrenner et al. (2016) proposed a two dimensional structure for the LSTM. Their structure decreases the number of nonlinear activations and path length. However, the gradient propagation still relies on the recurrent computation. Investigations were also made on question-answering to encode the questions, where at most two LSTM layers were stacked (Hermann et al., 2015).
1606.04199#11
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
11
# Algorithm 1 Model-Free Episodic Control. 1: for each episode do; 2: for t = 1, 2, 3, . . . , T do; 3: Receive observation o_t from environment; 4: Let s_t = φ(o_t); 5: Estimate return for each action a via (2); 6: Let a_t = arg max_a QEC(s_t, a); 7: Take action a_t, receive reward r_{t+1}; 8: end for; 9: for t = T, T−1, . . . , 1 do; 10: Update QEC(s_t, a_t) using R_t according to (1); 11: end for; 12: end for. The episodic controller acts according to the returns recorded in QEC, in an attempt to replay successful sequences of actions and recreate past successes. The values stored in QEC(s, a) thus do 1Following a policy that picks the action with the highest QEC value would yield a strong risk seeking behaviour in stochastic environments. It is also possible to, instead, remove the max operator and store Rt directly. This yields a less optimistic estimate and performed worse in preliminary experiments. 2 In practice, we implemented this by having one kNN buffer for each action a ∈ A and finding the k closest states in each buffer to state s.
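The two phases of Algorithm 1 (acting, then backward replay into QEC) can be exercised end-to-end on a toy problem. The sketch below is an editorial illustration, not the paper's implementation: the chain environment, the identity embedding φ(o) = o, and the small amount of random exploration are assumptions added so the snippet runs standalone; the backward replay and the max-update follow equation (1).

```python
import random

class ToyChain:
    """Minimal deterministic environment: action 1 moves right towards a
    terminal reward at the end of the chain, action 0 stays in place."""
    def __init__(self, length=5):
        self.length = length
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos = min(self.pos + action, self.length)
        done = self.pos == self.length
        return self.pos, (1.0 if done else 0.0), done

def run_episode(env, Q_EC, actions, epsilon=0.1, gamma=0.9):
    """One episode of Algorithm 1 with the identity embedding phi(o) = o:
    act (mostly) greedily w.r.t. Q^EC, then replay the episode backwards,
    turning rewards into discounted returns R_t and applying update (1)."""
    trajectory, obs, done = [], env.reset(), False
    while not done:
        s = obs
        if random.random() < epsilon or s not in Q_EC:
            a = random.choice(actions)          # exploration (an assumption here)
        else:
            a = max(actions, key=lambda act: Q_EC[s].get(act, 0.0))
        obs, reward, done = env.step(a)
        trajectory.append((s, a, reward))
    R = 0.0
    for s, a, reward in reversed(trajectory):   # backward replay
        R = reward + gamma * R
        entry = Q_EC.setdefault(s, {})
        entry[a] = max(entry.get(a, float("-inf")), R)   # update of eq. (1)
    return R

env, Q_EC = ToyChain(), {}
for _ in range(30):
    run_episode(env, Q_EC, actions=[0, 1])
print(Q_EC[0][1])   # best discounted return found from the start (at most 0.9**4)
```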
1606.04460#11
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
11
Examining the objective in (3) we see that the gradient is non-zero only for terms where w_t ≠ 0. If we use w_t = 1[t = T] to match the original problem, then gradients of trajectory prefixes are zero and only the final optimization step provides information for training the optimizer. This renders Backpropagation Through Time (BPTT) inefficient. We solve this problem by relaxing the objective such that w_t > 0 at intermediate points along the trajectory. This changes the objective function, but allows us to train the optimizer on partial trajectories. For simplicity, in all our experiments we use w_t = 1 for every t. Figure 2: Computational graph used for computing the gradient of the optimizer. # 2.1 Coordinatewise LSTM optimizer
1606.04474#11
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
12
Based on the above considerations, we propose new connections to facilitate gradient propagation in the following section. # 3 Deep Topology We build the deep LSTM network with the newly proposed linear connections. The shortest paths through the proposed connections do not include any nonlinear transformations and do not rely on any recurrent computation. We call these connections fast-forward connections. Within the deep topology, we also introduce an interleaved bi-directional architecture to stack the LSTM layers. # 3.1 Network Our entire deep neural network is shown in Fig. 2. This topology can be divided into three parts: the
1606.04199#12
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
12
2 In practice, we implemented this by having one kNN buffer for each action a ∈ A and finding the k closest states in each buffer to state s. not correspond to estimates of the expected return; rather, they are estimates of the highest potential return for a given state and action, based upon the states, rewards and actions seen. Computing and behaving according to such a value function is useful in regimes where exploitation is more important than exploration, and where there is relatively little noise in the environment. # 3 Representations In the brain, the hippocampus operates on a representation which notably includes the output of the ventral stream [3, 15, 38]. Thus it is expected to generalise along the dimensions of that representation space [19]. Similarly, the feature mapping, φ, can play a critical role in how our episodic control algorithm performs when it encounters novel states3. Whilst the original observation space could be used, this may not work in practice. For example, each frame in the environments we consider in Section 4 would occupy around 28 KBytes of memory and would require more than 300 gigabytes of memory for our experiments. Instead we consider two different embeddings of observations into a state space, φ, each having quite distinctive properties in setting the inductive bias of the QEC estimator.
1606.04460#12
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
12
Figure 2: Computational graph used for computing the gradient of the optimizer. # 2.1 Coordinatewise LSTM optimizer One challenge in applying RNNs in our setting is that we want to be able to optimize at least tens of thousands of parameters. Optimizing at this scale with a fully connected RNN is not feasible as it would require a huge hidden state and an enormous number of parameters. To avoid this difficulty we will use an optimizer m which operates coordinatewise on the parameters of the objective function, similar to other common update rules like RMSprop and ADAM. This coordinatewise network architecture allows us to use a very small network that only looks at a single coordinate to define the optimizer and share optimizer parameters across different parameters of the optimizee. Different behavior on each coordinate is achieved by using separate activations for each objective function parameter. In addition to allowing us to use a small network for this optimizer, this setup has the nice effect of making the optimizer invariant to the order of parameters in the network, since the same update rule is used independently on each coordinate.
1606.04474#12
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
13
# 3.1 Network Our entire deep neural network is shown in Fig. 2. This topology can be divided into three parts: the encoder part (P-E) on the left, the decoder part (P-D) on the right and the interface between these two parts (P-I) which extracts the representation of the source sequence. We have two instantiations of this topology: Deep-ED and Deep-Att, which correspond to the extension of the encoder-decoder network and the attention network respectively. Our main innovation is the novel scheme for connecting adjacent recurrent layers. We will start with the basic RNN model for the sake of clarity. Recurrent layer: When an input sequence {x1, . . . , xm} is given to a recurrent layer, the output ht at each time step t can be computed as (see Fig. 1 (a)) ht = σ(Wf xt + Wr ht−1) = RNN(Wf xt, ht−1) = RNN(ft, ht−1), (2)
1606.04199#13
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
13
One way of decreasing memory and computation requirements is to utilise a random projection of the original observations into a smaller-dimensional space, i.e. φ : x → Ax, where A ∈ R^{F×D} and F < D, with D the dimensionality of the observation. For a random matrix A with entries drawn from a standard Gaussian, the Johnson-Lindenstrauss lemma implies that this transformation approximately preserves relative distances in the original space [10]. We expect this representation to be sufficient when small changes in the original observation space correspond to small changes in the underlying return.
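A short NumPy sketch of such a random-projection embedding, assumed for illustration (the 84x84 input size and 64-dimensional output match the experiments described later; the 1/sqrt(F) scaling is an editorial choice so that distances stay roughly on the original scale, while relative distances are what the lemma preserves).

```python
import numpy as np

rng = np.random.default_rng(0)
D, F = 84 * 84, 64                             # observation dim and projected dim
A = rng.standard_normal((F, D)) / np.sqrt(F)   # fixed, scaled Gaussian projection

def phi(observation):
    """Random-projection embedding phi(x) = Ax; relative distances are
    approximately preserved per the Johnson-Lindenstrauss lemma."""
    return A @ observation.reshape(-1)

x1, x2 = rng.random(D), rng.random(D)
print(np.linalg.norm(x1 - x2), np.linalg.norm(phi(x1) - phi(x2)))  # roughly equal
```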
1606.04460#13
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
13
We implement the update rule for each coordinate using a two-layer Long Short Term Memory (LSTM) network [Hochreiter and Schmidhuber, 1997], using the now-standard forget gate architecture. The network takes as input the optimizee gradient for a single coordinate as well as the previous hidden state and outputs the update for the corresponding optimizee parameter. We will refer to this architecture, illustrated in Figure 3, as an LSTM optimizer.
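The coordinatewise structure itself, shared parameters applied independently to every coordinate with per-coordinate state, can be sketched without the LSTM machinery. In the illustration below a single momentum-like scalar stands in for the two-layer, 20-unit LSTM state of the paper; it is an editorial sketch of the sharing and order-invariance, not the paper's optimizer.

```python
import numpy as np

class CoordinatewiseOptimizer:
    """One small recurrent update rule with shared parameters, applied to
    every optimizee coordinate, each coordinate carrying its own hidden
    state (here a single momentum-like scalar per coordinate)."""
    def __init__(self, n_coords, lr=0.1, decay=0.9):
        self.h = np.zeros(n_coords)        # one hidden value per coordinate
        self.lr, self.decay = lr, decay    # shared "optimizer parameters"

    def step(self, grads):
        # the same rule is applied elementwise, so the optimizer is invariant
        # to the ordering of the optimizee parameters
        self.h = self.decay * self.h + grads
        return -self.lr * self.h           # per-coordinate update g_t

# toy usage: minimise f(theta) = ||theta||^2 over 1000 coordinates
theta = np.ones(1000)
opt = CoordinatewiseOptimizer(theta.size)
for _ in range(200):
    theta = theta + opt.step(2.0 * theta)
print(float(theta @ theta))                # close to zero
```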
1606.04474#13
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
14
where the bias parameter is not included for simplicity. We use a red circle and a blue empty square to denote an input and a hidden state. A blue square with a “-” denotes the previous hidden state. A dotted line means that the hidden state is used recurrently. This computation can be equivalently split into two consecutive steps: • Feed-Forward computation: ft = Wf xt. Left part in Fig. 1 (b). “f” block. • Recurrent computation: ht = RNN(ft, ht−1). Right part and the sum operation (+) followed by activation in Fig. 1 (b). “r” block.
1606.04199#14
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
14
For some environments, many aspects of the observation space are irrelevant for value prediction. For example, illumination and textured surfaces in 3D environments (e.g. Labyrinth in Section 4), and scrolling backgrounds in 2D environments (e.g. River Raid in Section 4) may often be irrelevant. In these cases, small distances in the original observation space may not be correlated with small distances in action-value. A feature extraction method capable of extracting a more abstract representation of the state space (e.g. 3D geometry or the position of sprites in the case of 2D video-games) could result in a more suitable distance calculation. Abstract features can be obtained by using latent-variable probabilistic models. Variational autoencoders (VAE; [12, 30]), further described in the supplementary material, have shown a great deal of promise across a wide range of unsupervised learning problems on images. Interestingly, the latent representations learnt by VAEs in an unsupervised fashion can lie on well structured manifolds capturing salient factors of variation [12, Figures 4(a) and (b)]; [30, Figure 3(b)]. In our experiments, we train the VAEs on
1606.04460#14
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
14
The use of recurrence allows the LSTM to learn dynamic update rules which integrate information from the history of gradients, similar to momentum. This is known to have many desirable properties in convex optimization [see e.g. Nesterov, 1983] and in fact many recent learning procedures, such as ADAM, use momentum in their updates. Preprocessing and postprocessing Optimizer inputs and outputs can have very different magnitudes depending on the class of function being optimized, but neural networks usually work robustly only for inputs and outputs which are neither very small nor very large. In practice rescaling inputs and outputs of an LSTM optimizer using suitable constants (shared across all timesteps and functions f ) is sufficient to avoid this problem. In Appendix A we propose a different method of preprocessing the optimizer inputs which is more robust and gives slightly better performance. [Figure 4: panels Quadratics, MNIST, and MNIST (200 steps); legend: ADAM, RMSprop, SGD, NAG, LSTM.]
1606.04474#14
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
15
Recurrent computation: ht = RNN(ft, ht−1). Right part and the sum operation (+) followed by activation in Fig. 1 (b). “r” block. For a deep topology with stacked recurrent layers, the input of each block “f” at recurrent layer k (denoted by f^k) is usually the output of block “r” at its previous recurrent layer k − 1 (denoted by h^{k−1}). In our work, we add fast-forward connections (F-F connections) which connect two feed-forward computation blocks “f” of adjacent recurrent layers. It means that each block “f” at recurrent layer k takes both the outputs of block “f” and block “r” at its previous layer as input (Fig. 1 (c)). F-F connections are denoted by dashed red lines in Fig. 1 (c) and Fig. 2. The path of F-F connections contains neither nonlinear activations nor recurrent computation. It provides a fast path for information to propagate, so we call this path fast-forward connections.
1606.04199#15
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
15
salient factors of variation [12, Figures 4(a) and (b)]; [30, Figure 3(b)]. In our experiments, we train the VAEs on frames from an agent acting randomly. Using a different data source will yield different VAE features, and in principle features from one task can be used in another. Furthermore, the distance metric for comparing embeddings could also be learnt. We leave these two interesting extensions to future work.
1606.04460#15
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
15
Figure 4: Comparisons between learned and hand-crafted optimizers' performance. Learned optimizers are shown with solid lines and hand-crafted optimizers are shown with dashed lines. Units for the y axis in the MNIST plots are logits. Left: Performance of different optimizers on randomly sampled 10-dimensional quadratic functions. Center: the LSTM optimizer outperforms standard methods training the base network on MNIST. Right: Learning curves for steps 100-200 by an optimizer trained to optimize for 100 steps (continuation of center plot). # 3 Experiments In all experiments the trained optimizers use two-layer LSTMs with 20 hidden units in each layer. Each optimizer is trained by minimizing Equation 3 using truncated BPTT as described in Section 2. The minimization is performed using ADAM with a learning rate chosen by random search. We use early stopping when training the optimizer in order to avoid overfitting the optimizer. After each epoch (some fixed number of learning steps) we freeze the optimizer parameters and evaluate its performance. We pick the best optimizer (according to the final validation loss) and report its average performance on a number of freshly sampled test problems.
1606.04474#15
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
16
Figure 1: RNN models. The recurrent use of a hidden state is denoted by dotted lines. A “-” mark denotes the hidden value of the previous time step. (a): Basic RNN. (b): Basic RNN with intermediate computational state and the sum operation (+) followed by activation. It consists of block “f” and block “r”, and is equivalent to (a). (c): Two stacked RNN layers with F-F connections denoted by dashed red lines. In order to learn more temporal dependencies, the sequences can be processed in different directions at each pair of adjacent recurrent layers. This is quantitatively expressed in Eq. 3: f_t^k = W_f^k · [f_t^{k−1}, h_t^{k−1}] for k > 1; f_t^k = W_f^k x_t for k = 1; h_t^k = RNN^k(f_t^k, h^k_{t+(−1)^k}). (3) The opposite directions are marked by the direction term (−1)^k. At the first recurrent layer, the block “f” takes x_t as the input. [ , ] denotes the concatenation of vectors. This is shown in Fig. 1 (c). The two changes are summarized here:
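The connectivity pattern of Eq. 3 can be illustrated with a rough NumPy sketch. This is an editorial illustration, not the paper's implementation: a plain tanh RNN stands in for the LSTM, the weights are random, and only the fast-forward concatenation and the alternating direction per layer are the point.

```python
import numpy as np

def ff_stacked_rnn(x_seq, layers, hidden):
    """Sketch of Eq. 3: each layer k computes f_t^k from the concatenation
    [f_t^{k-1}; h_t^{k-1}] (the fast-forward connection) and feeds it to a
    simple tanh RNN whose processing direction alternates with (-1)^k."""
    rng = np.random.default_rng(0)
    T = len(x_seq)
    f_prev, h_prev = x_seq, [np.zeros(hidden)] * T     # layer-0 "outputs"
    for k in range(1, layers + 1):
        in_dim = len(f_prev[0]) + (0 if k == 1 else hidden)
        Wf = rng.standard_normal((hidden, in_dim)) * 0.1
        Wr = rng.standard_normal((hidden, hidden)) * 0.1
        # fast-forward part: purely linear, no activation, no recurrence
        if k == 1:
            f = [Wf @ x for x in f_prev]
        else:
            f = [Wf @ np.concatenate([f_prev[t], h_prev[t]]) for t in range(T)]
        # recurrent part, with direction alternating between layers
        order = range(T) if k % 2 == 1 else range(T - 1, -1, -1)
        h, state = [None] * T, np.zeros(hidden)
        for t in order:
            state = np.tanh(f[t] + Wr @ state)
            h[t] = state
        f_prev, h_prev = f, h
    return f_prev, h_prev

f, h = ff_stacked_rnn([np.ones(8) for _ in range(5)], layers=4, hidden=16)
print(len(h), h[0].shape)    # 5 time steps, 16-dimensional hidden states
```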
1606.04199#16
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
16
# 4 Experimental results We tested our algorithm on two environments: the Arcade Learning Environment (Atari) [2], and a first-person 3-dimensional environment called Labyrinth [22]. Videos of the trained agents are available online4. The Arcade Learning Environment is a suite of arcade games originally developed for the Atari-2600 console. These games are relatively simple visually but require complex and precise policies to achieve high expected reward [23]. Labyrinth provides a more complex visual experience, but requires relatively simple policies e.g. turning when in the presence of a particular visual cue. The three Labyrinth environments are foraging tasks with appetitive, aversive and sparse appetitive reward structures, respectively. 3One way to understand this is that this feature mapping φ determines the dynamic discretization of the state-space into Voronoi cells implied by the k-nearest neighbours algorithm underlying the episodic controller. 4https://sites.google.com/site/episodiccontrol/ For each environment, we tested the performance of the episodic controller using two embeddings of the observations φ: (1) 64 random-projections of the pixel observations and (2) the 64 parameters of a Gaussian approximation to the posterior over the latent dimensions in a VAE.
1606.04460#16
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
16
We compare our trained optimizers with standard optimizers used in Deep Learning: SGD, RMSprop, ADAM, and Nesterov’s accelerated gradient (NAG). For each of these optimizers and each problem we tuned the learning rate, and report results with the rate that gives the best final error for each problem. When an optimizer has more parameters than just a learning rate (e.g. decay coefficients for ADAM) we use the default values from the optim package in Torch7. Initial values of all optimizee parameters were sampled from an IID Gaussian distribution. # 3.1 Quadratic functions In this experiment we consider training an optimizer on a simple class of synthetic 10-dimensional quadratic functions. In particular we consider minimizing functions of the form f(θ) = ||Wθ − y||_2^2 for different 10×10 matrices W and 10-dimensional vectors y whose elements are drawn from an IID Gaussian distribution. Optimizers were trained by optimizing random functions from this family and tested on newly sampled functions from the same distribution. Each function was optimized for 100 steps and the trained optimizers were unrolled for 20 steps. We have not used any preprocessing, nor postprocessing.
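Sampling a member of this quadratic family is straightforward; the sketch below is assumed for illustration and uses plain gradient descent as a stand-in baseline rather than any of the optimizers compared in the paper.

```python
import numpy as np

def sample_quadratic(dim=10, rng=np.random.default_rng(0)):
    """Draw one member of the synthetic family f(theta) = ||W theta - y||_2^2
    with IID Gaussian W and y, and return the function and its gradient."""
    W, y = rng.standard_normal((dim, dim)), rng.standard_normal(dim)
    f = lambda th: float(np.sum((W @ th - y) ** 2))
    grad = lambda th: 2.0 * W.T @ (W @ th - y)
    return f, grad

f, grad = sample_quadratic()
theta = np.zeros(10)
for _ in range(200):                # plain gradient descent as a baseline
    theta -= 0.005 * grad(theta)
print(f(theta))                     # well below the starting loss f(0) = ||y||^2
```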
1606.04474#16
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
17
• We add a connection between $f^k_t$ and $f^{k-1}_t$. Without $f^{k-1}_t$, our model will be reduced to the traditional stacked model. • We alternate the RNN direction at different layers k with the direction term $(-1)^k$. If we fix the direction term to −1, all layers work in the forward direction. LSTM layer: In our experiments, instead of an RNN, a specific type of recurrent layer called LSTM (Hochreiter and Schmidhuber, 1997; Graves et al., 2009) is used. The LSTM is structurally more complex than the basic RNN in Eq. 2. We define the computation of the LSTM as a function which maps the input f and its state-output pair (h, s) at the previous time step to the current state-output pair. The exact computations for $(h_t, s_t) = \mathrm{LSTM}(f_t, h_{t-1}, s_{t-1})$ are the following:
1606.04199#17
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
17
For the experiments that use latent features from a VAE, a random policy was used for one million frames at the beginning of training; these one million observations were used to train the VAE. The episodic controller is started after these one million frames, and uses the features obtained from the VAE. Both mean and log-standard-deviation parameters were used as dimensions in the calculation of Euclidean distances. To account for the initial phase of training we displaced performance curves for agents that use VAE features by one million frames. # 4.1 Atari For the Atari experiments we considered a set of five games, namely: Ms. PAC-MAN, Q*bert, River Raid, Frostbite, and Space Invaders. We compared our algorithm to the original DQN algorithm [23], to DQN with prioritised replay [31], and to the asynchronous advantage actor-critic [22] (A3C), a state-of-the-art policy gradient method 5. Following [23], observations were rescaled to 84 by 84 pixels and converted to gray-scale. The Atari simulator produces 60 observations (frames) per second of game play. The agents interact with the environment 15 times per second, as actions are repeated 4 times to decrease the computational requirements. An hour of game play corresponds to approximately 200,000 frames.
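A small sketch of the observation pipeline described here; the crude luminance conversion, the nearest-neighbour resize, and the env.step interface are placeholders chosen for illustration, not details taken from the text.

```python
import numpy as np

def preprocess(rgb_frame, out_size=84):
    """Convert an RGB Atari frame to an 84x84 gray-scale image.
    The paper states only the target resolution; the averaging and the
    nearest-neighbour resize used here are assumptions."""
    gray = rgb_frame.mean(axis=2)
    rows = np.linspace(0, gray.shape[0] - 1, out_size).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, out_size).astype(int)
    return gray[np.ix_(rows, cols)]

def step_with_repeat(env, action, repeat=4):
    """Repeat the chosen action for 4 simulator frames (15 agent steps per
    second at 60 fps), summing reward. `env` is a hypothetical object with a
    step(action) -> (rgb_frame, reward, done) interface."""
    total_reward, frame, done = 0.0, None, False
    for _ in range(repeat):
        frame, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return preprocess(frame), total_reward, done
```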
1606.04460#17
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
17
Learning curves for different optimizers, averaged over many functions, are shown in the left plot of Figure 4. Each curve corresponds to the average performance of one optimization algorithm on many test functions; the solid curve shows the learned optimizer performance and dashed curves show the performance of the standard baseline optimizers. It is clear the learned optimizers substantially outperform the baselines in this setting. # 3.2 Training a small neural network on MNIST In this experiment we test whether trainable optimizers can learn to optimize a small neural network on MNIST, and also explore how the trained optimizers generalize to functions beyond those they were trained on. To this end, we train the optimizer to optimize a base network and explore a series of modifications to the network architecture and training procedure at test time.
1606.04474#17
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
18
$[z, z_\rho, z_\phi, z_\pi] = f_t + W_r h_{t-1}$, $s_t = \sigma_i(z) \circ \sigma_g(z_\rho + s_{t-1} \circ \theta_\rho) + \sigma_g(z_\phi + s_{t-1} \circ \theta_\phi) \circ s_{t-1}$, $h_t = \sigma_o(s_t) \circ \sigma_g(z_\pi + s_t \circ \theta_\pi)$ (4) where $[z, z_\rho, z_\phi, z_\pi]$ is the concatenation of four vectors of equal size, $\circ$ means element-wise multiplication, $\sigma_i$ is the input activation function, $\sigma_o$ is the output activation function, $\sigma_g$ is the activation function for gates, and $W_r$, $\theta_\rho$, $\theta_\phi$, and $\theta_\pi$ are the parameters of the LSTM. It is slightly different from the standard notation in that we do not have a matrix to multiply with the input f in our notation. With this notation, we can write down the computations for our deep bi-directional LSTM model with F-F connections: $f^k_t = W^k_f \cdot [f^{k-1}_t, h^{k-1}_t]$ for $k > 1$, $f^1_t = W^1_f \cdot x_t$ for $k = 1$, and $(h^k_t, s^k_t) = \mathrm{LSTM}^k(f^k_t, h^k_{t+(-1)^k}, s^k_{t+(-1)^k})$ (5)
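The reconstructed Eq. 4 above can be written out as in the sketch below; choosing tanh for the input and output activations and the logistic sigmoid for the gates is an assumption, since the text only names them σi, σo and σg.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(f_t, h_prev, s_prev, W_r, th_rho, th_phi, th_pi):
    """One step of the LSTM in Eq. 4. f_t is assumed to be already projected
    to 4*d dimensions, so no input matrix appears here (as the text notes)."""
    z_all = f_t + W_r @ h_prev                      # [z, z_rho, z_phi, z_pi]
    z, z_rho, z_phi, z_pi = np.split(z_all, 4)
    s_t = np.tanh(z) * sigmoid(z_rho + s_prev * th_rho) \
        + sigmoid(z_phi + s_prev * th_phi) * s_prev
    h_t = np.tanh(s_t) * sigmoid(z_pi + s_t * th_pi)
    return h_t, s_t

# Smoke test with d = 8 hidden units.
d, rng = 8, np.random.RandomState(0)
h, s = np.zeros(d), np.zeros(d)
W_r = 0.1 * rng.randn(4 * d, d)
th_rho, th_phi, th_pi = (0.1 * rng.randn(d) for _ in range(3))
h, s = lstm_step(rng.randn(4 * d), h, s, W_r, th_rho, th_phi, th_pi)
print(h.shape, s.shape)
```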
1606.04199#18
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
18
In the episodic controller, the size of each buffer (one per action) of state-value pairs was limited to one million entries. If the buffer is full and a new state-value pair has to be introduced, the least recently used state is discarded. The k-nearest-neighbour lookups used k = 11. The discount rate was set to γ = 1. Exploration is achieved by using an ε-greedy policy with ε = 0.005. We found that higher exploration rates were not as beneficial, as more exploration makes exploiting what is known harder. Note that previously published exploration rates (e.g., [22, 23]) are at least a factor of ten higher. Thus interestingly, our method attains good performance on a wide range of domains with relatively little random exploration. Results are shown in the top two rows of Figure 1. In terms of data efficiency the episodic controller outperformed all other algorithms during the initial learning phase of all games. On Q*bert and River Raid, the episodic controller is eventually overtaken by some of the parametric controllers (not shown in Figure 1). After an initial phase of fast learning the episodic controller was limited by the decrease in the relative amount of new experience that could be obtained in each episode as these become longer. In contrast the parametric controllers could utilise their non-local generalisation capabilities to handle the later stages of the games.
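A minimal sketch of one per-action value buffer with the stated capacity, least-recently-used eviction, and k = 11 nearest-neighbour estimate; storing keys as tuples and averaging neighbour values follows the episodic-control description, but the data structure itself is only illustrative.

```python
import numpy as np
from collections import OrderedDict

class QECBuffer:
    """One per-action episodic memory: key -> highest observed return,
    least-recently-used eviction when full, and a k-nearest-neighbour
    average for keys that have never been seen."""
    def __init__(self, capacity=1_000_000, k=11):
        self.capacity, self.k = capacity, k
        self.store = OrderedDict()

    def update(self, key, value):
        key = tuple(key)
        if key in self.store:
            self.store[key] = max(self.store[key], value)  # keep best return
            self.store.move_to_end(key)
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)             # evict LRU entry
            self.store[key] = value

    def estimate(self, key):
        key = tuple(key)
        if key in self.store:
            self.store.move_to_end(key)
            return self.store[key]
        if not self.store:
            return 0.0
        keys = np.array(list(self.store.keys()))
        dists = np.linalg.norm(keys - np.asarray(key), axis=1)
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean([self.store[tuple(keys[i])] for i in nearest]))

# Toy usage: two stored keys, one unseen query averaged over 2 neighbours.
buf = QECBuffer(capacity=1000, k=2)
buf.update((0.0, 0.0), 1.0)
buf.update((1.0, 1.0), 3.0)
print(buf.estimate((0.2, 0.1)))
```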
1606.04460#18
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
18
Figure 5: Comparisons between learned and hand-crafted optimizers' performance. Units for the y axis are logits. Left: Generalization to the different number of hidden units (40 instead of 20). Center: Generalization to the different number of hidden layers (2 instead of 1). This optimization problem is very hard, because the hidden layers are very narrow. Right: Training curves for an MLP with 20 hidden units using ReLU activations. The LSTM optimizer was trained on an MLP with sigmoid activations. Figure 6: Systematic study of final MNIST performance as the optimizee architecture is varied, using sigmoid non-linearities. The vertical dashed line in the left-most plot denotes the architecture at which the LSTM is trained and the horizontal line shows the final performance of the trained optimizer in this setting.
1606.04474#18
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
19
where $x_t$ is the input to the deep bi-directional LSTM model. For the encoder, $x_t$ is the embedding of the t-th word in the source sentence. For the decoder $x_t$ is the concatenation of the embedding of the t-th word in the target sentence and the encoder representation for step t. In our final model two additional operations are used with Eq. 5, which is shown in Eq. 6. Half(f) denotes the first half of the elements of f, and Dr(h) is the dropout operation (Hinton et al., 2012) which randomly sets an element of h to zero with a certain probability. The use of Half(·) is to reduce the parameter size and does not affect the performance. We observed noticeable performance degradation when using only the first third of the elements of f. $f^k_t = W^k_f \cdot [\mathrm{Half}(f^{k-1}_t), \mathrm{Dr}(h^{k-1}_t)], \; k > 1$ (6) With the F-F connections, we build a fast channel to propagate the gradients in the deep topology.
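A sketch of Eq. 6 as reconstructed above; the inverted-dropout variant and the specific vector sizes are assumptions made only to make the example concrete.

```python
import numpy as np

def ff_input(f_prev, h_prev, W_f, drop_p=0.1, rng=np.random):
    """Eq. 6 sketch: form the layer-k input from Half() of the previous
    layer's fast-forward vector and a dropped-out copy of its hidden
    output, then project with W_f."""
    half = f_prev[: f_prev.shape[-1] // 2]                 # Half(f^{k-1}_t)
    mask = (rng.rand(*h_prev.shape) >= drop_p) / (1.0 - drop_p)
    return W_f @ np.concatenate([half, h_prev * mask])     # f^k_t

# Example: h has 8 units, f is four times larger (cf. Eq. 4).
rng = np.random.RandomState(0)
f_prev, h_prev = rng.randn(32), rng.randn(8)
W_f = 0.1 * rng.randn(32, 16 + 8)
print(ff_input(f_prev, h_prev, W_f, rng=rng).shape)        # (32,)
```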
1606.04199#19
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
19
The two different embeddings (random projections and VAE) did not have a notable effect on the performance of the episodic control policies. Both representations proved more data efficient than the parametric policies. The only exception is Frostbite, where the VAE features perform noticeably worse. This may be due to the inability of a random policy to reach very far in the game, which results in a very poor training set for the VAE. Deep Q-networks and A3C exhibited a slow pace of policy improvement in Atari. For Frostbite and Ms. PAC-MAN, this has sometimes been attributed to naive exploration techniques [13, 28]. Our results demonstrate that a simple exploration technique like ε-greedy can result in much faster policy improvements when combined with a system that is able to learn in a one-shot fashion. The Atari environment has deterministic transitions and rewards. Each episode starts at one of thirty possible initial states. Therefore a sizeable percentage of state-action pairs are exactly matched in the buffers of Q-values: about 10% for Frostbite, 60% for Q*bert, 50% for Ms. PAC-MAN, 45% for Space Invaders, and 10% for River Raid. In the next section we report experiments on a set of more realistic environments where the same exact experience is seldom encountered twice.
1606.04460#19
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
19
In this setting the objective function f (θ) is the cross entropy of a small MLP with parameters θ. The values of f as well as the gradients ∂f(θ)/∂θ are estimated using random minibatches of 128 examples. The base network is an MLP with one hidden layer of 20 units using a sigmoid activation function. The only source of variability between different runs is the initial value θ0 and randomness in minibatch selection. Each optimization was run for 100 steps and the trained optimizers were unrolled for 20 steps. We used input preprocessing described in Appendix A and rescaled the outputs of the LSTM by the factor 0.1. Learning curves for the base network using different optimizers are displayed in the center plot of Figure 4. In this experiment NAG, ADAM, and RMSprop exhibit roughly equivalent performance, while the LSTM optimizer outperforms them by a significant margin. The right plot in Figure 4 compares the performance of the LSTM optimizer if it is allowed to run for 200 steps, despite having been trained to optimize for 100 steps. In this comparison we re-used the LSTM optimizer from the previous experiment, and here we see that the LSTM optimizer continues to outperform the baseline optimizers on this task.
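For concreteness, a sketch of the base optimizee described here (one hidden layer of 20 sigmoid units, cross-entropy loss on minibatches of 128); the synthetic random data below stands in for MNIST.

```python
import numpy as np

rng = np.random.RandomState(0)
D_IN, D_HID, D_OUT, BATCH = 784, 20, 10, 128

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_entropy(params, x, y):
    """Loss of the small MLP optimizee on one minibatch."""
    W1, b1, W2, b2 = params
    hidden = sigmoid(x @ W1 + b1)
    logits = hidden @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

params = [0.1 * rng.randn(D_IN, D_HID), np.zeros(D_HID),
          0.1 * rng.randn(D_HID, D_OUT), np.zeros(D_OUT)]
x, y = rng.rand(BATCH, D_IN), rng.randint(0, D_OUT, size=BATCH)
print(cross_entropy(params, x, y))
```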
1606.04474#19
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04460
20
5We are forever indebted to Tom Schaul for the prioritised replay baseline and Andrei Rusu for the A3C baseline. [Figure 1: average-reward curves for Ms. Pac-Man, Space Invaders, Frostbite, Q*bert, River Raid, Forage, Forage & Avoid, and Double T-Maze; legend: DQN, Prioritised DQN, A3C, EC-VAE, EC-RP.]
1606.04460#20
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
20
Generalization to different architectures Figure 5 shows three examples of applying the LSTM optimizer to train networks with different architectures than the base network on which it was trained. The modifications are (from left to right) (1) an MLP with 40 hidden units instead of 20, (2) a network with two hidden layers instead of one, and (3) a network using ReLU activations instead of sigmoid. In the first two cases the LSTM optimizer generalizes well, and continues to outperform the hand-designed baselines despite operating outside of its training regime. However, changing the activation function to ReLU makes the dynamics of the learning procedure sufficiently different that the learned optimizer is no longer able to generalize. Finally, in Figure 6 we show the results of systematically varying the tested architecture; for the LSTM results we again used the optimizer trained using 1 layer of 20 units and sigmoid non-linearities. Note that in this setting where the
1606.04474#20
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
21
connections can accelerate the model convergence while improving the performance. A similar idea was also used in (He et al., 2016; Zhou and Xu, 2015). Encoder: The LSTM layers are stacked following Eq. 5. We call this type of encoder interleaved bi-directional encoder. In addition, there are two similar columns (a1 and a2) in the encoder part. Each column consists of ne stacked LSTM layers. There is no connection between the two columns. The first layers of the two columns process the word representations of the source sequence in different directions. At the last LSTM layers, there are two groups of vectors representing the source sequence. The group size is the same as the length of the input sequence. Interface: Prior encoder-decoder models and attention models are different in their method of extracting the representations of the source sequences. In our work, as a consequence of the introduced F-F connections, we have 4 output vectors ($h^{n_e}_t$ and $f^{n_e}_t$ of both columns). The representations are modified for both Deep-ED and Deep-Att. mentary information but do not affect the performance much. $e_t$ is used as the final representation $c_t$.
1606.04199#21
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
21
—— DQN = Prioritised DQN ——— A3C ——= EC-VAE —— EC-RP Figure 1: Average reward vs. number of frames (in millions) experienced for five Atari games and three Labyrinth environments. Dark curves show the mean of five runs (results from only one run were available for DQN baselines) initialised with different random number seeds. Light shading shows the standard error of the mean across runs. Episodic controllers (orange and blue curves) outperform parametric Q-function estimators (light green and pink curves) and A3C (dark green curve) in the initial phase of learning. VAE curves start after one million frames to account for their training using a random policy. # 4.2 Labyrinth
1606.04460#21
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
21
Figure 7: Optimization performance on the CIFAR-10 dataset and subsets. Shown on the left is the LSTM optimizer versus various baselines trained on CIFAR-10 and tested on a held-out test set. The two plots on the right are the performance of these optimizers on subsets of the CIFAR labels. The additional optimizer LSTM-sub has been trained only on the heldout labels and is hence transferring to a completely novel dataset. test-set problems are similar enough to those in the training set we see even better generalization than the baseline optimizers. # 3.3 Training a convolutional network on CIFAR-10 Next we test the performance of the trained neural optimizers on optimizing classification performance for the CIFAR-10 dataset [Krizhevsky, 2009]. In these experiments we used a model with both convolutional and feed-forward layers. In particular, the model used for these experiments includes three convolutional layers with max pooling followed by a fully-connected layer with 32 hidden units; all non-linearities were ReLU activations with batch normalization.
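One plausible instantiation of the described optimizee in PyTorch; the channel counts, kernel sizes, and pooling placement are assumptions, since the text specifies only three convolutional layers with max pooling, batch normalization, ReLU, and a 32-unit fully connected layer.

```python
import torch.nn as nn

# Sketch of the CIFAR-10 optimizee architecture described above; only the
# layer types come from the text, the remaining hyperparameters are guesses.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16),
    nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32),
    nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32),
    nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 32), nn.ReLU(),   # 32-unit fully connected layer
    nn.Linear(32, 10),
)
```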
1606.04474#21
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
22
mentary information but do not affect the performance much. $e_t$ is used as the final representation $c_t$. For Deep-Att, we do not need the above two operations. We only concatenate the 4 output vectors at each time step to obtain $e_t$, and a soft attention mechanism (Bahdanau et al., 2015) is used to calculate the final representation $c_t$ from $e_t$. $e_t$ is summarized as: Deep-ED: $e_t = [h^{n_e,a_1}_m, \mathrm{Max}(h^{n_e,a_2}_t), \mathrm{Max}(f^{n_e,a_1}_t), \mathrm{Max}(f^{n_e,a_2}_t)]$; Deep-Att: $e_t = [h^{n_e,a_1}_t, h^{n_e,a_2}_t, f^{n_e,a_1}_t, f^{n_e,a_2}_t]$ (7) Note that the vector dimensionality of f is four times larger than that of h (see Eq. 4). $c_t$ is summarized as: Deep-ED: $c_t = e_m$ (const); Deep-Att: $c_t = \sum_{v=1}^{m} \alpha_{t,v} W_p e_v$ (8) where $\alpha_{t,v}$ is the normalized attention weight computed by:
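A small sketch of the reconstructed Eq. 7, building e_t from the two columns' last-layer hidden and fast-forward outputs; the array shapes and sequence length are assumptions used only for the example.

```python
import numpy as np

def encoder_interface(H1, H2, F1, F2, mode):
    """Eq. 7 sketch. H1, H2: last-layer hidden outputs of the two encoder
    columns, shape (m, d). F1, F2: their fast-forward outputs, shape (m, 4d)."""
    if mode == "Deep-ED":
        # Static representation: last step of column 1 plus per-dimension
        # max over time of the other three streams.
        e = np.concatenate([H1[-1], H2.max(axis=0),
                            F1.max(axis=0), F2.max(axis=0)])
        return np.tile(e, (H1.shape[0], 1))       # same e_t at every step
    # Deep-Att: concatenate the four streams at each time step.
    return np.concatenate([H1, H2, F1, F2], axis=1)

rng = np.random.RandomState(0)
m, d = 5, 8
H1, H2 = rng.randn(m, d), rng.randn(m, d)
F1, F2 = rng.randn(m, 4 * d), rng.randn(m, 4 * d)
print(encoder_interface(H1, H2, F1, F2, "Deep-ED").shape)   # (5, 80)
print(encoder_interface(H1, H2, F1, F2, "Deep-Att").shape)  # (5, 80)
```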
1606.04199#22
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]
1606.04460
22
# 4.2 Labyrinth The Labyrinth experiments involved three levels (screenshots are shown in Figure 2). The environment runs at 60 observations (frames) per simulated second of physical time. Observations are gray-scale images of 84 by 84 pixels. The agent interacts with the environment 15 times per second; actions are automatically repeated for 4 frames (to reduce computational requirements). The agent has eight different actions available to it (move-left, move-right, turn-left, turn-right, move-forward, move-backwards, move-forward and turn-left, move-forward and turn-right). In the episodic controller, the size of each buffer (one per action) of state-value pairs was limited to one hundred thousand entries. When the buffer was full and a new state-value pair had to be introduced, the least recently used
1606.04460#22
Model-Free Episodic Control
State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains.
http://arxiv.org/pdf/1606.04460
Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
stat.ML, cs.LG, q-bio.NC
null
null
stat.ML
20160614
20160614
[ { "id": "1512.08457" }, { "id": "1604.00289" } ]
1606.04474
22
The coordinatewise network decomposition introduced in Section 2.1—and used in the previous experiment—utilizes a single LSTM architecture with shared weights, but separate hidden states, for each optimizee parameter. We found that this decomposition was not sufficient for the model architecture introduced in this section due to the differences between the fully connected and convolutional layers. Instead we modify the optimizer by introducing two LSTMs: one proposes parameter updates for the fully connected layers and the other updates the convolutional layer parameters. Like the previous LSTM optimizer we still utilize a coordinatewise decomposition with shared weights and individual hidden states; however, LSTM weights are now shared only between parameters of the same type (i.e. fully-connected vs. convolutional). The performance of this trained optimizer compared against the baseline techniques is shown in Figure 7. The left-most plot displays the results of using the optimizer to fit a classifier on a held-out test set. The additional two plots on the right display the performance of the trained optimizer on modified datasets which only contain a subset of the labels, i.e. the CIFAR-2 dataset only contains data corresponding to 2 of the 10 labels. Additionally we include an optimizer LSTM-sub which was only trained on the held-out labels.
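The per-parameter-type split described here can be sketched as below; a single-unit recurrence stands in for the actual LSTM update rule, so this only illustrates the sharing pattern (rule parameters shared within a parameter group, recurrent state private to each optimizee tensor), not the learned optimizer itself.

```python
import numpy as np

class GroupedCoordinatewiseOptimizer:
    """Illustration of the sharing pattern only: one set of update-rule
    parameters per group ('conv' or 'fc'), applied coordinatewise, with a
    private recurrent state per optimizee parameter tensor."""
    def __init__(self, groups, rng):
        self.rule = {g: 0.1 * rng.randn(3) for g in groups}   # shared per group
        self.state = {}                                       # private per tensor

    def step(self, name, group, grad):
        h = self.state.get(name, np.zeros_like(grad))
        a, b, c = self.rule[group]
        h = np.tanh(a * grad + b * h)      # stand-in for the LSTM recurrence
        self.state[name] = h
        return c * h                       # proposed coordinatewise update

rng = np.random.RandomState(0)
opt = GroupedCoordinatewiseOptimizer(["conv", "fc"], rng)
print(opt.step("conv1.weight", "conv", rng.randn(16)).shape)
print(opt.step("fc1.weight", "fc", rng.randn(8)).shape)
```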
1606.04474#22
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
http://arxiv.org/pdf/1606.04474
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas
cs.NE, cs.LG
null
null
cs.NE
20160614
20161130
[]
1606.04199
23
a," is the normalized attention weight computed by: For Deep-ED, et is static and consists of four 1: The last time step output hne m of the parts. 2: Max-operation Max(·) of hne first column. t at all time steps of the second column, denoted by Max(hne,a2 ). Max(·) denotes obtaining the maximal value for each dimension over t. 3: Max(f ne,a1 ). The max-operation t t and last time step state extraction provide compliexp(a(Wpev, nye’)) ry 1,de ivr exp(a(Wpee, hy“7°)) (9) Ot" h1,dec t−1 is the first hidden layer output in the decoding part. a(·) is an alignment model described in (Bah- danau et al., 2015). For Deep-Att, in order to re- duce the memory cost, we linearly project (with Wp)
1606.04199#23
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
http://arxiv.org/pdf/1606.04199
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
cs.CL, cs.LG
TACL 2016
null
cs.CL
20160614
20160723
[ { "id": "1508.03790" }, { "id": "1510.07526" } ]