Dataset schema: id (string, length 12 to 15), title (string, length 8 to 162), content (string, length 1 to 17.6k), prechunk_id (string, length 0 to 15), postchunk_id (string, length 0 to 15), arxiv_id (string, length 10), references (list, length 1).
1608.03983#28
SGDR: Stochastic Gradient Descent with Warm Restarts
Figure 5:
1608.03983#27
1608.03983#29
1608.03983
[ "1703.05051" ]
1608.03983#29
SGDR: Stochastic Gradient Descent with Warm Restarts
Top-1 and Top-5 test errors obtained by SGD with momentum with the default learning rate schedule, SGDR with T0 = 1, Tmult = 2 and SGDR with T0 = 10, Tmult = 2 on WRN-28-10 trained on a version of ImageNet, with all images from all 1000 classes downsampled to 32 × 32 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Four settings of the initial learning rate are considered: 0.050, 0.025, 0.01 and 0.005.
1608.03983#28
1608.03983#30
1608.03983
[ "1703.05051" ]
1608.03983#30
SGDR: Stochastic Gradient Descent with Warm Restarts
# 5 DISCUSSION Our results suggest that even without any restarts the proposed aggressive learning rate schedule given by eq. (5) is competitive w.r.t. the default schedule when training WRNs on the CIFAR-10 (e.g., for T0 = 200, Tmult = 1) and CIFAR-100 datasets. In practice, the proposed schedule requires only two hyperparameters to be defined: the initial learning rate and the total number of epochs. We found that the anytime performance of SGDR remains similar when shorter epochs are considered (see section 8.1 in the Supplementary Material). One should not suppose that the parameter values used in this study and many other works with (Residual) Neural Networks are selected to demonstrate the fastest decrease of the training error. Instead, the best validation and/or test errors are in focus. Notably, the validation error is rarely used when training Residual Neural Networks because the recommendation is defined by the final solution (in our approach, the final solution of each run). One could use the validation error to determine the optimal initial learning rate and then run on the whole dataset; this could further improve results. The main purpose of our proposed warm restart scheme for SGD is to improve its anytime performance. While we mentioned that restarts can be useful to deal with multi-modal functions, we do not claim that we observe any effect related to multi-modality. As we noted earlier, one could decrease η^i_max and η^i_min at every new warm restart to control the amount of divergence. If new restarts are worse than the old ones w.r.t. validation error, then one might also consider going back to the last best solution and performing a new restart with adjusted hyperparameters. Our results reproduce the finding by Huang et al. (2016a) that intermediate models generated by SGDR can be used to build efficient ensembles at no cost. This finding makes SGDR especially attractive for scenarios when ensemble building is considered. # 6 CONCLUSION In this paper, we investigated a simple warm restart mechanism for SGD to accelerate the training of DNNs. Our SGDR simulates warm restarts by scheduling the learning rate to achieve competitive results on CIFAR-10 and CIFAR-100 roughly two to four times faster.
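To make the schedule concrete, here is a minimal Python sketch of the cosine annealing rule of eq. (5) with warm restarts; it is an illustration under the stated hyperparameters, not the authors' code, and the function name is my own choice.

```python
import math

def sgdr_learning_rate(epoch, eta_min=0.0, eta_max=0.05, T_0=10, T_mult=2):
    """Cosine-annealed learning rate with warm restarts (eq. 5 of the paper).

    `epoch` may be fractional, so the rate can be updated at every batch.
    T_0 is the length of the first run; each subsequent run is T_mult times longer.
    """
    T_i, t_cur = T_0, epoch
    # Find which restart cycle the current epoch falls into.
    while t_cur >= T_i:
        t_cur -= T_i
        T_i *= T_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / T_i))

# Example: learning rate at the start of the first 31 epochs for T_0 = 10, T_mult = 2.
schedule = [round(sgdr_learning_rate(e), 4) for e in range(31)]
```

With these settings the rate is reset to eta_max at epochs 0, 10 and 30, which matches the restart pattern discussed above.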
1608.03983#29
1608.03983#31
1608.03983
[ "1703.05051" ]
1608.03983#31
SGDR: Stochastic Gradient Descent with Warm Restarts
We also achieved new state-of-the-art results with SGDR, mainly by using even wider WRNs and ensembles of snapshots from SGDR's trajectory. Future empirical studies should also consider the SVHN, ImageNet and MS COCO datasets, for which Residual Neural Networks showed the best results so far. Our preliminary results on a dataset of EEG recordings suggest that SGDR delivers better and better results as we carry out more restarts and use more model snapshots. The results on our downsampled ImageNet dataset suggest that SGDR might also reduce the problem of learning rate selection because the annealing and restarts of SGDR scan / consider a range of learning rate values. Future work should consider warm restarts for other popular training algorithms such as AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014). Alternative network structures should also be considered; e.g., soon after our initial arXiv report (Loshchilov & Hutter, 2016), Zhang et al. (2016); Huang et al. (2016b); Han et al. (2016) reported that WRN models can be replaced by more memory-efficient models. Thus, it should be tested whether our results for individual models and ensembles can be further improved by using their networks instead of WRNs. Deep compression methods (Han et al., 2015) can be used to reduce the time and memory costs of DNNs and their ensembles.
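As an illustration of the snapshot-ensembling idea referenced above, here is a minimal sketch that averages the class probabilities of models saved at the end of each restart; it assumes each snapshot exposes a predict-probabilities callable, which is a hypothetical interface, not the paper's code.

```python
import numpy as np

def ensemble_predict(snapshot_predict_fns, inputs):
    """Average the class-probability outputs of model snapshots taken at the
    end of each SGDR restart and return the ensemble's predicted classes."""
    probs = [fn(inputs) for fn in snapshot_predict_fns]   # each: (N, num_classes)
    mean_probs = np.mean(probs, axis=0)
    return np.argmax(mean_probs, axis=1)
```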
1608.03983#30
1608.03983#32
1608.03983
[ "1703.05051" ]
1608.03983#32
SGDR: Stochastic Gradient Descent with Warm Restarts
# 7 ACKNOWLEDGMENTS This work was supported by the German Research Foundation (DFG), under the BrainLinksBrainTools Cluster of Excellence (grant number EXC 1086). We thank Gao Huang, Kilian Quirin Weinberger, Jost Tobias Springenberg, Mark Schmidt and three anonymous reviewers for their helpful comments and suggestions. We thank Robin Tibor Schirrmeister for providing his pipeline for the EEG experiments and helping to integrate SGDR. # REFERENCES Antoine Bordes, Léon Bottou, and Patrick Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. The Journal of Machine Learning Research, 10:1737–1754, 2009. Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surface of multilayer networks. arXiv preprint arXiv:1412.0233, 2014. Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933–2941, 2014. Yann N Dauphin, Harm de Vries, Junyoung Chung, and Yoshua Bengio. RMSProp and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015. L. Deng, G. Hinton, and B. Kingsbury. New types of deep neural network learning for speech recognition and related applications:
1608.03983#31
1608.03983#33
1608.03983
[ "1703.05051" ]
1608.03983#33
SGDR: Stochastic Gradient Descent with Warm Restarts
An overview. In Proc. of ICASSP'13, 2013. J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proc. of ICML'14, 2014. Reeves Fletcher and Colin M Reeves. Function minimization by conjugate gradients.
1608.03983#32
1608.03983#34
1608.03983
[ "1703.05051" ]
1608.03983#34
SGDR: Stochastic Gradient Descent with Warm Restarts
The Computer Journal, 7(2):149–154, 1964. Kenji Fukumizu and Shun-ichi Amari. Local minima and plateaus in hierarchical structures of multilayer perceptrons. Neural Networks, 13(3):317–327, 2000. Dongyoon Han, Jiwhan Kim, and Junmo Kim. Deep pyramidal residual networks. arXiv preprint arXiv:1610.02915, 2016. Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
1608.03983#33
1608.03983#35
1608.03983
[ "1703.05051" ]
1608.03983#35
SGDR: Stochastic Gradient Descent with Warm Restarts
Nikolaus Hansen. Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2389–2396. ACM, 2009. Nikolaus Hansen and Stefan Kern. Evaluating the CMA evolution strategy on multimodal test functions. In International Conference on Parallel Problem Solving from Nature, pp. 282–291. Springer, 2004. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger.
1608.03983#34
1608.03983#36
1608.03983
[ "1703.05051" ]
1608.03983#36
SGDR: Stochastic Gradient Descent with Warm Restarts
Snapshot ensembles: Train 1, get M for free. ICLR 2017 submission, 2016a. Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016b. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016c. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Proc. of NIPS'12, pp. 1097–1105, 2012a.
1608.03983#35
1608.03983#37
1608.03983
[ "1703.05051" ]
1608.03983#37
SGDR: Stochastic Gradient Descent with Warm Restarts
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012b. Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503–528, 1989. Ilya Loshchilov and Frank Hutter.
1608.03983#36
1608.03983#38
1608.03983
[ "1703.05051" ]
1608.03983#38
SGDR: Stochastic Gradient Descent with Warm Restarts
SGDR: Stochastic Gradient Descent with Restarts. arXiv preprint arXiv:1608.03983, 2016. Ilya Loshchilov, Marc Schoenauer, and Michele Sebag. Alternative restart strategies for CMA-ES. In International Conference on Parallel Problem Solving from Nature, pp. 296–305. Springer, 2012. Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pp. 372–376, 1983. Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.
1608.03983#37
1608.03983#39
1608.03983
[ "1703.05051" ]
1608.03983#39
SGDR: Stochastic Gradient Descent with Warm Restarts
Brendan O'Donoghue and Emmanuel Candes. Adaptive restart for accelerated gradient schemes. arXiv preprint arXiv:1204.3982, 2012. Hadi Pouransari and Saman Ghili. Tiny ImageNet visual recognition challenge. CS231 course at Stanford, 2015. Michael James David Powell. Restart procedures for the conjugate gradient method. Mathematical Programming, 12(1):241–254, 1977. Mike Preuss.
1608.03983#38
1608.03983#40
1608.03983
[ "1703.05051" ]
1608.03983#40
SGDR: Stochastic Gradient Descent with Warm Restarts
Niching the CMA-ES via nearest-better clustering. In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, pp. 1711–1718. ACM, 2010. Mike Preuss. Niching methods and multimodal optimization performance. In Multimodal Optimization by Means of Evolutionary Algorithms, pp. 115–137. Springer, 2015. Raymond Ros. Benchmarking the BFGS algorithm on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2409–2414. ACM, 2009.
1608.03983#39
1608.03983#41
1608.03983
[ "1703.05051" ]
1608.03983#41
SGDR: Stochastic Gradient Descent with Warm Restarts
Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball. Deep learning with convolutional neural networks for brain mapping and decoding of movement-related information from the human EEG. arXiv preprint arXiv:1703.05051, 2017. Leslie N Smith. No more pesky learning rate guessing games. arXiv preprint arXiv:1506.01186, 2015. Leslie N Smith. Cyclical learning rates for training neural networks. arXiv preprint arXiv:1506.01186v3, 2016. Tianbao Yang and Qihang Lin. Stochastic subgradient methods with linear convergence for polyhedral convex optimization. arXiv preprint arXiv:1510.01444, 2015. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. Matthew D Zeiler. AdaDelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012. K. Zhang, M. Sun, T. X. Han, X. Yuan, L. Guo, and T. Liu.
1608.03983#40
1608.03983#42
1608.03983
[ "1703.05051" ]
1608.03983#42
SGDR: Stochastic Gradient Descent with Warm Restarts
Residual Networks of Residual Networks: Multilevel Residual Networks. ArXiv e-prints, August 2016. # 8 SUPPLEMENTARY MATERIAL [Figure 6 plot: test error (%) vs. epochs on CIFAR-10 for the default schedule and SGDR.] Figure 6: The median results of 5 runs for the best learning rate settings considered for WRN-28-1. 50K VS 100K EXAMPLES PER EPOCH Our data augmentation procedure code is inherited from the Lasagne Recipe code for ResNets where flipped images are added to the training set. This doubles the number of training examples per epoch and thus might impact the results because hyperparameter values defined as a function of epoch index have a different meaning. While our experimental results given in Table 1 reproduced the results obtained by Zagoruyko & Komodakis (2016), here we test whether SGDR still makes sense for WRN-28-1 (i.e., ResNet with 28 layers) where one epoch corresponds to 50k training examples. We investigate different learning rate values for the default learning rate schedule (4 values out of [0.01, 0.025, 0.05, 0.1]) and SGDR (3 values out of [0.025, 0.05, 0.1]). In line with the results given in the main paper, Figure 6 suggests that SGDR is competitive in terms of anytime performance.
1608.03983#41
1608.03983#43
1608.03983
[ "1703.05051" ]
1608.03983#43
SGDR: Stochastic Gradient Descent with Warm Restarts
[Figure 7 panels: training cross-entropy + regularization loss, test cross-entropy loss, and test error (%) as functions of epochs for WRN-28-10 on CIFAR-10 and CIFAR-100; the legend includes the default schedule (lr = 0.1 and 0.05) and SGDR with several T0 and Tmult settings.]
1608.03983#42
1608.03983#44
1608.03983
[ "1703.05051" ]
1608.03983#44
SGDR: Stochastic Gradient Descent with Warm Restarts
Figure 7: Training cross-entropy + regularization loss (top row), test loss (middle row) and test error (bottom row) on CIFAR-10 (left column) and CIFAR-100 (right column). [Figure 8 plot: Top-5 test error (%) vs. epochs for the default schedule and SGDR at initial learning rates 0.050, 0.015 and 0.005.] Figure 8: Top-5 test errors obtained by SGD with momentum with the default learning rate schedule and SGDR with T0 = 1, Tmult = 2 on WRN-28-10 trained on a version of ImageNet, with all images from all 1000 classes downsampled to 32 × 32 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Three settings of the initial learning rate are considered: 0.050, 0.015 and 0.005. In contrast to the experiments described in the main paper, here the dataset is permuted only within 10 subgroups each formed from 100 classes, which makes good generalization much harder to achieve for both algorithms. An interpretation of the SGDR results given here might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to 0.
1608.03983#43
1608.03983#45
1608.03983
[ "1703.05051" ]
1608.03983#45
SGDR: Stochastic Gradient Descent with Warm Restarts
16
1608.03983#44
1608.03983
[ "1703.05051" ]
1607.07086#0
An Actor-Critic Algorithm for Sequence Prediction
arXiv:1607.07086v3 [cs.LG] 3 Mar 2017 Published as a conference paper at ICLR 2017 AN ACTOR-CRITIC ALGORITHM FOR SEQUENCE PREDICTION Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal (Université de Montréal); Ryan Lowe, Joelle Pineau† (McGill University); Aaron Courville† (Université de Montréal); Yoshua Bengio† (Université de Montréal) # ABSTRACT
1607.07086#1
1607.07086
[ "1512.02433" ]
1607.07086#1
An Actor-Critic Algorithm for Sequence Prediction
We present an approach to training neural networks to generate sequences using actor-critic methods from reinforcement learning (RL). Current log-likelihood training methods are limited by the discrepancy between their training and testing modes, as models must generate tokens conditioned on their previous guesses rather than the ground-truth tokens. We address this problem by introducing a critic network that is trained to predict the value of an output token, given the policy of an actor network. This results in a training procedure that is much closer to the test phase, and allows us to directly optimize for a task-specific score such as BLEU. Crucially, since we leverage these techniques in the supervised learning setting rather than the traditional RL setting, we condition the critic network on the ground-truth output. We show that our method leads to improved performance on both a synthetic task, and for German-English machine translation. Our analysis paves the way for such methods to be applied in natural language generation tasks, such as machine translation, caption generation, and dialogue modelling.
1607.07086#0
1607.07086#2
1607.07086
[ "1512.02433" ]
1607.07086#2
An Actor-Critic Algorithm for Sequence Prediction
# INTRODUCTION In many important applications of machine learning, the task is to develop a system that produces a sequence of discrete tokens given an input. Recent work has shown that recurrent neural networks (RNNs) can deliver excellent performance in many such tasks when trained to predict the next output token given the input and previous tokens. This approach has been applied successfully in machine translation (Sutskever et al., 2014; Bahdanau et al., 2015), caption generation (Kiros et al., 2014; Donahue et al., 2015; Vinyals et al., 2015; Xu et al., 2015; Karpathy & Fei-Fei, 2015), and speech recognition (Chorowski et al., 2015; Chan et al., 2015). The standard way to train RNNs to generate sequences is to maximize the log-likelihood of the "
1607.07086#1
1607.07086#3
1607.07086
[ "1512.02433" ]
1607.07086#3
An Actor-Critic Algorithm for Sequence Prediction
correct" token given a history of the previous "correct" ones, an approach often called teacher forcing. At evaluation time, the output sequence is often produced by an approximate search for the most likely candidate according to the learned distribution. During this search, the model is conditioned on its own guesses, which may be incorrect and thus lead to a compounding of errors (Bengio et al., 2015). This can become especially problematic for longer sequences. Due to this discrepancy between training and testing conditions, it has been shown that maximum likelihood training can be suboptimal (Bengio et al., 2015; Ranzato et al., 2015). In these works, the authors argue that the network should be trained to continue generating correctly given the outputs already produced by the model, rather than the ground-truth reference outputs from the data. This gives rise to the challenging problem of determining the target for the next network output. Bengio et al. (2015) use the token k from the ground-truth answer as the target for the network at step k, whereas Ranzato et al. (2015) rely on the REINFORCE algorithm (Williams, 1992) to decide whether or not the tokens
1607.07086#2
1607.07086#4
1607.07086
[ "1512.02433" ]
1607.07086#4
An Actor-Critic Algorithm for Sequence Prediction
# † CIFAR Senior Fellow / CIFAR Fellow from a sampled prediction lead to a high task-specific score, such as BLEU (Papineni et al., 2002) or ROUGE (Lin & Hovy, 2003). In this work, we propose and study an alternative procedure for training sequence prediction networks that aims to directly improve their test time metrics (which are typically not the log-likelihood). In particular, we train an additional network called the critic to output the value of each token, which we define as the expected task-specific score that the network will receive if it outputs the token and continues to sample outputs according to its probability distribution. Furthermore, we show how the predicted values can be used to train the main sequence prediction network, which we refer to as the actor. The theoretical foundation of our method is that, under the assumption that the critic computes exact values, the expression that we use to train the actor is an unbiased estimate of the gradient of the expected task-specific
1607.07086#3
1607.07086#5
1607.07086
[ "1512.02433" ]
1607.07086#5
An Actor-Critic Algorithm for Sequence Prediction
score. Our approach draws inspiration and borrows the terminology from the field of reinforcement learning (RL) (Sutton & Barto, 1998), in particular from the actor-critic approach (Sutton, 1984; Sutton et al., 1999; Barto et al., 1983). RL studies the problem of acting efficiently based only on weak supervision in the form of a reward given for some of the agent's actions. In our case, the reward is analogous to the task-specific score associated with a prediction. However, the tasks we consider are those of supervised learning, and we make use of this crucial difference by allowing the critic to use the ground-truth answer as an input. In other words, the critic has access to a sequence of expert actions that are known to lead to high (or even optimal) returns. To train the critic, we adapt the temporal difference methods from the RL literature (Sutton, 1988) to our setup. While RL methods with non-linear function approximators are not new (Tesauro, 1994; Miller et al., 1995), they have recently surged in popularity, giving rise to the
1607.07086#4
1607.07086#6
1607.07086
[ "1512.02433" ]
1607.07086#6
An Actor-Critic Algorithm for Sequence Prediction
field of "deep RL" (Mnih et al., 2015). We show that some of the techniques recently developed in deep RL, such as having a target network, may also be beneficial for sequence prediction. The contributions of the paper can be summarized as follows: 1) we describe how RL methodology like the actor-critic approach can be applied to supervised learning problems with structured outputs; and 2) we investigate the performance and behavior of the new method on both a synthetic task and a real-world task of machine translation, demonstrating the improvements over maximum-likelihood and REINFORCE brought by the actor-critic training.
1607.07086#5
1607.07086#7
1607.07086
[ "1512.02433" ]
1607.07086#7
An Actor-Critic Algorithm for Sequence Prediction
# 2 BACKGROUND We consider the problem of learning to produce an output sequence Y = (y_1, ..., y_T), y_t ∈ A, given an input X, where A is the alphabet of output tokens. We will often use the notation Y_{f...l} to refer to subsequences of the form (y_f, ..., y_l). Two sets of input-output pairs (X, Y) are assumed to be available for both training and testing. The trained predictor h is evaluated by computing the average task-specific score R(Ŷ, Y) on the test set, where Ŷ = h(X) is the prediction. To simplify the formulas we always use T to denote the length of an output sequence, ignoring the fact that the output sequences may have different lengths. Recurrent neural networks A recurrent neural network (RNN) produces a sequence of state vectors (s_1, ..., s_T) given a sequence of input vectors (e_1, ..., e_T) by starting from an initial state s_0 and applying T times the transition function f: s_t = f(s_{t−1}, e_t).
1607.07086#6
1607.07086#8
1607.07086
[ "1512.02433" ]
1607.07086#8
An Actor-Critic Algorithm for Sequence Prediction
Popular choices for the mapping f are the Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) and the Gated Recurrent Units (Cho et al., 2014), the latter of which we use for our models. To build a probabilistic model for sequence generation with an RNN, one adds a stochastic output layer g (typically a softmax for discrete outputs) that generates outputs y_t ∈ A and can feed these outputs back by replacing them with their embedding e(y_t): $y_t \sim g(s_{t-1}), \quad (1)$ $s_t = f(s_{t-1}, e(y_t)). \quad (2)$ Thus, the RNN defines a probability distribution p(y_t | y_1, ..., y_{t−1}) of the next output token y_t given the previous tokens (y_1, ..., y_{t−1}).
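To make equations (1) and (2) concrete, the NumPy sketch below samples a sequence with a plain tanh RNN cell standing in for the transition f and a softmax layer for the stochastic output g; all weights, dimensions and names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim, hid_dim = 20, 8, 16

# Illustrative parameters: embedding table, transition f, output layer g.
E = rng.normal(scale=0.1, size=(vocab_size, emb_dim))
W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
W_e = rng.normal(scale=0.1, size=(emb_dim, hid_dim))
W_o = rng.normal(scale=0.1, size=(hid_dim, vocab_size))

def g(s):
    """Stochastic output layer: softmax over the vocabulary given the state."""
    logits = s @ W_o
    p = np.exp(logits - logits.max())
    return p / p.sum()

def f(s, e):
    """Transition function: plain tanh RNN cell (stand-in for an LSTM/GRU)."""
    return np.tanh(s @ W_h + e @ W_e)

def sample_sequence(T=10):
    s = np.zeros(hid_dim)                     # initial state s_0
    tokens = []
    for _ in range(T):
        y = rng.choice(vocab_size, p=g(s))    # y_t ~ g(s_{t-1})          (eq. 1)
        s = f(s, E[y])                        # s_t = f(s_{t-1}, e(y_t))  (eq. 2)
        tokens.append(int(y))
    return tokens

print(sample_sequence())
```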
1607.07086#7
1607.07086#9
1607.07086
[ "1512.02433" ]
1607.07086#9
An Actor-Critic Algorithm for Sequence Prediction
Upon adding a special end-of-sequence token ∅ to the alphabet A, the RNN can define the distribution p(Y) over all possible sequences as p(Y) = p(y_1) p(y_2|y_1) ... p(y_T|y_1, ..., y_{T−1}) p(∅|y_1, ..., y_T). RNNs for sequence prediction To use RNNs for sequence prediction, they must be augmented to generate Y conditioned on an input X. The simplest way to do this is to start with an initial state s_0 = s_0(X) (Sutskever et al., 2014; Cho et al., 2014). Alternatively, one can encode X as a variable-length sequence of vectors (h_1, ..., h_L) and condition the RNN on this sequence using an attention mechanism. In our models, the sequence of vectors is produced by either a bidirectional RNN (Schuster & Paliwal, 1997) or a convolutional encoder (Rush et al., 2015). We use a soft attention mechanism (Bahdanau et al., 2015) that computes a weighted sum of a sequence of vectors. The attention weights determine the relative importance of each vector. More formally, we consider the following equations for RNNs with attention:
1607.07086#8
1607.07086#10
1607.07086
[ "1512.02433" ]
1607.07086#10
An Actor-Critic Algorithm for Sequence Prediction
$y_t \sim g(s_{t-1}, c_{t-1}), \quad (3)$ $s_t = f(s_{t-1}, c_{t-1}, e(y_t)), \quad (4)$ $\alpha_t = \beta(s_t, (h_1, \dots, h_L)), \quad (5)$ $c_t = \sum_{j=1}^{L} \alpha_{t,j}\, h_j, \quad (6)$ where β is the attention mechanism that produces the attention weights α_t and c_t is the context vector, or "glimpse", for time step t.
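The following NumPy sketch illustrates equations (5) and (6): an MLP scores each encoder vector against the current decoder state, the scores are normalized with a softmax to give α_t, and the context c_t is the weighted sum. The weight matrices and dimensions are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dec_dim, enc_dim, att_dim, L = 16, 12, 10, 5

W_s = rng.normal(scale=0.1, size=(dec_dim, att_dim))
W_h = rng.normal(scale=0.1, size=(enc_dim, att_dim))
v = rng.normal(scale=0.1, size=(att_dim,))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(s_t, H):
    """Soft attention: alpha_t = beta(s_t, H) (eq. 5), c_t = sum_j alpha_tj h_j (eq. 6)."""
    scores = np.array([v @ np.tanh(s_t @ W_s + h_j @ W_h) for h_j in H])
    alpha_t = softmax(scores)      # positive weights that sum to 1
    c_t = alpha_t @ H              # weighted sum of encoder vectors
    return alpha_t, c_t

H = rng.normal(size=(L, enc_dim))  # encoder vectors (h_1, ..., h_L)
s_t = rng.normal(size=(dec_dim,))  # current decoder state
alpha_t, c_t = attend(s_t, H)
```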
1607.07086#9
1607.07086#11
1607.07086
[ "1512.02433" ]
1607.07086#11
An Actor-Critic Algorithm for Sequence Prediction
The attention weights are computed by an MLP that takes as input the current RNN state and each individual vector to focus on. The weights are typically (as in our work) constrained to be positive and sum to 1 by using the softmax function. A conditioned RNN can be trained for sequence prediction by gradient ascent on the log-likelihood log p(Y|X) for the input-output pairs (X, Y) from the training set. To produce a prediction Ŷ for a test input sequence X, an approximate beam search for the maximum of p(·|X) is usually conducted. During this search the probabilities p(·|ŷ_1, ..., ŷ_{t−1}) are considered, where the previous tokens ŷ_1, ..., ŷ_{t−1} comprise a candidate beginning of the prediction Ŷ. Value functions We view the conditioned RNN as a stochastic policy that generates actions and receives the task score (e.g., BLEU score) as the return. We furthermore consider the case when the return R is partially received at the intermediate steps in the form of rewards r_t: $R(\hat{Y}, Y) = \sum_{t=1}^{T} r_t(\hat{y}_t; \hat{Y}_{1\dots t-1}, Y)$. This is more general than the case of receiving the full return at the end of the sequence, as we can simply define all rewards other than r_T to be zero. Receiving intermediate rewards may ease the learning for the critic, and we use reward shaping as explained in Section 3. Given the policy, possible actions and reward function, the value represents the expected future return as a function of the current state of the system, which in our case is uniquely defined by the sequence of actions taken so far, Ŷ_{1...t}. We define the value of an unfinished prediction Ŷ_{1...t}
1607.07086#10
1607.07086#12
1607.07086
[ "1512.02433" ]
1607.07086#12
An Actor-Critic Algorithm for Sequence Prediction
as follows: $V(\hat{Y}_{1\dots t}; X, Y) = \mathbb{E}_{\hat{Y}_{t+1\dots T} \sim p(\cdot|\hat{Y}_{1\dots t}, X)} \sum_{\tau=t+1}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1\dots \tau-1}, Y).$ We define the value of a candidate next token a for an unfinished prediction Ŷ_{1...t−1} as the expected future return after generating token a: $Q(a; \hat{Y}_{1\dots t-1}, X, Y) = \mathbb{E}_{\hat{Y}_{t+1\dots T} \sim p(\cdot|\hat{Y}_{1\dots t-1}a, X)} \left( r_t(a; \hat{Y}_{1\dots t-1}, Y) + \sum_{\tau=t+1}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1\dots t-1} a \hat{Y}_{t+1\dots \tau-1}, Y) \right).$
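As an illustration of these definitions, the sketch below estimates V and Q by Monte Carlo rollouts. The callables sample_continuation and reward are hypothetical stand-ins for the conditioned RNN policy and the per-step reward r_t; they are not part of the paper's code.

```python
import numpy as np

def mc_value(prefix, sample_continuation, reward, n_rollouts=100):
    """Monte Carlo estimate of V(prefix): expected sum of future rewards
    when the policy completes the given prefix."""
    returns = []
    for _ in range(n_rollouts):
        completion = sample_continuation(prefix)      # tokens generated after the prefix
        seq, total = list(prefix), 0.0
        for tok in completion:
            total += reward(tok, seq)                 # r(token; prefix so far)
            seq.append(tok)
        returns.append(total)
    return float(np.mean(returns))

def mc_q(prefix, action, sample_continuation, reward, n_rollouts=100):
    """Monte Carlo estimate of Q(action; prefix): immediate reward for taking
    `action` plus the estimated value of the extended prefix."""
    return reward(action, list(prefix)) + mc_value(
        list(prefix) + [action], sample_continuation, reward, n_rollouts)
```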
1607.07086#11
1607.07086#13
1607.07086
[ "1512.02433" ]
1607.07086#13
An Actor-Critic Algorithm for Sequence Prediction
We will refer to the candidate next tokens as actions. For notational simplicity, we henceforth drop X and Y from the signature of p, V, Q, R and r_t, assuming it is clear from the context which of X and Y is meant. We will also use V without arguments for the expected reward of a random prediction. Algorithm 1 Actor-Critic Training for Sequence Prediction. Require: a critic Q̂(a; Ŷ_{1...t}, Y) and an actor p(a|Ŷ_{1...t}, X) with weights φ and θ respectively. 1: Initialize the delayed actor p′ and the target critic Q̂′ with the same weights: θ′ = θ, φ′ = φ. 2: while Not Converged do 3:
1607.07086#12
1607.07086#14
1607.07086
[ "1512.02433" ]
1607.07086#14
An Actor-Critic Algorithm for Sequence Prediction
Receive a random example (X, Y). 4: Generate a sequence of actions Ŷ from the delayed actor p′. 5: Compute targets for the critic: $q_t = r_t(\hat{y}_t; \hat{Y}_{1\dots t-1}, Y) + \sum_{a \in A} p'(a|\hat{Y}_{1\dots t}, X)\, \hat{Q}'(a; \hat{Y}_{1\dots t}, Y)$. 6: Update the critic weights φ using the gradient $\frac{d}{d\phi}\left( \sum_{t=1}^{T} \left( \hat{Q}(\hat{y}_t; \hat{Y}_{1\dots t-1}, Y) - q_t \right)^2 + \lambda\, C_t \right)$, where $C_t = \sum_{a}\left( \hat{Q}(a; \hat{Y}_{1\dots t}) - \frac{1}{|A|} \sum_{b} \hat{Q}(b; \hat{Y}_{1\dots t}) \right)^2$. 7:
1607.07086#13
1607.07086#15
1607.07086
[ "1512.02433" ]
1607.07086#15
An Actor-Critic Algorithm for Sequence Prediction
Update the actor weights θ using the following gradient estimate: $\sum_{t=1}^{T} \sum_{a \in A} \frac{dp(a|\hat{Y}_{1\dots t-1}, X)}{d\theta}\, \hat{Q}(a; \hat{Y}_{1\dots t-1}, Y) + \lambda_{LL} \sum_{t=1}^{T} \frac{d \log p(y_t|Y_{1\dots t-1}, X)}{d\theta}$. 8: Update the delayed actor and the target critic, with constants γ_θ ≪ 1, γ_φ ≪ 1: θ′ = γ_θ θ + (1 − γ_θ) θ′, φ′ = γ_φ φ + (1 − γ_φ) φ′. 9: end while Algorithm 2 Complete Actor-Critic Algorithm for Sequence Prediction. 1: Initialize the critic Q̂(a; Ŷ_{1...t}, Y) and the actor p(a|Ŷ_{1...t}, X) with random weights φ and θ respectively. 2:
1607.07086#14
1607.07086#16
1607.07086
[ "1512.02433" ]
1607.07086#16
An Actor-Critic Algorithm for Sequence Prediction
Pre-train the actor to predict y_{t+1} given Y_{1...t} by maximizing log p(y_{t+1}|Y_{1...t}, X). 3: Pre-train the critic to estimate Q by running Algorithm 1 with a fixed actor. 4: Run Algorithm 1. # 3 ACTOR-CRITIC FOR SEQUENCE PREDICTION Let θ be the parameters of the conditioned RNN, which we will also refer to as the actor. Our training algorithm is based on the following way of rewriting the gradient of the expected return dV/dθ:
1607.07086#15
1607.07086#17
1607.07086
[ "1512.02433" ]
1607.07086#17
An Actor-Critic Algorithm for Sequence Prediction
$\frac{dV}{d\theta} = \mathbb{E}_{\hat{Y} \sim p(\hat{Y}|X)} \sum_{t=1}^{T} \sum_{a \in A} \frac{dp(a|\hat{Y}_{1\dots t-1}, X)}{d\theta}\, Q(a; \hat{Y}_{1\dots t-1}). \quad (7)$ This equality is known in RL under the names policy gradient theorem (Sutton et al., 1999) and stochastic actor-critic (Sutton, 1984).¹ Note that we use the probability rather than the log probability in this formula (which is more typical in RL applications) as we are summing over actions rather than taking an expectation. Intuitively, this equality corresponds to increasing the probability of actions that give high values, and decreasing the probability of actions that give low values. Since this gradient expression is an expectation, it is trivial to build an unbiased estimate for it:
1607.07086#16
1607.07086#18
1607.07086
[ "1512.02433" ]
1607.07086#18
An Actor-Critic Algorithm for Sequence Prediction
$\frac{dV}{d\theta} \approx \sum_{k=1}^{M} \sum_{t=1}^{T} \sum_{a \in A} \frac{dp(a|\hat{Y}^k_{1\dots t-1})}{d\theta}\, Q(a; \hat{Y}^k_{1\dots t-1}), \quad (8)$ where Ŷ^k are M random samples from p(Ŷ). By replacing Q with a parametric estimate Q̂ one can obtain a biased estimate with relatively low variance. The parametric estimate Q̂ is called the critic. The above formula is similar in spirit to the REINFORCE learning rule that Ranzato et al. (2015) use in the same context: $\frac{dV}{d\theta} \approx \sum_{t=1}^{T} \frac{d \log p(\hat{y}_t|\hat{Y}_{1\dots t-1})}{d\theta} \left[ \sum_{\tau=t}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1\dots \tau-1}) - b_t(X) \right], \quad (9)$ where the scalar b_t(X) is called the baseline or control variate. The difference is that in REINFORCE the inner sum over all actions is replaced by its 1-sample estimate $\frac{d \log p(\hat{y}_t|\hat{Y}_{1\dots t-1})}{d\theta}\, Q(\hat{y}_t; \hat{Y}_{1\dots t-1})$, where the log probability derivative $\frac{d \log p(\hat{y}_t|\cdot)}{d\theta} = \frac{1}{p(\hat{y}_t|\cdot)} \frac{dp(\hat{y}_t|\cdot)}{d\theta}$ is introduced to correct for the sampling of ŷ_t. Furthermore, instead of the value Q(ŷ_t; Ŷ_{1...t−1}), REINFORCE uses the cumulative reward $\sum_{\tau=t}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1\dots \tau-1})$ following the action ŷ_t, which again can be seen as a 1-sample estimate of Q. Due to these simplifications and the potential high variance in the cumulative reward, the REINFORCE gradient estimator has very high variance. In order to improve upon it, we consider the actor-critic estimate from Equation (8), which has a lower variance at the cost of significant bias, since the critic is not perfect and trained simultaneously with the actor. The success depends on our ability to control the bias by designing the critic network and using an appropriate training criterion for it. To implement the critic, we propose to use a separate RNN parameterized by φ.
1607.07086#17
1607.07086#19
1607.07086
[ "1512.02433" ]
1607.07086#19
An Actor-Critic Algorithm for Sequence Prediction
The critic RNN is run in parallel with the actor, consumes the tokens ŷ_t that the actor outputs and produces the estimates Q̂(a; Ŷ_{1...t}) for all a ∈ A. A key difference between the critic and the actor is that the correct answer Y is given to the critic as an input, similarly to how the actor is conditioned on X. Indeed, the return R(Ŷ, Y) is a deterministic function of Y, and we argue that using Y to compute Q̂ should be of great help. We can do this because the values are only required during training and we do not use the critic at test time. We also experimented with providing the actor states s_t as additional inputs to the critic. See Figure 1 for a visual representation of our actor-critic architecture. Temporal-difference learning A crucial component of our approach is policy evaluation, that is, the training of the critic to produce useful estimates of Q. With a naive Monte-Carlo method, one could use the future return $\sum_{\tau=t}^{T} r_\tau(\hat{y}_\tau; \hat{Y}_{1\dots \tau-1})$ as a target for $\hat{Q}(\hat{y}_t; \hat{Y}_{1\dots t-1})$ and use the critic parameters φ to minimize the squared error between these two values. However, as with REINFORCE, using such a target yields very high variance, which quickly grows with the number of steps T. We use a temporal difference (TD) method for policy evaluation (Sutton, 1988). Namely, we use the right-hand side $q_t = r_t(\hat{y}_t; \hat{Y}_{1\dots t-1}) + \sum_{a \in A} p(a|\hat{Y}_{1\dots t})\, \hat{Q}(a; \hat{Y}_{1\dots t})$ of the Bellman equation as the target for the left-hand side $\hat{Q}(\hat{y}_t; \hat{Y}_{1\dots t-1})$. ¹ We also provide a simple self-contained proof of Equation (7) in the Supplementary Material.
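A minimal sketch of the TD target just described, assuming NumPy arrays for the actor's next-token distribution and the critic's value predictions; the variable names are my own and the arrays are toy values.

```python
import numpy as np

def td_target(r_t, next_token_probs, next_q_values):
    """Bellman-style target: q_t = r_t + sum_a p(a | Y_hat_{1..t}) * Q_hat(a; Y_hat_{1..t}).

    next_token_probs: shape (|A|,), actor probabilities for the next token.
    next_q_values:    shape (|A|,), critic (or target critic) values for the next step.
    """
    return r_t + float(np.dot(next_token_probs, next_q_values))

# Toy usage with a three-token alphabet.
q_t = td_target(r_t=0.5,
                next_token_probs=np.array([0.7, 0.2, 0.1]),
                next_q_values=np.array([1.0, 0.4, -0.2]))
```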
1607.07086#18
1607.07086#20
1607.07086
[ "1512.02433" ]
1607.07086#20
An Actor-Critic Algorithm for Sequence Prediction
[Figure 1 diagram: the actor encoder-decoder maps x_1, ..., x_L to samples ŷ_1, ..., ŷ_T; the critic encoder-decoder reads y_1, ..., y_T together with the actor's predictions (and optionally actor states) and outputs Q_1, Q_2, ..., Q_T.] Figure 1: Both the actor and the critic are encoder-decoder networks. The actor receives an input sequence X and produces samples Ŷ which are evaluated by the critic. The critic takes in the ground-truth sequence Y as input to the encoder, and takes the input summary (calculated using an attention mechanism) and the actor's prediction ŷ_t as input at time step t of the decoder. The values Q_1, Q_2, ..., Q_T computed by the critic are used to approximate the gradient of the expected returns with respect to the parameters of the actor. This gradient is used to train the actor to optimize these expected task-specific returns (e.g., BLEU score). The critic may also receive the hidden state activations of the actor as input. Applying deep RL techniques It has been shown in the RL literature that if Q̂ is non-linear (like in our case), TD policy evaluation might diverge (Tsitsiklis & Van Roy, 1997). Previous work has shown that this problem can be alleviated by using an additional target network Q̂′ to compute q_t, which is updated less often and/or more slowly than Q̂. Similarly to (Lillicrap et al., 2015), we update the parameters φ′ of the target critic by linearly interpolating them with the parameters of the trained one. Attempts to remove the target network by propagating the gradient through q_t resulted in a lower squared error $(\hat{Q}(\hat{y}_t; \hat{Y}_{1\dots t-1}) - q_t)^2$, but the resulting Q̂ values proved very unreliable as training signals for the actor. The fact that both actor and critic use outputs of each other for training creates a potentially dangerous feedback loop. To address this, we sample predictions from a delayed actor (Lillicrap et al., 2015), whose weights are slowly updated to follow the actor that is actually trained. Dealing with large action spaces One of the challenges of our work is that the action space is very large (as is typically the case in NLP tasks with large vocabularies).
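The linear-interpolation update of the target critic (and the analogous delayed actor) can be sketched as follows. Here gamma plays the role of the constants γ_φ and γ_θ ≪ 1 from Algorithm 1, and the dict-of-arrays representation of parameters is an assumption for illustration.

```python
def soft_update(target_params, online_params, gamma=0.001):
    """Move the target network a small step toward the trained network:
    target <- gamma * online + (1 - gamma) * target."""
    for name, online_value in online_params.items():
        target_params[name] = gamma * online_value + (1.0 - gamma) * target_params[name]
    return target_params
```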
1607.07086#19
1607.07086#21
1607.07086
[ "1512.02433" ]
1607.07086#21
An Actor-Critic Algorithm for Sequence Prediction
This can be alleviated by putting constraints on the critic values for actions that are rarely sampled. We found experimentally that shrinking the values of these rare actions is necessary for the algorithm to converge. Specifically, we add a term C_t for every step t to the critic's optimization objective which drives all value predictions of the critic closer to their mean: $C_t = \sum_{a} \left( \hat{Q}(a; \hat{Y}_{1\dots t}) - \frac{1}{|A|} \sum_{b} \hat{Q}(b; \hat{Y}_{1\dots t}) \right)^2. \quad (10)$ This corresponds to penalizing the variance of the outputs of the critic. Without this penalty the values of rare actions can be severely overestimated, which biases the gradient estimates and can cause divergence. A similar trick was used in the context of learning simple algorithms with Q-learning (Zaremba et al., 2015). Reward shaping While we are ultimately interested in the maximization of the score of a complete prediction, simply awarding this score at the last step provides a very sparse training signal for the critic. For this reason we use potential-based reward shaping with potentials Φ(Ŷ_{1...t}) = R(Ŷ_{1...t}) for incomplete sequences and Φ(Ŷ) = 0 for complete ones (Ng et al., 1999). Namely, for a predicted sequence Ŷ we compute score values for all prefixes to obtain the sequence of scores (R(Ŷ_{1...1}), R(Ŷ_{1...2}), ..., R(Ŷ_{1...T})). The difference between the consecutive pairs of scores is then used as the reward at each step: r_t(ŷ_t; Ŷ_{1...t−1}) = R(Ŷ_{1...t}) − R(Ŷ_{1...t−1}).
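A sketch of the variance penalty of equation (10), computed from the critic's value predictions at a single step (a NumPy array over the alphabet); this is an illustration, not the authors' implementation.

```python
import numpy as np

def value_variance_penalty(q_values):
    """C_t = sum_a (Q_hat(a) - mean_b Q_hat(b))^2, i.e. |A| times the (biased)
    variance of the critic's outputs at this step."""
    return float(np.sum((q_values - q_values.mean()) ** 2))

# The penalty is added to the critic loss with a small coefficient (the lambda in the text).
penalty = value_variance_penalty(np.array([0.9, 0.1, -0.3, 2.5]))
```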
1607.07086#20
1607.07086#22
1607.07086
[ "1512.02433" ]
1607.07086#22
An Actor-Critic Algorithm for Sequence Prediction
Using the shaped reward r_t instead of awarding the whole score R at the last step does not change the optimal policy (Ng et al., 1999). Putting it all together Algorithm 1 describes the proposed method in detail. We consider adding the weighted log-likelihood gradient to the actor's gradient estimate. This is in line with the prior work by (Ranzato et al., 2015) and (Shen et al., 2015). It is also motivated by our preliminary experiments that showed that using the actor-critic estimate alone can lead to an early determinization of the policy and vanishing gradients (also discussed in Section 6). Starting training with a randomly initialized actor and critic would be problematic, because neither the actor nor the critic would provide adequate training signals for one another. The actor would sample completely random predictions that receive very little reward, thus providing a very weak training signal for the critic. A random critic would be similarly useless for training the actor. Motivated by these considerations, we pre-train the actor using standard log-likelihood training. Furthermore, we pre-train the critic by feeding it samples from the pre-trained actor, while the actor's
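The reward-shaping scheme described above can be sketched as follows; `score` stands for the task metric R evaluated on a prefix (for example, smoothed sentence BLEU against the reference) and is a hypothetical callable, not part of the paper's code.

```python
def shaped_rewards(prediction, score):
    """Per-step rewards r_t = R(Y_hat_{1..t}) - R(Y_hat_{1..t-1}),
    so that the rewards sum to the score of the full prediction."""
    rewards, previous = [], 0.0
    for t in range(1, len(prediction) + 1):
        current = score(prediction[:t])
        rewards.append(current - previous)
        previous = current
    return rewards
```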
1607.07086#21
1607.07086#23
1607.07086
[ "1512.02433" ]
1607.07086#23
An Actor-Critic Algorithm for Sequence Prediction
parameters are frozen. The complete training procedure including pre-training is described by Algorithm 2. # 4 RELATED WORK In other recent RL-inspired work on sequence prediction, Ranzato et al. (2015) trained a translation model by gradually transitioning from maximum likelihood learning into optimizing BLEU or ROUGE scores using the REINFORCE algorithm. However, REINFORCE is known to have very high variance and does not exploit the availability of the ground-truth like the critic network does. The approach also relies on a curriculum learning scheme. Standard value-based RL algorithms like SARSA and OLPOMDP have also been applied to structured prediction (Maes et al., 2009). Again, these systems do not use the ground-truth for value prediction. Imitation learning has also been applied to structured prediction (Vlachos, 2012). Methods of this type include the SEARN (Daumé III et al., 2009) and DAGGER (Ross et al., 2010) algorithms. These methods rely on an expert policy to provide action sequences that the policy learns to imitate.
1607.07086#22
1607.07086#24
1607.07086
[ "1512.02433" ]
1607.07086#24
An Actor-Critic Algorithm for Sequence Prediction
Unfortunately, it's not always easy or even possible to construct an expert policy for a task-specific score. In our approach, the critic plays a role that is similar to the expert policy, but is learned without requiring prior knowledge about the task-specific score. The recently proposed "scheduled sampling" (Bengio et al., 2015) can also be seen as imitation learning. In this method, ground-truth tokens are occasionally replaced by samples from the model itself during training. A limitation is that the token k for the ground-truth answer is used as the target at step k, which might not always be the optimal strategy. There are also approaches that aim to approximate the gradient of the expected score. One such approach is "Direct Loss Minimization" (Hazan et al., 2010) in which the inference procedure is adapted to take both the model likelihood and task-specific score into account. Another popular approach is to replace the domain over which the task score expectation is defined with a small subset of it, as is done in Minimum (Bayes) Risk Training (Goel & Byrne, 2000; Shen et al., 2015; Och, 2003). This small subset is typically an n-best list or a sample (like in REINFORCE) that may or may not include the ground-truth as well. None of these methods provide intermediate targets for the actor during training, and Shen et al. (2015) report that as many as 100 samples were required for the best results. Another recently proposed method is to optimize a global sequence cost with respect to the selection and pruning behavior of the beam search procedure itself (Wiseman & Rush, 2016). This method follows the more general strategy called "learning as search optimization" (Daumé III & Marcu, 2005). This is an interesting alternative to our approach; however, it is designed specifically for the precise inference procedure involved. # 5 EXPERIMENTS To validate our approach, we performed two sets of experiments.² First, we trained the proposed model to recover strings of natural text from their corrupted versions.
1607.07086#23
1607.07086#25
1607.07086
[ "1512.02433" ]
1607.07086#25
An Actor-Critic Algorithm for Sequence Prediction
Specifically, we consider each character in a natural language corpus and with some probability replace it with a random character. We call this synthetic task spelling correction. A desirable property of this synthetic task is that data is essentially infinite and overfitting is no concern. Our second series of experiments is done on the task of automatic machine translation using different models and datasets. ² The source code is available at https://github.com/rizar/actor-critic-public In addition to maximum likelihood and actor-critic training we implemented two versions of the REINFORCE gradient estimator.
1607.07086#24
1607.07086#26
1607.07086
[ "1512.02433" ]
1607.07086#26
An Actor-Critic Algorithm for Sequence Prediction
In the first version, we use a linear baseline network that takes the actor states as input, exactly as in (Ranzato et al., 2015). We also propose a novel extension of REINFORCE that leverages the extra information available in the ground-truth output Y. Specifically, we use the Q̂ estimates produced by the critic network as the baseline for the REINFORCE algorithm. The motivation behind this approach is that using the ground-truth output should produce a better baseline that lowers the variance of REINFORCE, resulting in higher task-specific scores. We refer to this method as REINFORCE-critic. 5.1 SPELLING CORRECTION We use text from the One Billion Word dataset for the spelling correction task (Chelba et al., 2013), which has pre-defined training and testing sets. The training data was abundant, and we never used any example twice. We evaluate trained models on a section of the test data that comprises 6075 sentences. To speed up experiments, we clipped all sentences to the first 10 or 30 characters. For the spelling correction actor network, we use an RNN with 100 Gated Recurrent Units (GRU) and a bidirectional GRU network for the encoder. We use the same attention mechanism as proposed in (Bahdanau et al., 2015), which effectively makes our actor network a smaller version of the model used in that work. For the critic network, we employed a model with the same architecture as the actor. We use character error rate (CER) to measure performance on the spelling task, which we define as the ratio between the sum of Levenshtein distances between predictions and ground-truth outputs and the total length of the ground-truth outputs. This is a corpus-level metric for which a lower value is better. We use it as the return by negating per-sentence ratios.
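A corpus-level CER computation matching this definition might look as follows; the Levenshtein routine is a standard dynamic-programming implementation and the function names are mine, not taken from the paper's code.

```python
def levenshtein(a, b):
    """Edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def corpus_cer(predictions, references):
    """Total edit distance divided by the total reference length (lower is better)."""
    total_dist = sum(levenshtein(p, r) for p, r in zip(predictions, references))
    total_len = sum(len(r) for r in references)
    return total_dist / total_len
```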
1607.07086#25
1607.07086#27
1607.07086
[ "1512.02433" ]
1607.07086#27
An Actor-Critic Algorithm for Sequence Prediction
At evaluation time, greedy search is used to extract predictions from the model. We use the ADAM optimizer (Kingma & Ba, 2015) to train all the networks with the parameters recommended in the original paper, with the exception of the scale parameter α. The latter is first set to 10^−3 and then annealed to 10^−4 for log-likelihood training. For the pre-training stage of the actor-critic, we use α = 10^−3 and decrease it to 10^−4 for the joint actor-critic training. We pretrain the actor until its score on the development set stops improving. We pretrain the critic until its TD error stabilizes.³ We used M = 1 sample for both actor-critic and REINFORCE. For exact hyperparameter settings we refer the reader to Appendix A. We start REINFORCE training from a pretrained actor, but we do not use the curriculum learning employed in MIXER. The critic is trained in the same way for both REINFORCE and actor-critic, including the pretraining stage. We report results obtained with the reward shaping described in Section 3, as we found that it slightly improves REINFORCE performance. Table 1 presents our results on the spelling correction task. We observe an improvement in CER over log-likelihood training for all four settings considered. Without simultaneous log-likelihood training, actor-critic training results in a better CER than REINFORCE-critic in three
1607.07086#26
1607.07086#28
1607.07086
[ "1512.02433" ]
1607.07086#28
An Actor-Critic Algorithm for Sequence Prediction
[Figure 2 plot: BLEU vs. epochs, with training and validation curves for LL, LL*, AC, RF and RF-C.] Figure 2: Progress of log-likelihood (LL), REINFORCE (RF) and actor-critic (AC) training in terms of BLEU score on the training (train) and validation (valid) datasets. LL* stands for the annealing phase of log-likelihood training. The curves start from the epoch of log-likelihood pretraining from which the parameters were initialized. ³ A typical behaviour for the TD error was to grow at first and then start decreasing slowly. We found that stopping pretraining shortly after the TD error stops growing leads to good results.
1607.07086#27
1607.07086#29
1607.07086
[ "1512.02433" ]
1607.07086#29
An Actor-Critic Algorithm for Sequence Prediction
Table 1: Character error rate of different methods on the spelling correction task. In the table, L is the length of input strings and η is the probability of replacing a character with a random one. LL stands for log-likelihood training, AC and RF-C for the actor-critic and the REINFORCE-critic respectively, and AC+LL and RF-C+LL for the combinations of AC and RF-C with LL.
Setting | LL | AC | RF-C | AC+LL | RF-C+LL
L = 10, η = 0.3 | 17.81 | 17.24 | 17.82 | 16.65 | 16.97
L = 30, η = 0.3 | 18.4 | 17.31 | 18.16 | 17.1 | 17.47
L = 10, η = 0.5 | 38.12 | 35.89 | 35.84 | 34.6 | 35
L = 30, η = 0.5 | 40.87 | 37.0 | 37.6 | 36.36 | 36.6
Table 2: Our IWSLT 2014 machine translation results with a convolutional encoder compared to the previous work by Ranzato et al.
1607.07086#28
1607.07086#30
1607.07086
[ "1512.02433" ]
1607.07086#30
An Actor-Critic Algorithm for Sequence Prediction
Please see Table 1 for an explanation of abbreviations. The asterisk identifies results from (Ranzato et al., 2015). The numbers reported with ≤ were approximately read from Figure 6 of (Ranzato et al., 2015).
Decoding method | LL* | MIXER* | RF | RF-C | AC
greedy search | 17.74 | 20.73 | 20.92 | 22.24 | 21.66
beam search | ≤ 20.3 | ≤ 21.9 | 21.35 | 22.58 | 22.45
out of four settings. In the fourth case, actor-critic and REINFORCE-critic have similar performance.
1607.07086#29
1607.07086#31
1607.07086
[ "1512.02433" ]
1607.07086#31
An Actor-Critic Algorithm for Sequence Prediction
Adding the log-likelihood gradient with a coefficient λ_LL = 0.1 helps both of the methods, but actor-critic still retains a margin of improvement over REINFORCE-critic. 5.2 MACHINE TRANSLATION For our first translation experiment, we use data from the German-English machine translation track of the IWSLT 2014 evaluation campaign (Cettolo et al., 2014), as used in Ranzato et al. (2015), and closely follow the pre-processing described in that work. The training data comprises about 153,000 German-English sentence pairs. In addition, we considered the larger WMT14 English-French dataset (Cho et al., 2014) with more than 12 million examples. For further information about the data we refer the reader to Appendix B.
1607.07086#30
1607.07086#32
1607.07086
[ "1512.02433" ]
1607.07086#32
An Actor-Critic Algorithm for Sequence Prediction
The return is defined as a smoothed and rescaled version of the BLEU score. Specifically, we start all n-gram counts from 1 instead of 0, and multiply the resulting score by the length of the ground-truth translation. Smoothing is a common practice when sentence-level BLEU score is considered, and it has been used to apply REINFORCE in similar settings (Ranzato et al., 2015). IWSLT 2014 with a convolutional encoder In our first experiment we use a convolutional encoder in the actor to make our results more comparable with Ranzato et al. (2015). For the same reason, we use 256 hidden units in the networks. For the critic, we replaced the convolutional network with a bidirectional GRU network. For training this model we mostly used the same hyperparameter values as in the spelling correction experiments, with a few differences highlighted in Appendix A. For decoding we used greedy search and beam search with a beam size of 10. We found that penalizing candidate sentences that are too short was required to obtain the best results. Similarly to (Hannun et al., 2014), we subtracted ρT from the negative log-likelihood of each candidate sentence, where T is the candidate's length, and ρ
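The sketch below computes a smoothed, rescaled sentence-level BLEU in the spirit described above: every n-gram count starts from 1 and the final score is multiplied by the reference length. Using up to 4-grams and a standard brevity penalty are my assumptions, not details confirmed by the paper.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_rescaled_bleu(candidate, reference, max_n=4):
    """Sentence BLEU with add-one smoothed n-gram counts, multiplied by len(reference)."""
    if not candidate:
        return 0.0
    log_precision = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(len(candidate) - n + 1, 0)
        # Start counts from 1 instead of 0 (smoothing).
        log_precision += math.log((overlap + 1) / (total + 1)) / max_n
    brevity = min(1.0, math.exp(1.0 - len(reference) / max(len(candidate), 1)))
    return brevity * math.exp(log_precision) * len(reference)
```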
1607.07086#31
1607.07086#33
1607.07086
[ "1512.02433" ]
1607.07086#33
An Actor-Critic Algorithm for Sequence Prediction
is a hyperparameter tuned on the validation set. The results are summarized in Table 2. We report a signiï¬ cant improvement of 2.3 BLEU points over the log-likelihood baseline when greedy search is used for decoding. Surprisingly, the best performing method is REINFORCE with critic, with an additional 0.6 BLEU point advantage over the actor-critic. When beam-search is used, the ranking of the compared approaches is the same, but the margin between the proposed methods and log-likelihood training becomes smaller.
1607.07086#32
1607.07086#34
1607.07086
[ "1512.02433" ]
1607.07086#34
An Actor-Critic Algorithm for Sequence Prediction
The ï¬ nal performances of the actor-critic and the REINFORCE-critic with greedy search are also 0.7 and 1.3 BLEU points respectively better than what Ranzato et al. (2015) report for their MIXER approach. This comparison should be treated with caution, because our log-likelihood baseline is 1.6 BLEU 9 Published as a conference paper at ICLR 2017 Table 3: Our IWSLT 2014 machine translation results with a bidirectional recurrent encoder compared to the previous work. Please see Table 1 for an explanation of abbreviations. The asterisk identiï¬
1607.07086#33
1607.07086#35
1607.07086
[ "1512.02433" ]
1607.07086#35
An Actor-Critic Algorithm for Sequence Prediction
es results from (Wiseman & Rush, 2016). # Model greedy search beam search LL* 22.53 23.87 BSO* 23.83 25.48 LL 25.82 27.56 RF-C RF-C+LL 27.42 27.75 27.7 28.3 AC 27.27 27.75 AC+LL 27.49 28.53 Table 4: Our WMT 14 machine translation results compared to the previous work. Please see Table 1 for an explanation of abbreviations. The apostrophy and the asterisk identify results from (Bahdanau et al., 2015) and (Shen et al., 2015) respectively. Decoding method greedy search beam search LLâ n/a 28.45 LL* MRT * n/a 29.88 n/a 31.3 Model LL 29.33 30.71 AC+LL RF-C+LL 30.85 31.13 29.83 30.37
1607.07086#34
1607.07086#36
1607.07086
[ "1512.02433" ]
1607.07086#36
An Actor-Critic Algorithm for Sequence Prediction
points stronger than its equivalent from (Ranzato et al., 2015). The performance of REINFORCE with a simple baseline matches the score reported for MIXER in Ranzato et al. (2015). To better understand the IWSLT 2014 results we provide the learning curves for the considered approaches in Figure 2. We can clearly see that the training methods that use generated predictions have a strong regularization effect â that is, better progress on the validation set in exchange for slower or negative progress on the training set. The effect is stronger for both REINFORCE varieties, especially for the one without a critic. The actor-critic training does a much better job of ï¬ tting the training set than REINFORCE and is the only method except log-likelihood that shows a clear overï¬ tting, which is a healthy behaviour for such a small dataset. In addition, we performed an ablation study. We found that using a target network was crucial; while the joint actor-critic training was still progressing with γθ = 0.1, with γθ = 1.0 it did not work at all. Similarly important was the value penalty described in Equation (10). We found that good values of the λ coefï¬ cient were in the range [10â 3, 10â 6]. Other techniques, such as reward shaping and a delayed actor, brought moderate performance gains. We refer the reader to Appendix A for more details. IWSLT 2014 with a bidirectional GRU encoder In order to compare our results with those reported by Wiseman & Rush (2016) we repeated our IWSLT 2014 investigation with a different encoder, a bidirectional RNN with 256 GRU units. In this round of experiments we also tried to used combined training objectives in the same way as in our spelling correction experiments. The results are summarized in Table 3. One can see that the actor-critic training, especially its AC+LL version, yields signiï¬ cant improvements (1.7 with greedy search and 1.0 with beam search) upon the pure log-likelihood training, which are comparable to those brought by Beam Search Optimization (BSO), even though our log-likelihood baseline is much stronger. In this round of experiments actor-critic and REINFORCE-critic performed on par.
1607.07086#35
1607.07086#37
1607.07086
[ "1512.02433" ]
1607.07086#37
An Actor-Critic Algorithm for Sequence Prediction
WMT 14 Finally, we report our results on the very popular large WMT14 English-French dataset (Cho et al., 2014) in Table 4. Our model closely follows the architecture from (Bahdanau et al., 2015); however, we achieved a higher baseline performance by annealing the learning rate α and penalizing output sequences that were too short during beam search. The actor-critic training brings a significant 1.5 BLEU improvement with greedy search and a noticeable 0.4 BLEU improvement with beam search. In previous work Shen et al. (2015) report a higher improvement of 1.4 BLEU with beam search; however, they use 100 samples for each training example, whereas we use just one. We note that in this experiment, which is perhaps the most realistic setting, the actor-critic enjoys a significant advantage over the REINFORCE-critic.
1607.07086#36
1607.07086#38
1607.07086
[ "1512.02433" ]
1607.07086#38
An Actor-Critic Algorithm for Sequence Prediction
# 6 DISCUSSION We proposed an actor-critic approach to sequence prediction. Our method takes the task objective into account during training and uses the ground-truth output to aid the critic in its prediction of intermediate targets for the actor. We showed that our method leads to significant improvements over maximum likelihood training on both a synthetic task and a machine translation benchmark. Compared to REINFORCE training on machine translation, actor-critic fits the training data much faster, although in some of our experiments we were able to significantly reduce the gap in the training speed and achieve a better test error using our critic network as the baseline for REINFORCE. One interesting observation we made from the machine translation results is that the training methods that use generated predictions have a strong regularization effect. Our understanding is that conditioning on the sampled outputs effectively increases the diversity of training data. This phenomenon makes it harder to judge whether the actor-critic training meets our expectations, because a noisier gradient estimate yielded a better test set performance. We argue that the spelling correction results obtained on a virtually infinite dataset, in conjunction with better machine translation performance on the large WMT 14 dataset, provide convincing evidence that actor-critic training can be effective. In future work we will consider larger machine translation datasets. We ran into several optimization issues.
1607.07086#37
1607.07086#39
1607.07086
[ "1512.02433" ]
1607.07086#39
An Actor-Critic Algorithm for Sequence Prediction
The critic would sometimes assign very high values to actions with a very low probability according to the actor. We were able to resolve this by penalizing the critic's variance. Additionally, the actor would sometimes have trouble adapting to the demands of the critic. We noticed that the action distribution tends to saturate and become deterministic, causing the gradient to vanish. We found that combining an RL training objective with log-likelihood can help, but in general we think this issue deserves further investigation. For example, one can look for suitable training criteria that have a well-behaved gradient even when the policy has little or no stochasticity. In a concurrent work Wu et al. (2016) show that a version of REINFORCE with the baseline computed using multiple samples can improve the performance of a very strong machine translation system. This result, and our REINFORCE-critic experiments, suggest that often the variance of REINFORCE can be reduced enough to make its application practical. That said, we would like to emphasize that this paper attacks the problem of gradient estimation from a very different angle, as it aims for low-variance but potentially high-bias estimates. The idea of using the ground-truth output that we proposed is an absolutely necessary first step in this direction. Future work could focus on further reducing the bias of the actor-critic estimate, for example, by using a multi-sample training criterion for the critic. # ACKNOWLEDGMENTS We thank the developers of Theano (Theano Development Team, 2016) and Blocks (van Merriënboer et al., 2015) for their great work. We thank NSERC, Compute Canada, Calcul Québec, Canada Research Chairs, CIFAR, the CHISTERA project M2CR (PCIN-2015-226) and the Samsung Institute of Advanced Technology for their financial support. # REFERENCES
1607.07086#38
1607.07086#40
1607.07086
[ "1512.02433" ]
1607.07086#40
An Actor-Critic Algorithm for Sequence Prediction
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the ICLR 2015, 2015. Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. Systems, Man and Cybernetics, IEEE Transactions on, (5):834–846, 1983. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. arXiv preprint arXiv:1506.03099, 2015. Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico.
1607.07086#39
1607.07086#41
1607.07086
[ "1512.02433" ]
1607.07086#41
An Actor-Critic Algorithm for Sequence Prediction
Report on the 11th IWSLT evaluation campaign. In Proc. of IWSLT, 2014. William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015.
1607.07086#40
1607.07086#42
1607.07086
[ "1512.02433" ]
1607.07086#42
An Actor-Critic Algorithm for Sequence Prediction
Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625â 2634, 2015. Vaibhava Goel and William J Byrne. Minimum bayes-risk automatic speech recognition. Computer Speech & Language, 14(2):115â 135, 2000. Awni Y Hannun, Andrew L Maas, Daniel Jurafsky, and Andrew Y Ng. First-pass large vocabulary continuous speech recognition using bi-directional recurrent dnns. arXiv preprint arXiv:1408.2873, 2014. Tamir Hazan, Joseph Keshet, and David A McAllester. Direct loss minimization for structured prediction. In Advances in Neural Information Processing Systems, pp. 1594â
1607.07086#41
1607.07086#43
1607.07086
[ "1512.02433" ]
1607.07086#43
An Actor-Critic Algorithm for Sequence Prediction
1602, 2010. Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735â 1780, 1997. Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3128â 3137, 2015. Diederik P Kingma and Jimmy Ba. A method for stochastic optimization. In International Conference on Learning Representation, 2015. Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. Chin-Yew Lin and Eduard Hovy. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pp. 71â
1607.07086#42
1607.07086#44
1607.07086
[ "1512.02433" ]
1607.07086#44
An Actor-Critic Algorithm for Sequence Prediction
78. Association for Computational Linguistics, 2003. Francis Maes, Ludovic Denoyer, and Patrick Gallinari. Structured prediction with reinforcement learning. Machine learning, 77(2-3):271â 301, 2009. W Thomas Miller, Paul J Werbos, and Richard S Sutton. Neural networks for control. MIT press, 1995. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning.
1607.07086#43
1607.07086#45
1607.07086
[ "1512.02433" ]
1607.07086#45
An Actor-Critic Algorithm for Sequence Prediction
Nature, 518(7540):529â 533, 2015. 12 Published as a conference paper at ICLR 2017 Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278â 287, 1999. Franz Josef Och. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pp. 160â
1607.07086#44
1607.07086#46
1607.07086
[ "1512.02433" ]
1607.07086#46
An Actor-Critic Algorithm for Sequence Prediction
167. Association for Computational Linguistics, 2003. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311â 318. Association for Computational Linguistics, 2002. Marcâ Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba.
1607.07086#45
1607.07086#47
1607.07086
[ "1512.02433" ]
1607.07086#47
An Actor-Critic Algorithm for Sequence Prediction
Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015. St´ephane Ross, Geoffrey J Gordon, and J Andrew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. arXiv preprint arXiv:1011.0686, 2010. Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015. Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673â 2681, 1997.
1607.07086#46
1607.07086#48
1607.07086
[ "1512.02433" ]
1607.07086#48
An Actor-Critic Algorithm for Sequence Prediction
Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433, 2015. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 3104â 3112, 2014.
1607.07086#47
1607.07086#49
1607.07086
[ "1512.02433" ]
1607.07086#49
An Actor-Critic Algorithm for Sequence Prediction
Richard S Sutton. Learning to predict by the methods of temporal differences. Machine learning, 3 (1):9â 44, 1988. Richard S Sutton and Andrew G Barto. Introduction to reinforcement learning, volume 135. MIT Press Cambridge, 1998. Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057â 1063, 1999. Richard Stuart Sutton. Temporal credit assignment in reinforcement learning. 1984. Gerald Tesauro. Td-gammon, a self-teaching backgammon program, achieves master-level play. Neural computation, 6(2):215â 219, 1994.
1607.07086#48
1607.07086#50
1607.07086
[ "1512.02433" ]
1607.07086#50
An Actor-Critic Algorithm for Sequence Prediction
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/ 1605.02688. John N Tsitsiklis and Benjamin Van Roy. An analysis of temporal-difference learning with function approximation. Automatic Control, IEEE Transactions on, 42(5):674â 690, 1997. Bart van Merri¨enboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde- Farley, Jan Chorowski, and Yoshua Bengio.
1607.07086#49
1607.07086#51
1607.07086
[ "1512.02433" ]
1607.07086#51
An Actor-Critic Algorithm for Sequence Prediction
Blocks and fuel: Frameworks for deep learning. arXiv:1506.00619 [cs, stat], June 2015. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156â 3164, 2015. Andreas Vlachos. An investigation of imitation learning algorithms for structured prediction.
1607.07086#50
1607.07086#52
1607.07086
[ "1512.02433" ]
1607.07086#52
An Actor-Critic Algorithm for Sequence Prediction
In EWRL, pp. 143â 154. Citeseer, 2012. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â 256, 1992. 13 Published as a conference paper at ICLR 2017 Sam Wiseman and Alexander M Rush. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960, 2016. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al.
1607.07086#51
1607.07086#53
1607.07086
[ "1512.02433" ]
1607.07086#53
An Actor-Critic Algorithm for Sequence Prediction
Googleâ s neural machine translation sys- tem: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 2048â 2057, 2015. Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.
1607.07086#52
1607.07086#54
1607.07086
[ "1512.02433" ]
1607.07086#54
An Actor-Critic Algorithm for Sequence Prediction
14 Published as a conference paper at ICLR 2017 Table 5: Results of an ablation study. We tried varying the actor update speed γθ, the critic update speed Î³Ï , the value penalty coefï¬ cient λ, whether or not reward shaping is used, whether or not temporal difference (TD) learning is used for the critic. Reported are the best training and validation BLEU score obtained in the course of the ï¬ rst 10 training epochs. Some of the validation scores would still improve with longer training.
1607.07086#53
1607.07086#55
1607.07086
[ "1512.02433" ]
1607.07086#55
An Actor-Critic Algorithm for Sequence Prediction
Greedy search was used for decoding. 0.001 0.001 10â 3 baseline yes yes 33.73 23.16 with different Î³Ï 0.001 0.001 0.001 0.01 0.1 1 10â 3 10â 3 10â 3 yes yes yes yes yes yes 33.52 32.63 9.59 23.03 22.80 8.14 with different γθ 1 0.001 10â 3 yes yes 32.9 22.88 without reward shaping 0.001 0.001 10â 3 no yes 32.74 22.61 without temporal difference learning 0.001 0.001 10â 3 yes no 23.2 16.36 with different λ 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 3 â 10â 3 10â 4 10â 6 10â 8 0 yes yes yes yes yes yes yes yes yes yes 32.4 34.10 35.00 33.6 27.41 22.48 23.15 23.10 22.72 20.55 # A HYPERPARAMETERS For machine translation experiments the variance penalty coefï¬ cient λ was set to 10â 4, and the delay coefï¬ cients γθ and Î³Ï were both set to 10â 4. For REINFORCE with the critic we did not use a delayed actor, i.e. γθ was set to 1. For the spelling correction task we used the same γθ and Î³Ï but a different λ = 10â 3. When we used a combined training criterion, the weight of the log-likelihood gradient λLL was always 0.1. All initial weights were sampled from a centered uniform distribution with width 0.1. In some of our experiments we provided the actor states as additional inputs to the critic.
1607.07086#54
1607.07086#56
1607.07086
[ "1512.02433" ]
1607.07086#56
An Actor-Critic Algorithm for Sequence Prediction
Specifically, we did so in our spelling correction experiments and in our WMT 14 machine translation study. All the other results were obtained without this technique. For decoding with beam search we subtracted the length of a candidate times ρ from the log-likelihood cost. The exact value of ρ was selected on the validation set and was equal to 0.8 for models trained by log-likelihood and REINFORCE, and to 1.0 for models trained by actor-critic and REINFORCE-critic. For some of the hyperparameters we performed an ablation study. The results are reported in Table 5. # B DATA For the IWSLT 2014 data the sizes of the validation and test sets were 6,969 and 6,750, respectively. We limited the number of words in the English and German vocabularies to the 22,822 and 32,009 most frequent words, respectively, and replaced all other words with a special token. The maximum sentence length in our dataset was 50. For WMT14 we used vocabularies of 30,000 words for both English and French, and the maximum sentence length was also 50.
1607.07086#55
1607.07086#57
1607.07086
[ "1512.02433" ]
1607.07086#57
An Actor-Critic Algorithm for Sequence Prediction
# C GENERATED Q-VALUES In Figure 3 below we provide an example of value predictions that the critic outputs for candidate next words. One can see that the critic has indeed learnt to assign larger values to the appropriate next words. While the critic does not always produce sensible estimates and can often predict a high return for irrelevant rare words, this is greatly reduced using the variance penalty term from Equation (10). Figure 3: The best 3 words according to the critic at intermediate steps of generating a translation. The numbers in parentheses are the value predictions Q̂.
1607.07086#56
1607.07086#58
1607.07086
[ "1512.02433" ]
1607.07086#58
An Actor-Critic Algorithm for Sequence Prediction
The German original is "über eine davon will ich hier erzählen." The reference translation is "and there's one I want to talk about". Words with the largest Q̂ at each step:
and (6.623), there (6.200), but (5.967)
that (6.197), one (5.668), 's (5.467)
that (5.408), one (5.118), i (5.002)
that (4.796), i (4.629), , (4.139)
want (5.008), i (4.160), 't (3.361)
to (4.729), want (3.497), going (3.396)
talk (3.717), you (2.407), to (2.133)
about (1.209), that (0.989), talk (0.924)
about (0.706), . (0.660), right (0.653)
. (0.498), ? (0.291), " (0.285)
. (0.195), there (0.175), know (0.087)
. (0.168), " (-0.093), ? (-0.173)
1607.07086#57
1607.07086#59
1607.07086
[ "1512.02433" ]
1607.07086#59
An Actor-Critic Algorithm for Sequence Prediction
# D PROOF OF EQUATION (7)

$$
\begin{aligned}
\frac{dV}{d\theta}
&= \frac{d}{d\theta}\, \mathbb{E}_{\hat{Y}\sim p(\hat{Y})} R(\hat{Y})
 = \frac{d}{d\theta} \sum_{\hat{Y}} p(\hat{y}_1)\, p(\hat{y}_2\mid\hat{y}_1)\cdots p(\hat{y}_T\mid\hat{y}_1\ldots\hat{y}_{T-1})\, R(\hat{Y}) \\
&= \sum_{\hat{Y}} \sum_{t=1}^{T} p(\hat{Y}_{1\ldots t-1})\, \frac{dp(\hat{y}_t\mid\hat{Y}_{1\ldots t-1})}{d\theta}\, p(\hat{Y}_{t+1\ldots T}\mid\hat{Y}_{1\ldots t})\, R(\hat{Y}) \\
&= \sum_{t=1}^{T} \sum_{\hat{Y}_{1\ldots t-1}} p(\hat{Y}_{1\ldots t-1}) \sum_{\hat{y}_t} \frac{dp(\hat{y}_t\mid\hat{Y}_{1\ldots t-1})}{d\theta}
   \left[ r_t(\hat{y}_t;\hat{Y}_{1\ldots t-1}) + \sum_{\hat{Y}_{t+1\ldots T}} p(\hat{Y}_{t+1\ldots T}\mid\hat{Y}_{1\ldots t}) \sum_{\tau=t+1}^{T} r_\tau(\hat{y}_\tau;\hat{Y}_{1\ldots \tau-1}) \right] \\
&= \mathbb{E}_{\hat{Y}_{1\ldots t-1}\sim p(\hat{Y}_{1\ldots t-1})} \sum_{t=1}^{T} \sum_{a\in A} \frac{dp(a\mid\hat{Y}_{1\ldots t-1})}{d\theta}\, Q(a;\hat{Y}_{1\ldots t-1})
 = \mathbb{E}_{\hat{Y}\sim p(\hat{Y})} \sum_{t=1}^{T} \sum_{a\in A} \frac{dp(a\mid\hat{Y}_{1\ldots t-1})}{d\theta}\, Q(a;\hat{Y}_{1\ldots t-1})
\end{aligned}
$$
1607.07086#58
1607.07086
[ "1512.02433" ]
1607.06450#0
Layer Normalization
arXiv:1607.06450v1 [stat.ML] 21 Jul 2016 # Layer Normalization # Jimmy Lei Ba University of Toronto [email protected] Jamie Ryan Kiros University of Toronto [email protected] Geoffrey E. Hinton University of Toronto and Google Inc. [email protected] # Abstract Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique called batch normalization uses the distribution of the summed input to a neuron over a mini-batch of training cases to compute a mean and variance which are then used to normalize the summed input to that neuron on each training case.
1607.06450#1
1607.06450
[ "1605.02688" ]
1607.06450#1
Layer Normalization
This significantly reduces the training time in feedforward neural networks. However, the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent neural networks. In this paper, we transpose batch normalization into layer normalization by computing the mean and variance used for normalization from all of the summed inputs to the neurons in a layer on a single training case. Like batch normalization, we also give each neuron its own adaptive bias and gain which are applied after the normalization but before the non-linearity. Unlike batch normalization, layer normalization performs exactly the same computation at training and test times. It is also straightforward to apply to recurrent neural networks by computing the normalization statistics separately at each time step. Layer normalization is very effective at stabilizing the hidden state dynamics in recurrent networks. Empirically, we show that layer normalization can substantially reduce the training time compared with previously published techniques.
1607.06450#0
1607.06450#2
1607.06450
[ "1605.02688" ]
1607.06450#2
Layer Normalization
# 1 Introduction Deep neural networks trained with some version of Stochastic Gradient Descent have been shown to substantially outperform previous approaches on various supervised learning tasks in computer vision [Krizhevsky et al., 2012] and speech processing [Hinton et al., 2012]. But state-of-the-art deep neural networks often require many days of training. It is possible to speed up the learning by computing gradients for different subsets of the training cases on different machines or splitting the neural network itself over many machines [Dean et al., 2012], but this can require a lot of communication and complex software. It also tends to lead to rapidly diminishing returns as the degree of parallelization increases. An orthogonal approach is to modify the computations performed in the forward pass of the neural net to make learning easier. Recently, batch normalization [Ioffe and Szegedy, 2015] has been proposed to reduce training time by including additional normalization stages in deep neural networks. The normalization standardizes each summed input using its mean and its standard deviation across the training data. Feedforward neural networks trained using batch normalization converge faster even with simple SGD. In addition to the training time improvement, the stochasticity from the batch statistics serves as a regularizer during training. Despite its simplicity, batch normalization requires running averages of the summed input statistics. In feed-forward networks with fixed depth, it is straightforward to store the statistics separately for each hidden layer. However, the summed inputs to the recurrent neurons in a recurrent neural network (RNN) often vary with the length of the sequence, so applying batch normalization to RNNs appears to require different statistics for different time-steps. Furthermore, batch normalization cannot be applied to online learning tasks or to extremely large distributed models where the minibatches have to be small. This paper introduces layer normalization, a simple normalization method to improve the training speed for various neural network models. Unlike batch normalization, the proposed method directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer, so the normalization does not introduce any new dependencies between training cases. We show that layer normalization works well for RNNs and improves both the training time and the generalization performance of several existing RNN models.
1607.06450#1
1607.06450#3
1607.06450
[ "1605.02688" ]
1607.06450#3
Layer Normalization
# 2 Background A feed-forward neural network is a non-linear mapping from an input pattern x to an output vector y. Consider the lth hidden layer in a deep feed-forward neural network, and let $a^l$ be the vector representation of the summed inputs to the neurons in that layer. The summed inputs are computed through a linear projection with the weight matrix $W^l$ and the bottom-up inputs $h^l$ given as follows:

$$a_i^l = {w_i^l}^\top h^l, \qquad h_i^{l+1} = f\big(a_i^l + b_i^l\big) \qquad (1)$$

where $f(\cdot)$ is an element-wise non-linear function, $w_i^l$ is the incoming weights to the ith hidden unit and $b_i^l$ is the scalar bias parameter. The parameters in the neural network are learnt using gradient-based optimization algorithms with the gradients being computed by back-propagation. One of the challenges of deep learning is that the gradients with respect to the weights in one layer are highly dependent on the outputs of the neurons in the previous layer, especially if these outputs change in a highly correlated way. Batch normalization [Ioffe and Szegedy, 2015] was proposed to reduce such undesirable "covariate shift". The method normalizes the summed inputs to each hidden unit over the training cases. Specifically, for the ith summed input in the lth layer, the batch normalization method rescales the summed inputs according to their variances under the distribution of the data

$$\bar{a}_i^l = \frac{g_i^l}{\sigma_i^l}\big(a_i^l - \mu_i^l\big), \qquad \mu_i^l = \mathop{\mathbb{E}}_{x\sim P(x)}\big[a_i^l\big], \qquad \sigma_i^l = \sqrt{\mathop{\mathbb{E}}_{x\sim P(x)}\big[\big(a_i^l - \mu_i^l\big)^2\big]} \qquad (2)$$

where $\bar{a}_i^l$ is the normalized summed input to the ith hidden unit in the lth layer and $g_i$ is a gain parameter scaling the normalized activation before the non-linear activation function. Note the expectation is under the whole training data distribution. It is typically impractical to compute the expectations in Eq. (2) exactly, since it would require forward passes through the whole training dataset with the current set of weights. Instead, µ and σ are estimated using the empirical samples from the current mini-batch. (A small NumPy sketch of these statistics is given below.) This puts constraints on the size of a mini-batch and it is hard to apply to recurrent neural networks. # 3 Layer normalization We now consider the layer normalization method which is designed to overcome the drawbacks of batch normalization. Notice that changes in the output of one layer will tend to cause highly correlated changes in the summed inputs to the next layer, especially with ReLU units whose outputs can change by a lot.
1607.06450#2
1607.06450#4
1607.06450
[ "1605.02688" ]
1607.06450#4
Layer Normalization
This suggests the "covariate shift" problem can be reduced by fixing the mean and the variance of the summed inputs within each layer. We, thus, compute the layer normalization statistics over all the hidden units in the same layer as follows:

$$\mu^l = \frac{1}{H}\sum_{i=1}^{H} a_i^l, \qquad \sigma^l = \sqrt{\frac{1}{H}\sum_{i=1}^{H}\big(a_i^l - \mu^l\big)^2} \qquad (3)$$

where H denotes the number of hidden units in a layer. The difference between Eq. (2) and Eq. (3) is that under layer normalization, all the hidden units in a layer share the same normalization terms µ and σ, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of a mini-batch and it can be used in the pure online regime with batch size 1. (A minimal sketch of this computation is given below.)
1607.06450#3
1607.06450#5
1607.06450
[ "1605.02688" ]
1607.06450#5
Layer Normalization
# 3.1 Layer normalized recurrent neural networks The recent sequence to sequence models [Sutskever et al., 2014] utilize compact recurrent neural networks to solve sequential prediction problems in natural language processing. It is common among the NLP tasks to have different sentence lengths for different training cases. This is easy to deal with in an RNN because the same weights are used at every time-step. But when we apply batch normalization to an RNN in the obvious way, we need to compute and store separate statistics for each time step in a sequence. This is problematic if a test sequence is longer than any of the training sequences. Layer normalization does not have such a problem because its normalization terms depend only on the summed inputs to a layer at the current time-step. It also has only one set of gain and bias parameters shared over all time-steps. In a standard RNN, the summed inputs in the recurrent layer are computed from the current input $x^t$ and the previous vector of hidden states $h^{t-1}$ as $a^t = W_{hh} h^{t-1} + W_{xh} x^t$.
1607.06450#4
1607.06450#6
1607.06450
[ "1605.02688" ]
1607.06450#6
Layer Normalization
The layer normalized recurrent layer re-centers and re-scales its activations using the extra normalization terms similar to Eq. (3):

$$h^t = f\!\left[\frac{g}{\sigma^t} \odot \big(a^t - \mu^t\big) + b\right], \qquad \mu^t = \frac{1}{H}\sum_{i=1}^{H} a_i^t, \qquad \sigma^t = \sqrt{\frac{1}{H}\sum_{i=1}^{H}\big(a_i^t - \mu^t\big)^2} \qquad (4)$$

where $W_{hh}$ is the recurrent hidden-to-hidden weights and $W_{xh}$ are the bottom-up input-to-hidden weights. $\odot$ is the element-wise multiplication between two vectors. b and g are defined as the bias and gain parameters of the same dimension as $h^t$. In a standard RNN, there is a tendency for the average magnitude of the summed inputs to the recurrent units to either grow or shrink at every time-step, leading to exploding or vanishing gradients. In a layer normalized RNN, the normalization terms make it invariant to re-scaling all of the summed inputs to a layer, which results in much more stable hidden-to-hidden dynamics. (A one-step sketch of this recurrence is given below.)
1607.06450#5
1607.06450#7
1607.06450
[ "1605.02688" ]
1607.06450#7
Layer Normalization
# 4 Related work Batch normalization has been previously extended to recurrent neural networks [Laurent et al., 2015, Amodei et al., 2015, Cooijmans et al., 2016]. The previous work [Cooijmans et al., 2016] suggests the best performance of recurrent batch normalization is obtained by keeping independent normalization statistics for each time-step. The authors show that initializing the gain parameter in the recurrent batch normalization layer to 0.1 makes a significant difference in the final performance of the model. Our work is also related to weight normalization [Salimans and Kingma, 2016]. In weight normalization, instead of the variance, the L2 norm of the incoming weights is used to normalize the summed inputs to a neuron. Applying either weight normalization or batch normalization using expected statistics is equivalent to having a different parameterization of the original feed-forward neural network. Re-parameterization in the ReLU network was studied in the Path-normalized SGD [Neyshabur et al., 2015]. Our proposed layer normalization method, however, is not a re-parameterization of the original neural network. The layer normalized model, thus, has different invariance properties than the other methods, which we will study in the following section.
1607.06450#6
1607.06450#8
1607.06450
[ "1605.02688" ]
1607.06450#8
Layer Normalization
# 5 Analysis In this section, we investigate the invariance properties of different normalization schemes. # 5.1 Invariance under weights and data transformations The proposed layer normalization is related to batch normalization and weight normalization. Although their normalization scalars are computed differently, these methods can be summarized as normalizing the summed inputs $a_i$ to a neuron through the two scalars µ and σ. They also learn an adaptive bias b and gain g for each neuron after the normalization:

$$h_i = f\!\left(\frac{g_i}{\sigma_i}\big(a_i - \mu_i\big) + b_i\right) \qquad (5)$$

Note that for layer normalization and batch normalization, µ and σ are computed according to Eq. (2) and (3). In weight normalization, µ is 0 and $\sigma = \lVert w \rVert_2$.

Table 1: Invariance properties under the normalization methods.

              Weight matrix  Weight matrix  Weight vector  Dataset     Dataset       Single training case
              re-scaling     re-centering   re-scaling     re-scaling  re-centering  re-scaling
Batch norm    Invariant      No             Invariant      Invariant   Invariant     No
Weight norm   Invariant      No             Invariant      No          No            No
Layer norm    Invariant      Invariant      No             Invariant   No            Invariant

Table 1 highlights the following invariance results for the three normalization methods. Weight re-scaling and re-centering: First, observe that under batch normalization and weight normalization, any re-scaling of the incoming weights $w_i$ of a single neuron has no effect on the normalized summed inputs to a neuron. To be precise, under batch and weight normalization, if the weight vector is scaled by δ, the two scalars µ and σ will also be scaled by δ. The normalized summed inputs stay the same before and after scaling, so batch and weight normalization are invariant to the re-scaling of the weights. Layer normalization, on the other hand, is not invariant to the individual scaling of single weight vectors. Instead, layer normalization is invariant to scaling of the entire weight matrix and invariant to a shift to all of the incoming weights in the weight matrix. Let there be two sets of model parameters θ, θ′ whose weight matrices W and W′ differ by a scaling factor δ and all of the incoming weights in W′ are also shifted by a constant vector γ, that is $W' = \delta W + \mathbf{1}\gamma^\top$.
1607.06450#7
1607.06450#9
1607.06450
[ "1605.02688" ]
1607.06450#9
Layer Normalization
Under layer normalization, the two models effectively compute the same output:

$$h' = f\!\left(\frac{g}{\sigma'}\big(W'x - \mu'\big) + b\right) = f\!\left(\frac{g}{\sigma'}\big(\big(\delta W + \mathbf{1}\gamma^\top\big)x - \mu'\big) + b\right) = f\!\left(\frac{g}{\sigma}\big(Wx - \mu\big) + b\right) = h. \qquad (6)$$

Notice that if normalization is only applied to the input before the weights, the model will not be invariant to re-scaling and re-centering of the weights. (A small numerical check of this invariance is given below.) Data re-scaling and re-centering: We can show that all the normalization methods are invariant to re-scaling the dataset by verifying that the summed inputs of neurons stay constant under the changes. Furthermore, layer normalization is invariant to re-scaling of individual training cases, because the normalization scalars µ and σ in Eq. (3) only depend on the current input data.
1607.06450#8
1607.06450#10
1607.06450
[ "1605.02688" ]
1607.06450#10
Layer Normalization
Let x′ be a new data point obtained by re-scaling x by δ. Then we have

$$h_i' = f\!\left(\frac{g_i}{\sigma'}\big(w_i^\top x' - \mu'\big) + b_i\right) = f\!\left(\frac{g_i}{\delta\sigma}\big(\delta\, w_i^\top x - \delta\mu\big) + b_i\right) = h_i. \qquad (7)$$

It is easy to see that re-scaling individual data points does not change the model's prediction under layer normalization. Similar to the re-centering of the weight matrix in layer normalization, we can also show that batch normalization is invariant to re-centering of the dataset. # 5.2 Geometry of parameter space during learning We have investigated the invariance of the model's prediction under re-centering and re-scaling of the parameters. Learning, however, can behave very differently under different parameterizations, even though the models express the same underlying function. In this section, we analyze learning behavior through the geometry and the manifold of the parameter space. We show that the normalization scalar σ can implicitly reduce the learning rate and make learning more stable. # 5.2.1 Riemannian metric The learnable parameters in a statistical model form a smooth manifold that consists of all possible input-output relations of the model. For models whose output is a probability distribution, a natural way to measure the separation of two points on this manifold is the Kullback-Leibler divergence between their model output distributions. Under the KL divergence metric, the parameter space is a Riemannian manifold. The curvature of a Riemannian manifold is entirely captured by its Riemannian metric, whose quadratic form is denoted as ds². That is the infinitesimal distance in the tangent space at a point in the parameter space. Intuitively, it measures the changes in the model output from the parameter space along a tangent direction. The Riemannian metric under KL was previously studied [Amari, 1998] and was shown to be well approximated under second order Taylor expansion using the Fisher
1607.06450#9
1607.06450#11
1607.06450
[ "1605.02688" ]
1607.06450#11
Layer Normalization
That is the inï¬ nitesimal distance in the tangent space at a point in the parameter space. Intuitively, it measures the changes in the model output from the parameter space along a tangent direction. The Riemannian metric under KL was previously studied [Amari, 1998] and was shown to be well approximated under second order Taylor expansion using the Fisher 4 information matrix: ds? = Dru [Ply |x: Ply |x: 6+ 8)] © 557 F(O)6, (8) F (θ) = E xâ ¼P (x),yâ ¼P (y | x) â log P (y | x; θ) â θ â log P (y | x; θ) â θ , (9) where, δ is a small change to the parameters. The Riemannian metric above presents a geometric view of parameter spaces. The following analysis of the Riemannian metric provides some insight into how normalization methods could help in training neural networks. # 5.2.2 The geometry of normalized generalized linear models We focus our geometric analysis on the generalized linear model. The results from the following analysis can be easily applied to understand deep neural networks with block-diagonal approxima- tion to the Fisher information matrix, where each block corresponds to the parameters for a single neuron. A generalized linear model (GLM) can be regarded as parameterizing an output distribution from the exponential family using a weight vector w and bias scalar b. To be consistent with the previous sections, the log likelihood of the GLM can be written using the summed inputs a as the following: log P (y | x; w, b) = (a + b)y â η(a + b) Ï + c(y, Ï ), (10) Ely |x] = f(a +6) = f(w'x +5), Varly|x] = of'(a +d), (1) where, f(-) is the transfer function that is the analog of the non-linearity in neural networks, fâ (-) is the derivative of the transfer function, 7(-) is a real valued function and c(-) is the log parti- tion function. ¢ is a constant that scales the output variance.
1607.06450#10
1607.06450#12
1607.06450
[ "1605.02688" ]
1607.06450#12
Layer Normalization
Assume a H-dimensional output vector y = [y1,Y2,°"* , YH] is modeled using H independent GLMs and log P(y|x; W,b) = yt log P(y: |x; wi, bi). Let W be the weight matrix whose rows are the weight vectors of the individual GLMs, b denote the bias vector of length H and vec(-) denote the Kronecker vector op- erator. The Fisher information matrix for the multi-dimensional GLM with respect to its parameters 6 = [w] ,b1,--- ,wiy,bH]' = vec([W, b]") is simply the expected Kronecker product of the data features and the output covariance matrix: + Covly | x] @ ie 7] ; (12) F(0) = We obtain normalized GLMs by applying the normalization methods to the summed inputs a in the original model through 4: and o. Without loss of generality, we denote Fâ as the Fisher infor- mation matrix under the normalized multi-dimensional GLM with the additional gain parameters 6 = vec([W, b, g]"): Fi: Fin Covk Ix LH yiyT we x eH) _ - ov[yi, yy |X â u FO=]): 5 2 |, By EO a xi 3 1 ante x~P(x Fur --> Fur x} wecnl ca ects te) (13) On, a â [yy OO; â i . 14 NES * Ow; Oo; Ow; (4)
1607.06450#11
1607.06450#13
1607.06450
[ "1605.02688" ]
1607.06450#13
Layer Normalization
Implicit learning rate reduction through the growth of the weight vector: Notice that, compared to the standard GLM, the block $\bar{F}_{ij}$ along the weight vector $w_i$ direction is scaled by the gain parameters and the normalization scalar $\sigma_i$. If the norm of the weight vector $w_i$ grows twice as large, even though the model's output remains the same, the Fisher information matrix will be different. The curvature along the $w_i$ direction will change by a factor of 1/2 because the $\sigma_i$ will also be twice as large. As a result, for the same parameter update in the normalized model, the norm of the weight vector effectively controls the learning rate for the weight vector. During learning, it is harder to change the orientation of the weight vector with large norm. The normalization methods, therefore,
1607.06450#12
1607.06450#14
1607.06450
[ "1605.02688" ]
1607.06450#14
Layer Normalization
Figure 1: Recall@K curves using order-embeddings with and without layer normalization (panels: (a) Recall@1, (b) Recall@5, (c) Recall@10).

Table 2: Average results across 5 test splits for caption and image retrieval on MSCOCO. R@K is Recall@K (high is good). Mean r is the mean rank (low is good). Sym corresponds to the symmetric baseline while OE indicates order-embeddings.

                              Caption Retrieval        Image Retrieval
Model                         R@1   R@10  Mean r       R@1   R@10  Mean r
Sym [Vendrov et al., 2016]    45.4  88.7  5.8          36.3  85.8  9.0
OE [Vendrov et al., 2016]     46.7  88.9  5.7          37.9  85.9  8.1
OE (ours)                     46.6  89.1  5.2          37.8  85.7  7.9
OE + LN                       48.5  89.8  5.1          38.9  86.3  7.6

have an implicit "early stopping" effect on the weight vectors and help to stabilize learning towards convergence. Learning the magnitude of incoming weights: In normalized models, the magnitude of the incoming weights is explicitly parameterized by the gain parameters. We compare how the model output changes between updating the gain parameters in the normalized GLM and updating the magnitude of the equivalent weights under the original parameterization during learning. The direction along the gain parameters in $\bar{F}$ captures the geometry for the magnitude of the incoming weights. We show that the Riemannian metric along the magnitude of the incoming weights for the standard GLM is scaled by the norm of its input, whereas learning the gain parameters for the batch normalized and layer normalized models depends only on the magnitude of the prediction error. Learning the magnitude of incoming weights in the normalized model is, therefore, more robust to the scaling of the input and its parameters than in the standard model. See the Appendix for detailed derivations. # 6 Experimental results
1607.06450#13
1607.06450#15
1607.06450
[ "1605.02688" ]
1607.06450#15
Layer Normalization
We perform experiments with layer normalization on 6 tasks, with a focus on recurrent neural networks: image-sentence ranking, question-answering, contextual language modelling, generative modelling, handwriting sequence generation and MNIST classification. Unless otherwise noted, the default initialization of layer normalization is to set the adaptive gains to 1 and the biases to 0 in the experiments. # 6.1 Order embeddings of images and language In this experiment, we apply layer normalization to the recently proposed order-embeddings model of Vendrov et al. [2016] for learning a joint embedding space of images and sentences. We follow the same experimental protocol as Vendrov et al. [2016] and modify their publicly available code to incorporate layer normalization,1 which utilizes Theano [Team et al., 2016]. Images and sentences from the Microsoft COCO dataset [Lin et al., 2014] are embedded into a common vector space, where a GRU [Cho et al., 2014] is used to encode sentences and the outputs of a pre-trained VGG ConvNet [Simonyan and Zisserman, 2015] (10-crop) are used to encode images. The order-embedding model represents images and sentences as a 2-level partial ordering and replaces the cosine similarity scoring function used in Kiros et al. [2014] with an asymmetric one. (A speculative sketch of layer normalization inside a GRU is given below.) 1 https://github.com/ivendrov/order-embedding
1607.06450#14
1607.06450#16
1607.06450
[ "1605.02688" ]
1607.06450#16
Layer Normalization
Figure 2: Validation curves for the attentive reader model (validation error rate vs. training steps in thousands, for LSTM, BN-LSTM, BN-everywhere and LN-LSTM). BN results are taken from [Cooijmans et al., 2016]. We trained two models: the baseline order-embedding model as well as the same model with layer normalization applied to the GRU. After every 300 iterations, we compute Recall@K (R@K) values on a held-out validation set and save the model whenever R@K improves. The best performing models are then evaluated on 5 separate test sets, each containing 1000 images and 5000 captions, for which the mean results are reported. Both models use Adam [Kingma and Ba, 2014] with the same initial hyperparameters and both models are trained using the same architectural choices as used in Vendrov et al. [2016]. We refer the reader to the appendix for a description of how layer normalization is applied to GRU. Figure 1 illustrates the validation curves of the models, with and without layer normalization. We plot R@1, R@5 and R@10 for the image retrieval task. We observe that layer normalization offers a per-iteration speedup across all metrics and converges to its best validation model in 60% of the time it takes the baseline model to do so. In Table 2, the test set results are reported, from which we observe that layer normalization also results in improved generalization over the original model. The results we report are state-of-the-art for RNN embedding models, with only the structure-preserving model of Wang et al. [2016] reporting better results on this task. However, they evaluate under different conditions (1 test set instead of the mean over 5) and are thus not directly comparable. # 6.2 Teaching machines to read and comprehend In order to compare layer normalization to the recently proposed recurrent batch normalization [Cooijmans et al., 2016], we train a unidirectional attentive reader model on the CNN corpus, both introduced by Hermann et al. [2015]. This is a question-answering task where a query description about a passage must be answered by filling
1607.06450#15
1607.06450#17
1607.06450
[ "1605.02688" ]
1607.06450#17
Layer Normalization
in a blank. The data is anonymized such that entities are given randomized tokens to prevent degenerate solutions, which are consistently permuted during training and evaluation. We follow the same experimental protocol as Cooijmans et al. [2016] and modify their public code to incorporate layer normalization,2 which uses Theano [Team et al., 2016]. We obtained the pre-processed dataset used by Cooijmans et al. [2016], which differs from the original experiments of Hermann et al. [2015] in that each passage is limited to 4 sentences. In Cooijmans et al. [2016], two variants of recurrent batch normalization are used: one where BN is only applied to the LSTM while the other applies BN everywhere throughout the model. In our experiment, we only apply layer normalization within the LSTM. The results of this experiment are shown in Figure 2. We observe that layer normalization not only trains faster but converges to a better validation result over both the baseline and BN variants. In Cooijmans et al. [2016], it is argued that the scale parameter in BN must be carefully chosen and is set to 0.1 in their experiments. We experimented with layer normalization for both 1.0 and 0.1 scale initialization and found that the former model performed significantly better. This demonstrates that layer normalization is not sensitive to the initial scale in the same way that recurrent BN is.3 # 6.3 Skip-thought vectors Skip-thoughts [Kiros et al., 2015] is a generalization of the skip-gram model [Mikolov et al., 2013] for learning unsupervised distributed sentence representations. Given contiguous text, a sentence is 2 https://github.com/cooijmanstim/Attentive_reader/tree/bn 3 We only produce results on the validation set, as in the case of Cooijmans et al. [2016]
1607.06450#16
1607.06450#18
1607.06450
[ "1605.02688" ]
1607.06450#18
Layer Normalization
Figure 3: Performance of skip-thought vectors with and without layer normalization on downstream tasks as a function of training iterations (panels: (a) SICK(r), (b) SICK(MSE), (c) MR, (d) CR, (e) SUBJ, (f) MPQA). The original lines are the reported results in [Kiros et al., 2015]. Plots with error use 10-fold cross validation. Best seen in color.

Table 3: Skip-thoughts results. The first two evaluation columns indicate Pearson and Spearman correlation, the third is mean squared error and the remaining indicate classification accuracy. Higher is better for all evaluations except MSE. Our models were trained for 1M iterations with the exception of (†), which was trained for 1 month (approximately 1.7M iterations).

Method                         SICK(r)  SICK(rho)  SICK(MSE)  MR    CR    SUBJ  MPQA
Original [Kiros et al., 2015]  0.848    0.778      0.287      75.5  79.3  92.1  86.9
Ours                           0.842    0.767      0.298      77.3  81.8  92.6  87.9
Ours + LN                      0.854    0.785      0.277      79.5  82.6  93.4  89.0
Ours + LN (†)                  0.858    0.788      0.270      79.4  83.1  93.7  89.3

encoded with an encoder RNN and decoder RNNs are used to predict the surrounding sentences. Kiros et al. [2015] showed that this model could produce generic sentence representations that perform well on several tasks without being fine-
1607.06450#17
1607.06450#19
1607.06450
[ "1605.02688" ]
1607.06450#19
Layer Normalization
tuned. However, training this model is time-consuming, requiring several days of training in order to produce meaningful results. In this experiment we determine to what extent layer normalization can speed up training. Using the publicly available code of Kiros et al. [2015],4 we train two models on the BookCorpus dataset [Zhu et al., 2015]: one with and one without layer normalization. These experiments are performed with Theano [Team et al., 2016]. We adhere to the experimental setup used in Kiros et al. [2015], training a 2400-dimensional sentence encoder with the same hyperparameters. Given the size of the states used, it is conceivable layer normalization would produce slower per-iteration updates than without. However, we found that provided CNMeM5 is used, there was no significant difference between the two models. We checkpoint both models after every 50,000 iterations and evaluate their performance on five tasks: semantic-relatedness (SICK) [Marelli et al., 2014], movie review sentiment (MR) [Pang and Lee, 2005], customer product reviews (CR) [Hu and Liu, 2004], subjectivity/objectivity classification (SUBJ) [Pang and Lee, 2004] and opinion polarity (MPQA) [Wiebe et al., 2005]. We plot the performance of both models for each checkpoint on all tasks to determine whether the performance can be improved with LN. The experimental results are illustrated in Figure 3. We observe that applying layer normalization results both in speedup over the baseline as well as better final results after 1M iterations are performed, as shown in Table 3. We also let the model with layer normalization train for a total of a month, resulting in further performance gains across all but one task. We note that the performance 4 https://github.com/ryankiros/skip-thoughts 5 https://github.com/NVIDIA/cnmem
Figure 5:
1607.06450#18
1607.06450#20
1607.06450
[ "1605.02688" ]
1607.06450#20
Layer Normalization
Handwriting sequence generation model negative log likelihood with and without layer normalization. The models are trained with a mini-batch size of 8 and sequence length of 500; differences between the original reported results and ours are likely due to the fact that the publicly available code does not condition at each timestep of the decoder, where the original model does. # 6.4 Modeling binarized MNIST using DRAW (Figure 4 plot: test variational bound vs. training epoch, comparing the baseline, WN and layer normalization models.)
1607.06450#19
1607.06450#21
1607.06450
[ "1605.02688" ]
1607.06450#21
Layer Normalization
We also experimented with generative modeling on the MNIST dataset. Deep Recurrent Attention Writer (DRAW) [Gregor et al., 2015] has previously achieved state-of-the-art performance on modeling the distribution of MNIST digits. The model uses a differential attention mechanism and a recurrent neural network to sequentially generate pieces of an image. We evaluate the effect of layer normalization on a DRAW model using 64 glimpses and 256 LSTM hidden units. The model is trained with the default setting of the Adam [Kingma and Ba, 2014] optimizer and a minibatch size of 128. Previous publications on binarized MNIST have used various training protocols to generate their datasets. In this experiment, we used the fixed binarization from Larochelle and Murray [2011]. The dataset has been split into 50,000 training, 10,000 validation and 10,000 test images. Figure 4: DRAW model test negative log likelihood with and without layer normalization. Figure 4 shows the test variational bound for the fi
1607.06450#20
1607.06450#22
1607.06450
[ "1605.02688" ]