id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
1606.06160#20 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | nition. It is an interesting question whether these differences would cause the first and last layers to exhibit different behavior when converted to low-bitwidth counterparts. In the related work of (Han et al., 2015b), which converts network weights to sparse tensors, introducing the same ratio of zeros in the first convolutional layer is found to cause more prediction accuracy degradation than in the other convolutional layers. Based on this intuition, as well as the observation that the inputs to the first layer often contain only a few channels and constitute a small proportion of the total computation complexity, we perform most of our experiments without quantizing the weights of the first convolutional layer, unless noted otherwise. Nevertheless, the outputs of the first convolutional layer are quantized to low bitwidth, as they are used by the subsequent convolutional layer. Similarly, when the number of output classes is small, to stay away from potential degradation of prediction accuracy, we leave the last fully-connected layer intact unless noted otherwise. Nevertheless, the gradients back-propagated from the final FC layer are properly quantized. | 1606.06160#19 | 1606.06160#21 | 1606.06160 | [
"1502.03167"
] |
1606.06160#21 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | We will give the empirical evidence in Section 3.3. 2.8 REDUCING RUN-TIME MEMORY FOOTPRINT BY FUSING NONLINEAR FUNCTION AND ROUNDING One of the motivations for creating low-bitwidth neural networks is to save run-time memory footprint at inference. A naive implementation of Algorithm 1 would store activations $h(a_k)$ in full-precision numbers, consuming much memory at run time. In particular, if $h$ involves floating-point arithmetic, there will be a non-negligible amount of non-bitwise operations related to the computation of $h(a_k)$. There are simple solutions to this problem. Notice that it is possible to fuse Step 3, Step 4 and Step 6 to avoid storing intermediate results in full precision. Apart from this, when $h$ is monotonic, $f_\alpha \circ h$ is also monotonic, and the few possible values of $a^b_k$ correspond to several non-overlapping value ranges of $a_k$; hence we can implement the computation of $a^b_k = f_\alpha(h(a_k))$ by several comparisons between fixed-point numbers and avoid generating intermediate results. Similarly, it would also be desirable to fuse Steps 11-12, and Step 13 of the previous iteration, to avoid generating and storing $g_{a_k}$. The situation is more complex when there are intermediate pooling layers. Nevertheless, if the pooling layer is max-pooling, we can do the fusion as the $\mathrm{quantize}_k$ function commutes with the max function: $$\mathrm{quantize}_k(\max(a, b)) = \max(\mathrm{quantize}_k(a), \mathrm{quantize}_k(b)), \qquad (15)$$ hence again $g_{a^b_k}$ can be generated from $g_{a_k}$ by comparisons between fixed-point numbers. | 1606.06160#20 | 1606.06160#22 | 1606.06160 | [
"1502.03167"
] |
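To make the comparison-based fusion above concrete, here is a minimal sketch (illustrative names, not the paper's code) of computing $\mathrm{quantize}_k(f(a))$ for a monotonic $f$ purely by run-time comparisons against $2^k - 1$ precomputed thresholds, together with a check of the max-commutation property (15):

```python
import numpy as np

def quantize_k(x, k):
    """Deterministic k-bit quantization of x in [0, 1]."""
    n = 2**k - 1
    return np.round(x * n) / n

def fused_quantized_activation(a, f, k, lo=-8.0, hi=8.0, grid=2**16):
    """Compute quantize_k(f(a)) without materialising f(a) in full precision.

    f must be monotonically non-decreasing and map [lo, hi] into [0, 1];
    only comparisons against precomputed thresholds happen at run time.
    """
    n = 2**k - 1
    xs = np.linspace(lo, hi, grid)
    levels = np.round(np.clip(f(xs), 0.0, 1.0) * n)     # integer levels 0..n
    idx = np.searchsorted(levels, np.arange(1, n + 1))  # first input reaching each level
    thresholds = xs[np.minimum(idx, grid - 1)]
    return np.searchsorted(thresholds, a, side='right') / n

# Example: a 2-bit quantized sigmoid activation.
f = lambda x: 1.0 / (1.0 + np.exp(-x))
a = np.array([-3.0, -1.0, 0.5, 3.0])
print(fused_quantized_activation(a, f, k=2))  # -> [0. 0.33333333 0.66666667 1.]

# quantize_k commutes with max, as in Eqn. (15):
x, y = 0.31, 0.74
assert quantize_k(max(x, y), 2) == max(quantize_k(x, 2), quantize_k(y, 2))
```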
1606.06160#22 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Table 1: Comparison of prediction accuracy for SVHN with different choices of bitwidth in a DoReFa-Net. W, A, G are the bitwidths of weights, activations and gradients respectively. When a bitwidth is 32, we simply remove the quantization functions.

W | A | G | Training Complexity | Inference Complexity | Storage Relative Size | Model A Accuracy | Model B Accuracy | Model C Accuracy | Model D Accuracy
---|---|---|---|---|---|---|---|---|---
1 | 1 | 2 | 3 | 1 | 1 | 0.934 | 0.924 | 0.910 | 0.803
1 | 1 | 4 | 5 | 1 | 1 | 0.968 | 0.961 | 0.916 | 0.846
1 | 1 | 8 | 9 | 1 | 1 | 0.970 | 0.962 | 0.902 | 0.828
1 | 1 | 32 | - | - | 1 | 0.971 | 0.963 | 0.921 | 0.841
1 | 2 | 2 | 4 | 2 | 1 | 0.909 | 0.930 | 0.900 | 0.808
1 | 2 | 3 | 5 | 2 | 1 | 0.968 | 0.964 | 0.934 | 0.878
1 | 2 | 4 | 6 | 2 | 1 | 0.975 | 0.969 | 0.939 | 0.878
2 | 1 | 2 | 6 | 2 | 2 | 0.927 | 0.928 | 0.909 | 0.846
2 | 1 | 4 | 10 | 2 | 2 | 0.969 | 0.957 | 0.904 | 0.827
1 | 2 | 8 | 10 | 2 | 1 | 0.975 | 0.971 | 0.946 | 0.866
1 | 2 | 32 | - | - | 1 | 0.976 | 0.970 | 0.950 | 0.865
1 | 3 | 3 | 6 | 3 | 1 | 0.968 | 0.964 | 0.946 | 0.887
1 | 3 | 4 | 7 | 3 | 1 | 0.974 | 0.974 | 0.959 | 0.897
1 | 3 | 6 | 9 | 3 | 1 | 0.977 | 0.974 | 0.949 | 0.916
1 | 4 | 2 | 6 | 4 | 1 | 0.815 | 0.898 | 0.911 | 0.868
1 | 4 | 4 | 8 | 4 | 1 | 0.975 | 0.974 | 0.962 | 0.915
1 | 4 | 8 | 12 | 4 | 1 | 0.977 | 0.975 | 0.955 | 0.895
2 | 2 | 2 | 8 | 4 | 2 | 0.900 | 0.919 | 0.856 | 0.842
8 | 8 | 8 | - | - | 8 | 0.970 | - | - | 0.955

| 1606.06160#21 | 1606.06160#23 | 1606.06160 | [
"1502.03167"
] |
1606.06160#23 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | 3 EXPERIMENT RESULTS 3.1 CONFIGURATION SPACE EXPLORATION We explore the configuration space of combinations of bitwidths of weights, activations and gradients by experiments on the SVHN dataset. The SVHN dataset (Netzer et al., 2011) is a real-world digit recognition dataset consisting of photos of house numbers in Google Street View images. We consider the "cropped" format of the dataset: 32-by-32 colored images centered around a single character. There are 73257 digits for training, 26032 digits for testing, and 531131 less difficult samples which can be used as extra training data. The images are resized to 40x40 before being fed into the network. For convolutions in a DoReFa-Net, if we have W-bit weights, A-bit activations and G-bit gradients, the relative forward and backward computation complexity and relative storage size can be computed from Eqn. 3, and we list them in Table 1. As it would not be computationally efficient to use bit convolution kernels for convolutions between 32-bit numbers, and noting that previous works like BNN and XNOR-Net have already compared bit convolution kernels with 32-bit convolution kernels, we omit the comparison of computation complexity for the 32-bit control experiments. | 1606.06160#22 | 1606.06160#24 | 1606.06160 | [
"1502.03167"
] |
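Eqn. 3 itself is not part of this excerpt; the following sketch assumes, consistently with the complexity columns of Table 1, that forward cost scales with W·A, backward cost with W·G, and storage with W:

```python
def dorefa_relative_cost(W, A, G):
    """Relative bit-convolution cost for W-bit weights, A-bit activations and
    G-bit gradients (an assumption matching the Table 1 columns:
    forward ~ W*A, backward ~ W*G, storage ~ W)."""
    return {
        "training_complexity": W * A + W * G,   # forward + backward pass
        "inference_complexity": W * A,          # forward pass only
        "storage_relative_size": W,
    }

# Reproduces Table 1 rows, e.g. (W, A, G) = (1, 2, 4):
assert dorefa_relative_cost(1, 2, 4) == {
    "training_complexity": 6,
    "inference_complexity": 2,
    "storage_relative_size": 1,
}
```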
1606.06160#24 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | We use the prediction accuracy of several CNN models on the SVHN dataset to evaluate the efficacy of configurations. Model A is a CNN that costs about 80 FLOPs for one 40x40 image, and it consists of seven convolutional layers and one fully-connected layer. Models B, C, and D are derived from Model A by reducing the number of channels for all seven convolutional layers by 50%, 75%, and 87.5%, respectively. The listed prediction accuracy is the maximum accuracy on the test set over 200 epochs. We use the ADAM (Kingma & Ba, 2014) learning rule with 0.001 as the learning rate. In general, having low-bitwidth weights, activations and gradients will cause degradation in prediction accuracy. But it should be noted that low-bitwidth networks have much reduced resource requirements. As balancing between multiple factors like training time, inference time, model size and accuracy is more a problem of practical trade-offs, there will be no definite conclusion as to which combination of (W, A, G) one should choose. Nevertheless, we find in these experiments that weights, activations and gradients are progressively more sensitive to bitwidth, and using gradients with G ≤ 4 would significantly degrade prediction accuracy. Based on these observations, we take (W, A) = (1, 2) and G ≥ 4 as rational combinations and use them for most of our experiments on the ImageNet dataset. Table 1 also shows that the relative number of channels significantly affects the prediction quality degradation resulting from bitwidth reduction. For example, there is no significant loss of prediction accuracy when going from the 32-bit model to DoReFa-Net for Model A, which is not the case for Model C. We conjecture that "more capable" models like those with more channels will be less sensitive to bitwidth differences. On the other hand, Table 1 also suggests a method to compensate for the prediction quality degradation: increasing the bitwidth of activations for models with fewer channels, at the cost of increased computation complexity for inference and training. However, the optimal bitwidth of gradients seems less related to model channel numbers, and prediction quality saturates with 8-bit gradients most of the time. 3.2 IMAGENET | 1606.06160#23 | 1606.06160#25 | 1606.06160 | [
"1502.03167"
] |
1606.06160#25 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | We further evaluate DoReFa-Net on the ILSVRC12 (Deng et al., 2009) image classification dataset, which contains about 1.2 million high-resolution natural images for training, spanning 1000 categories of objects. The validation set contains 50k images. We report our single-crop evaluation result using top-1 accuracy. The images are resized to 224x224 before being fed into the network. The results are listed in Table 2. The baseline AlexNet model that scores 55.9% single-crop top-1 accuracy is a best-effort replication of the model in (Krizhevsky et al., 2012), with the second, fourth and | 1606.06160#24 | 1606.06160#26 | 1606.06160 | [
"1502.03167"
] |
1606.06160#26 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | fifth convolutions split into two parallel blocks. We replace the Local Contrast Renormalization layer with a Batch Normalization layer (Ioffe & Szegedy, 2015). We use the ADAM learning rule with a learning rate of $10^{-4}$ at the start, and later decrease the learning rate to $10^{-5}$ and subsequently $10^{-6}$ when the accuracy curves become flat. From the table, it can be seen that increasing the bitwidth of activations from 1-bit to 2-bit and even to 4-bit, while still keeping 1-bit weights, leads to significant accuracy increases, approaching the accuracy of the model where both weights and activations are 32-bit. Rounding gradients to 6-bit produces similar accuracies as 32-bit gradients, in the experiments | 1606.06160#25 | 1606.06160#27 | 1606.06160 | [
"1502.03167"
] |
1606.06160#27 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | "1-1-6" vs. "1-1-32", "1-2-6" vs. "1-2-32", and "1-3-6" vs. "1-3-32". The rows marked "initialized" mean that model training has been initialized with a 32-bit model. It can be seen that there is a considerable gap between the best accuracy of a trained-from-scratch model and an initialized model. Closing this gap is left to future work. Nevertheless, it shows the potential for improving the accuracy of DoReFa-Net. 3.2.1 TRAINING CURVES Figure 1 shows the evolution of accuracy vs. epoch curves of DoReFa-Net. It can be seen that quantizing gradients to 6-bit does not cause the training curve to be significantly different from not quantizing gradients. However, using 4-bit gradients as in "1-2-4" leads to significant accuracy degradation. | 1606.06160#26 | 1606.06160#28 | 1606.06160 | [
"1502.03167"
] |
1606.06160#28 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Table 2: Comparison of prediction accuracy for ImageNet with different choices of bitwidth in a DoReFa-Net. W, A, G are the bitwidths of weights, activations and gradients respectively. Single-crop top-1 accuracy is given. Note the BNN result is reported by (Rastegari et al., 2016), not by the original authors. We do not quantize the first and last layers of AlexNet to low bitwidth, as BNN and XNOR-Net do.

W | A | G | Training Complexity | Inference Complexity | Storage Relative Size | AlexNet Accuracy
---|---|---|---|---|---|---
1 | 1 | 6 | 7 | 1 | 1 | 0.395
1 | 1 | 8 | 9 | 1 | 1 | 0.395
1 | 1 | 32 | - | 1 | 1 | 0.279 (BNN)
1 | 1 | 32 | - | 1 | 1 | 0.442 (XNOR-Net)
1 | 1 | 32 | - | 1 | 1 | 0.401
1 | 1 | 32 | - | 1 | 1 | 0.436 (initialized)
1 | 2 | 6 | 8 | 2 | 1 | 0.461
1 | 2 | 8 | 10 | 2 | 1 | 0.463
1 | 2 | 32 | - | 2 | 1 | 0.477
1 | 2 | 32 | - | 2 | 1 | 0.498 (initialized)
1 | 3 | 6 | 9 | 3 | 1 | 0.471
1 | 3 | 32 | - | 3 | 1 | 0.484
1 | 4 | 6 | 10 | 4 | 1 | 0.482
1 | 4 | 32 | - | 4 | 1 | 0.503
1 | 4 | 32 | - | 4 | 1 | 0.530 (initialized)
8 | 8 | 8 | - | - | 8 | 0.530
32 | 32 | 32 | - | - | 32 | 0.559

3.2.2 HISTOGRAM OF WEIGHTS, ACTIVATIONS AND GRADIENTS Figure 2 shows the histogram of gradients of layer "conv3" of the "1-2-6" | 1606.06160#27 | 1606.06160#29 | 1606.06160 | [
"1502.03167"
] |
1606.06160#29 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | AlexNet model at epochs 5 and 35. As the histogram remains mostly unchanged with epoch number, we omit the histograms of the other epochs for clarity. Figure 3(a) shows the histogram of weights of layer "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. Though the scale of the weights changes with epoch number, the distribution of weights is approximately symmetric. Figure 3(b) shows the histogram of activations of layer | 1606.06160#28 | 1606.06160#30 | 1606.06160 | [
"1502.03167"
] |
1606.06160#30 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. The distributions of activations are stable throughout the training process. 3.3 MAKING FIRST AND LAST LAYER LOW BITWIDTH To answer the question of whether the first and the last layer need to be treated specially when quantizing to low bitwidth, we use the same Models A, B, C from Table 1 to find out if it is cost-effective to quantize the first and last layers to low bitwidth, and collect the results in Table 3. It can be seen that quantizing the first and the last layer indeed leads to significant accuracy degradation, and models with fewer channels suffer more. The degradation to some extent justifies the practice of BNN and XNOR-Net of not quantizing these two layers. | 1606.06160#29 | 1606.06160#31 | 1606.06160 | [
"1502.03167"
] |
1606.06160#31 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Figure 1: Prediction accuracy of AlexNet variants on the validation set of ImageNet, indexed by epoch number. "W-A-G" gives the specification of bitwidths of weights, activations and gradients; e.g., "1-2-4" stands for the case where weights are 1-bit, activations are 2-bit and gradients are 4-bit. The figure is best viewed in color. Figure 2: | 1606.06160#30 | 1606.06160#32 | 1606.06160 | [
"1502.03167"
] |
1606.06160#32 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Histogram of gradients of layer "conv3" of the "1-2-6" AlexNet model at epochs 5 and 35. The y-axis is in logarithmic scale. # 4 DISCUSSION AND RELATED WORK By binarizing weights and activations, binarized neural networks like BNN and XNOR-Net have enabled acceleration of the forward pass of a neural network with the bit convolution kernel. However, the backward pass of binarized networks still requires convolutions between floating-point gradients and weights, which cannot efficiently exploit the bit convolution kernel, as gradients are in general not low-bitwidth numbers. | 1606.06160#31 | 1606.06160#33 | 1606.06160 | [
"1502.03167"
] |
1606.06160#33 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Figure 3: (a) is the histogram of weights of layer "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. There are two possible values at a specific epoch since the weights are scaled 1-bit. (b) is the histogram of activations of layer "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. There are four possible values at a specific epoch since the activations are 2-bit. Table 3: Control experiments investigating the degradation caused by quantizing the first convolutional layer and the last FC layer to low bitwidth. The row "(1, 2, 4)" stands for the baseline case of (W, A, G) = (1, 2, 4) without quantizing the first and last layers. "+ first" means additionally quantizing the weights and gradients of the first convolutional layer (outputs of the first layer are already quantized in the base "(1, 2, 4)" scheme). "+ last" means quantizing the inputs, weights and gradients of the last FC layer. Note that outputs of the last layer do not need quantization.

Scheme | Model A Accuracy | Model B Accuracy | Model C Accuracy
---|---|---|---
(1, 2, 4) | 0.975 | 0.969 | 0.939
(1, 2, 4) + first | 0.972 | 0.963 | 0.932
(1, 2, 4) + last | 0.973 | 0.969 | 0.927
(1, 2, 4) + first + last | 0.971 | 0.961 | 0.928

(Lin et al., 2015) takes a step further towards low-bitwidth gradients by converting some multiplications to bit-shifts. However, the number of additions between high-bitwidth numbers remains at the same order of magnitude as before, leading to reduced overall speedup. There is also another series of work (Seide et al., 2014) that quantizes gradients before communication in distributed computation settings. | 1606.06160#32 | 1606.06160#34 | 1606.06160 | [
"1502.03167"
] |
1606.06160#34 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | However, that work is more concerned with decreasing the amount of communication traffic, and does not deal with the bitwidth of gradients used in back-propagation. In particular, they use full-precision gradients during the backward pass, and quantize the gradients only before sending them to other computation nodes. In contrast, we quantize gradients each time before they reach the selected convolution layers during the backward pass. To the best of our knowledge, our work is the first to reduce the bitwidth of gradients to 6-bit and lower while still achieving comparable prediction accuracy, without altering other aspects of the neural network model, such as increasing the number of channels, for models as large as AlexNet on the ImageNet dataset. | 1606.06160#33 | 1606.06160#35 | 1606.06160 | [
"1502.03167"
] |
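As a hedged illustration of the gradient quantization this line of work relies on, here is a generic unbiased stochastic-rounding sketch (an assumption-level example, not the exact DoReFa gradient transform, which additionally rescales by the maximum gradient magnitude):

```python
import numpy as np

def stochastic_quantize_k(x, k, rng=np.random.default_rng(0)):
    """Unbiased stochastic rounding of x in [0, 1] to k bits: E[q(x)] = x."""
    n = 2**k - 1
    scaled = x * n
    floor = np.floor(scaled)
    # Round up with probability equal to the fractional part.
    return (floor + (rng.random(np.shape(x)) < (scaled - floor))) / n

# Because the estimate is unbiased, quantization noise averages out over SGD steps:
g = np.full(100_000, 0.37)
assert abs(stochastic_quantize_k(g, 2).mean() - 0.37) < 1e-2
```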
1606.06160#35 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | # 5 CONCLUSION AND FUTURE WORK We have introduced DoReFa-Net, a method to train a convolutional neural network that has low-bitwidth weights and activations using low-bitwidth parameter gradients. We find that weights and activations can be deterministically quantized, while gradients need to be stochastically quantized. As most convolutions during the forward/backward passes now take low-bitwidth weights and activations/gradients respectively, DoReFa-Net can use the bit convolution kernels to accelerate both the training and inference processes. Our experiments on the SVHN and ImageNet datasets demonstrate that DoReFa-Nets can achieve prediction accuracy comparable to their 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights and 2-bit activations can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on the ImageNet validation set. As future work, it would be interesting to investigate using FPGAs to train DoReFa-Net, as the $O(B^2)$ resource requirement of computation units for B-bit arithmetic on FPGA strongly favors low-bitwidth convolutions. | 1606.06160#34 | 1606.06160#36 | 1606.06160 | [
"1502.03167"
] |
1606.06160#36 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | # REFERENCES Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. Bengio, Yoshua, Léonard, Nicholas, and Courville, Aaron. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. | 1606.06160#35 | 1606.06160#37 | 1606.06160 | [
"1502.03167"
] |
1606.06160#37 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Chen, Tianshi, Du, Zidong, Sun, Ninghui, Wang, Jia, Wu, Chengyong, Chen, Yunji, and Temam, Olivier. DianNao: A small-footprint high-throughput accelerator for ubiquitous machine learning. In ACM SIGPLAN Notices, volume 49, pp. 269-284. ACM, 2014a. Chen, Yunji, Luo, Tao, Liu, Shaoli, Zhang, Shijin, He, Liqiang, Wang, Jia, Li, Ling, Chen, Tianshi, Xu, Zhiwei, Sun, Ninghui, et al. DaDianNao: | 1606.06160#36 | 1606.06160#38 | 1606.06160 | [
"1502.03167"
] |
1606.06160#38 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | A machine-learning supercomputer. In Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture, pp. 609-622. IEEE Computer Society, 2014b. Courbariaux, Matthieu and Bengio, Yoshua. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016. Courbariaux, Matthieu, Bengio, Yoshua, and David, Jean-Pierre. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014. | 1606.06160#37 | 1606.06160#39 | 1606.06160 | [
"1502.03167"
] |
1606.06160#39 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR 2009), IEEE Conference on, pp. 248-255. IEEE, 2009. Farabet, Clément, LeCun, Yann, Kavukcuoglu, Koray, Culurciello, Eugenio, Martini, Berin, Akselrod, Polina, and Talay, Selcuk. Large-scale FPGA-based convolutional networks. Scaling up Machine Learning: Parallel and Distributed Approaches, pp. 399-419, 2011. Gong, Yunchao, Liu, Liu, Yang, Ming, and Bourdev, Lubomir. | 1606.06160#38 | 1606.06160#40 | 1606.06160 | [
"1502.03167"
] |
1606.06160#40 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014. Gupta, Suyog, Agrawal, Ankur, Gopalakrishnan, Kailash, and Narayanan, Pritish. Deep learning with limited numerical precision. arXiv preprint arXiv:1502.02551, 2015. Han, Song, Mao, Huizi, and Dally, William J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a. | 1606.06160#39 | 1606.06160#41 | 1606.06160 | [
"1502.03167"
] |
1606.06160#41 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Han, Song, Pool, Jeff, Tran, John, and Dally, William. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015b. Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012a. Hinton, Geoffrey, Srivastava, Nitsh, and Swersky, Kevin. Neural networks for machine learning. Coursera, video lectures, 264, 2012b. Ioffe, Sergey and Szegedy, Christian. | 1606.06160#40 | 1606.06160#42 | 1606.06160 | [
"1502.03167"
] |
1606.06160#42 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. Kim, Minje and Smaragdis, Paris. Bitwise neural networks. arXiv preprint arXiv:1601.06071, 2016. Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012. Li, Fengfu and Liu, Bin. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016. Lin, Zhouhan, Courbariaux, Matthieu, Memisevic, Roland, and Bengio, Yoshua. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015. Merolla, Paul, Appuswamy, Rathinakumar, Arthur, John, Esser, Steve K, and Modha, Dharmendra. Deep neural networks are robust to weight binarization and other non-linear distortions. arXiv preprint arXiv:1606.01981, 2016. Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, pp. 5. | 1606.06160#41 | 1606.06160#43 | 1606.06160 | [
"1502.03167"
] |
1606.06160#43 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Granada, Spain, 2011. Pham, Phi-Hung, Jelaca, Darko, Farabet, Clement, Martini, Berin, LeCun, Yann, and Culurciello, Eugenio. NeuFlow: Dataflow vision processing system-on-a-chip. In Circuits and Systems (MWSCAS), 2012 IEEE 55th International Midwest Symposium on, pp. 1044-1047. IEEE, 2012. Rastegari, Mohammad, Ordonez, Vicente, Redmon, Joseph, and Farhadi, Ali. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016. Seide, Frank, Fu, Hao, Droppo, Jasha, Li, Gang, and Yu, Dong. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In INTERSPEECH, pp. 1058-1062, 2014. Vanhoucke, Vincent, Senior, Andrew, and Mao, Mark Z. | 1606.06160#42 | 1606.06160#44 | 1606.06160 | [
"1502.03167"
] |
1606.06160#44 | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | Improving the speed of neural networks on CPUs. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, volume 1, 2011. Wu, Jiaxiang, Leng, Cong, Wang, Yuhang, Hu, Qinghao, and Cheng, Jian. Quantized convolutional neural networks for mobile devices. arXiv preprint arXiv:1512.06473, 2015. | 1606.06160#43 | 1606.06160 | [
"1502.03167"
] |
|
1606.04460#0 | Model-Free Episodic Control | arXiv:1606.04460v1 [stat.ML] 14 Jun 2016 # Model-Free Episodic Control Charles Blundell Google DeepMind [email protected] Benigno Uria Google DeepMind [email protected] Alexander Pritzel Google DeepMind [email protected] Yazhe Li Google DeepMind [email protected] Avraham Ruderman Google DeepMind [email protected] Joel Z Leibo Google DeepMind [email protected] Jack Rae Google DeepMind [email protected] Daan Wierstra Google DeepMind [email protected] Demis Hassabis Google DeepMind [email protected] # Abstract State-of-the-art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance. Humans, on the other hand, can very quickly exploit highly rewarding nuances of an environment upon first | 1606.04460#1 | 1606.04460 | [
"1512.08457"
] |
|
1606.04460#1 | Model-Free Episodic Control | discovery. In the brain, such rapid learning is thought to depend on the hippocampus and its capacity for episodic memory. Here we investigate whether a simple model of hippocampal episodic control can learn to solve difficult sequential decision-making tasks. We demonstrate that it not only attains a highly rewarding strategy significantly faster than state-of-the-art deep reinforcement learning algorithms, but also achieves a higher overall reward on some of the more challenging domains. # 1 Introduction Deep reinforcement learning has recently achieved notable successes in a variety of domains [23, 32]. | 1606.04460#0 | 1606.04460#2 | 1606.04460 | [
"1512.08457"
] |
1606.04460#2 | Model-Free Episodic Control | However, it is very data inefï¬ cient. For example, in the domain of Atari games [2], deep Reinforcement Learning (RL) systems typically require tens of millions of interactions with the game emulator, amounting to hundreds of hours of game play, to achieve human-level performance. As pointed out by [13], humans learn to play these games much faster. This paper addresses the question of how to emulate such fast learning abilities in a machineâ without any domain-speciï¬ c prior knowledge. | 1606.04460#1 | 1606.04460#3 | 1606.04460 | [
"1512.08457"
] |
1606.04460#3 | Model-Free Episodic Control | Current deep RL algorithms may happen upon, or be shown, highly rewarding sequences of actions. Unfortunately, due to their slow gradient-based updates of underlying policy or value functions, these algorithms require large numbers of steps to assimilate such information and translate it into policy improvement. Thus these algorithms lack the ability to rapidly latch onto successful strategies. Episodic control, introduced by [16], is a complementary approach that can rapidly re-enact observed, successful policies. Episodic control records highly rewarding experiences and follows a policy that replays sequences of actions that previously yielded high returns. In the brain, this form of very fast learning is critically supported by the hippocampus and related medial temporal lobe structures [1, 34]. | 1606.04460#2 | 1606.04460#4 | 1606.04460 | [
"1512.08457"
] |
1606.04460#4 | Model-Free Episodic Control | For example, a ratâ s performance on a task requiring navigation to a hidden platform is impaired by lesions to these structures [24, 36]. Hippocampal learning is thought to be instance-based [18, 35], in contrast to the cortical system which represents generalised statistical summaries of the input distribution [20, 27, 41]. The hippocampal system may be used to guide sequential decision-making by co-representing environment states with the returns achieved from the various possible actions. After such encoding, at a given probe state, the return associated to each possible action could be retrieved by pattern completion in the CA3 subregion [9, 21, 26, 40]. The ï¬ nal value achieved by a sequence of actions could quickly become associated with each of its component state-action pairs by the reverse-ordered replay of hippocampal place cell activations that occurs after a rewarding event [7]. Humans and animals utilise multiple learning, memory, and decision systems each best suited to different settings [5, 33]. For example, when an accurate model of the environment is available, and there are sufï¬ cient time and working memory resources, the best strategy is model-based planning associated with prefrontal cortex [5]. However, when there is no time or no resources available for planning, the less compute-intensive immediate decision systems must be employed [29]. This presents a problem early on in the learning of a new environment as the model-free decision system will be even less accurate in this case since it has not yet had enough repeated experience to learn an accurate value function. In contrast, this is the situation where model-free episodic control may be most useful. Thus the argument for hippocampal involvement in model-free control parallels the argument for its involvement in model-based control. In both cases quick-to-learn instance-based control policies serve as a rough approximation while a slower more generalisable decision system is trained up [16]. The domain of applicability of episodic control may be hopelessly limited by the complexity of the world. In real environments the same exact situation is rarely, if ever, revisited. In RL terms, repeated visits to the exactly the same state are also extremely rare. Here we show that the commonly used Atari environments do not have this property. In fact, we show that the agents developed in this work re-encounter exactly the same Atari states between 10-60% of the time. | 1606.04460#3 | 1606.04460#5 | 1606.04460 | [
"1512.08457"
] |
1606.04460#5 | Model-Free Episodic Control | As expected, the episodic controller works well in such a setting. The key test for this approach is whether it can also work in more realistic environments where states are never repeated and generalisation over similar states is essential. Critically, we also show that our episodic control model still performs well in such (3D) environments where the same state is essentially never re-visited. # 2 The episodic controller In reinforcement learning [e.g.|37], an agent interacts with an environment through a sequence of states, s, â ¬ S; actions, a; â ¬ A; and rewards r;41 â ¬ IR. Actions are determined by the agentâ s policy 1 (az|8z), a probability distribution over the action a;. The goal of the agent is to learn a policy that maximises the expected discounted return Ry = yet 7~1r447 where T is the time step at which each episode ends, and + â ¬ (0, 1] the discount rate. Upon executing an action a, the agent transitions from state s;, to state $141. Environments with deterministic state transitions and rewards are common in daily experience. For example, in navigation, when you exit a room and then return back, you usually end up in the room where you started. This property of natural environments can be exploited by RL algorithms or brains. However, most existing scalable deep RL algorithms (such as DQN [23] and A3C [22]) do not do so. They were designed with more general environments in mind. Thus, in principle, they could operate in regimes with high degrees of stochasticity in both transitions and rewards. This generality comes at the cost of longer learning times. DQN and A3C both attempt to ï¬ nd a policy with maximal expected return. Evaluating the expected return requires many examples in order to get accurate estimates. Additionally, these algorithms are further slowed down by gradient descent learning, typically in lock-step with the rate at which actions are taken in the environment. Given the ubiquity of such near-deterministic situations in the real world, it would be surprising if the brain did not employ specialised learning mechanisms to exploit this structure and thereby learn more quickly in such cases. The episodic controller model of hippocampal instance-based learning we propose here is just such a mechanism. | 1606.04460#4 | 1606.04460#6 | 1606.04460 | [
"1512.08457"
] |
1606.04460#6 | Model-Free Episodic Control | It is a non-parametric model that rapidly records and replays the sequence of actions that so far yielded the highest return from a given start state. In its simplest form, it is a growing table, indexed by states and actions. By analogy with RL value functions, we denote this table QEC(s, a). Each entry contains the highest return ever obtained by taking action a from state s. 2 The episodic control policy picks the action with the highest value in QEC for the given state. At the end of each episode, QEC is updated according to the return received as follows: 7 EC EC Ry, if (st,a4) â ¬ Qh, Qâ ¢(st,41) BC ( . ) (1) max {Q®°(s;,a1), Ri} otherwise, where Rt is the discounted return received after taking action at in state st. Note that (1) is not a general purpose RL learning update: since the stored value can never decrease, it is not suited to rational action selection in stochastic environments.1 Tabular RL methods suffer from two key deï¬ ciencies: ï¬ rstly, for large problems they consume a large amount of memory, and secondly, they lack a way to generalise across similar states. To address the ï¬ rst problem, we limit the size of the table by removing the least recently updated entry once a maximum size has been reached. Such forgetting of older, less frequently accessed memories also occurs in the brain [8]. In large scale RL problems (such as real life) novel states are common; the real world, in general, also has this property. We address the problem of what to do in novel states and how to generalise values across common experiences by taking QEC to be a non-parametric nearest-neighbours model. Let us assume that the state space S is imbued with a metric distance. For states that have never been visited, QEC is approximated by averaging the value of the k nearest states. Thus if s is a novel state then QEC is estimated as OF (5,0) = i na Q) ,a) otherwise, where s(i), i = 1, . . . , k are the k states with the smallest distance to state s.2 Algorithm 1 describes the most basic form of the model-free episodic control. The algorithm has two phases. | 1606.04460#5 | 1606.04460#7 | 1606.04460 | [
"1512.08457"
] |
1606.04460#7 | Model-Free Episodic Control | First, the policy implied by QEC is executed for a full episode, recording the rewards received at each step. This is done by projecting each observation from the environment ot via an embedding function Ï to a state in an appropriate state space: st = Ï (ot), then selecting the action with the highest estimated return according to QEC. In the second phase, the rewards, actions and states from an episode are associated via a backward replay process into QEC to improve the policy. Interestingly, this backward replay process is a potential algorithmic instance of the awake reverse replay of hippocampal states shown by [7], although as yet, we are unaware of any experiments testing this interesting use of hippocampus. # Algorithm 1 Model-Free Episodic Control. 1: for each episode do 2 for t = 1,2,3,...,T do 3 Receive observation o; from environment. 4: Let s, = $(0). 5: Estimate return for each action a via 6 Let a; = arg max, QFC(s;, a) 7 Take action a;, receive reward rp41 8: end for 9: fort =7,Tâ 1,...,1do 10: Update QF°(s;, az) using R; according to 11: end for 12: end for The episodic controller acts according to the returns recorded in QEC, in an attempt to replay successful sequences of actions and recreate past successes. The values stored in QEC(s, a) thus do 1Following a policy that picks the action with the highest QEC value would yield a strong risk seeking behaviour in stochastic environments. It is also possible to, instead, remove the max operator and store Rt directly. This yields a less optimistic estimate and performed worse in preliminary experiments. 2 In practice, we implemented this by having one kNN buffer for each action a â A and ï¬ nding the k closest states in each buffer to state s. | 1606.04460#6 | 1606.04460#8 | 1606.04460 | [
"1512.08457"
] |
1606.04460#8 | Model-Free Episodic Control | 3 not correspond to estimates of the expected return, rather they are estimates of the highest potential return for a given state and action, based upon the states, rewards and actions seen. Computing and behaving according to such a value function is useful in regimes where exploitation is more important than exploration, and where there is relatively little noise in the environment. # 3 Representations In the brain, the hippocampus operates on a representation which notably includes the output of the ventral stream [3, 15, 38]. Thus it is expected to generalise along the dimensions of that representation space [19]. Similarly, the feature mapping, Ï , can play a critical role in how our episodic control algorithm performs when it encounters novel states3. Whilst the original observation space could be used, this may not work in practice. For example, each frame in the environments we consider in Section 4 would occupy around 28 KBytes of memory and would require more than 300 gigabytes of memory for our experiments. Instead we consider two different embeddings of observations into a state space, Ï , each having quite distinctive properties in setting the inductive bias of the QEC estimator. One way of decreasing memory and computation requirements is to utilise a random projection of the original observations into a smaller-dimensional space, i.e. ¢ : x â Az, where A â ¬ R?*? and F' < D where D is the dimensionality of the observation. For a random matrix A with entries drawn from a standard Gaussian, the Johnson-Lindenstrauss lemma implies that this transformation approximately preserves relative distances in the original space [10]. We expect this representation to be sufficient when small changes in the original observation space correspond to small changes in the underlying return. For some environments, many aspects of the observation space are irrelevant for value prediction. For example, illumination and textured surfaces in 3D environments (e.g. Labyrinth in Section 4), and scrolling backgrounds in 2D environments (e.g. River Raid in Section 4) may often be irrele- vant. In these cases, small distances in the original observation space may not be correlated with small distances in action-value. A feature extraction method capable of extracting a more abstract representation of the state space (e.g. 3D geometry or the position of sprites in the case of 2D video-games) could result in a more suitable distance calculation. | 1606.04460#7 | 1606.04460#9 | 1606.04460 | [
"1512.08457"
] |
1606.04460#9 | Model-Free Episodic Control | Abstract features can be obtained by using latent-variable probabilistic models. Variational autoencoders (VAE; [12, 30]), further described in the supplementary material, have shown a great deal of promise across a wide range of unsupervised learning problems on images. Interestingly, the latent representations learnt by VAEs in an unsupervised fashion can lie on well structured manifolds capturing salient factors of variation [12, Figures 4(a) and (b)]; [30, Figure 3(b)]. In our experiments, we train the VAEs on frames from an agent acting randomly. Using a different data source will yield different VAE features, and in principle features from one task can be used in another. Furthermore, the distance metric for comparing embeddings could also be learnt. We leave these two interesting extensions to future work. # 4 Experimental results We tested our algorithm on two environments: the Arcade Learning Environment (Atari) [2], and a ï¬ rst-person 3-dimensional environment called Labyrinth [22]. Videos of the trained agents are available online4. The Arcade Learning Environment is a suite of arcade games originally developed for the Atari-2600 console. These games are relatively simple visually but require complex and precise policies to achieve high expected reward [23]. Labyrinth provides a more complex visual experience, but requires relatively simple policies e.g. turning when in the presence of a particular visual cue. The three Labyrinth environments are foraging tasks with appetitive, adversive and sparse appetitive reward structures, respectively. 3One way to understand this is that this feature mapping Ï determines the dynamic discretization of the state-space into Voronoi cells implied by the k-nearest neighbours algorithm underlying the episodic controller. 4https://sites.google.com/site/episodiccontrol/ 4 For each environment, we tested the performance of the episodic controller using two embeddings of the observations Ï : (1) 64 random-projections of the pixel observations and (2) the 64 parameters of a Gaussian approximation to the posterior over the latent dimensions in a VAE. For the experiments that use latent features from a VAE, a random policy was used for one million frames at the beginning of training, these one million observations were used to train the VAE. The episodic controller is started after these one million frames, and uses the features obtained from the VAE. | 1606.04460#8 | 1606.04460#10 | 1606.04460 | [
"1512.08457"
] |
1606.04460#10 | Model-Free Episodic Control | Both mean and log-standard-deviation parameters were used as dimensions in the calculation of Euclidean distances. To account for the initial phase of training we displaced performance curves for agents that use VAE features by one million frames. # 4.1 Atari For the Atari experiments we considered a set of ï¬ ve games, namely: Ms. PAC-MAN, Q*bert, River Raid, Frostbite, and Space Invaders. We compared our algorithm to the original DQN algorithm [23], to DQN with prioritised replay [31], and to the asynchronous advantage actor-critic [22] (A3C), a state-of-the-art policy gradient method 5. | 1606.04460#9 | 1606.04460#11 | 1606.04460 | [
"1512.08457"
] |
1606.04460#11 | Model-Free Episodic Control | Following [23], observations were rescaled to 84 by 84 pixels and converted to gray-scale. The Atari simulator produces 60 observations (frames) per second of game play. The agents interact with the environment 15 times per second, as actions are repeated 4 times to decrease the computational requirements. An hour of game play corresponds to approximately 200,000 frames. In the episodic controller, the size of each buffer (one per action) of state-value pairs was limited to one million entries. If the buffer is full and a new state-value pair has to be introduced, the least recently used state is discarded. The k-nearest-neighbour lookups used k = 11. The discount rate was set to y = 1. Exploration is achieved by using an ¢-greedy policy with « = 0.005. We found that higher exploration rates were not as beneficial, as more exploration makes exploiting what is known harder. Note that previously published exploration rates (e.g., [22||23]) are at least a factor of ten higher. Thus interestingly, our method attains good performance on a wide range of domains with relatively little random exploration. Results are shown in the top two rows of Figure 1. In terms of data efï¬ ciency the episodic controller outperformed all other algorithms during the initial learning phase of all games. On Q*bert and River Raid, the episodic controller is eventually overtaken by some of the parametric controllers (not shown in Figure 1). After an initial phase of fast learning the episodic controller was limited by the decrease in the relative amount of new experience that could be obtained in each episode as these become longer. In contrast the parametric controllers could utilise their non-local generalisation capabilities to handle the later stages of the games. The two different embeddings (random projections and VAE) did not have a notable effect on the performance of the episodic control policies. Both representations proved more data efï¬ cient than the parametric policies. The only exception is Frostbite where the VAE features perform noticeably worse. This may be due to the inability of a random policy to reach very far in the game, which results in a very poor training-set for the VAE. Deep Q-networks and A3C exhibited a slow pace of policy improvement in Atari. | 1606.04460#10 | 1606.04460#12 | 1606.04460 | [
"1512.08457"
] |
1606.04460#12 | Model-Free Episodic Control | For Frostbite and Ms. PAC-MAN, this has, sometimes, been attributed to naive exploration techniques [13}|28]. Our results demonstrate that a simple exploration technique like e-greedy can result in much faster policy improvements when combined with a system that is able to learn in a one-shot fashion. The Atari environment has deterministic transitions and rewards. Each episode starts at one of thirty possible initial states. Therefore a sizeable percentage of states-action pairs are exactly matched in the buffers of Q-values: about 10% for Frostbite, 60% for Q*bert, 50% for Ms. PAC-MAN, 45% for Space Invaders, and 10% for River Raid. In the next section we report experiments on a set of more realistic environments where the same exact experience is seldom encountered twice. 5We are forever indebted to Tom Schaul for the prioritised replay baseline and Andrei Rusu for the A3C baseline. 5 Ms. Pac-Man Space Invaders Frostbite 2.5 4.0 Â¥ 6 5 2.0 35 i) . 3.0 34 2.5 i 15 . F 3 2.0 £ 1.0 3? 0 S 0.5 . gl 0.5 % 0 0.0 0.0 0 10 20 30 40 50 0 10 20 30 40 50 0 10 20 30 40 50 Q*bert River Raid wu 14 12 212 rf 10 10 2. ® E 6 <£ 6 S 2 2 0 ) 0 10 20 30 40 50 0 10 20 30 40 50 35 Forage 14 Forage & Avoid Double T-Maze 30 25 20 15 10 12 10 Scores ON BODO PORN WAU 0 10 20 30 40 50 0 10 20 30 40 50 0 10 20 30 40 50 Millions of Frames Millions of Frames Millions of Frames â â | 1606.04460#11 | 1606.04460#13 | 1606.04460 | [
"1512.08457"
] |
1606.04460#13 | Model-Free Episodic Control | DQN = Prioritised DQN â â â A3C â â = EC-VAE â â EC-RP Figure 1: Average reward vs. number of frames (in millions) experienced for ï¬ ve Atari games and three Labyrinth environments. Dark curves show the mean of ï¬ ve runs (results from only one run were available for DQN baselines) initialised with different random number seeds. Light shading shows the standard error of the mean across runs. Episodic controllers (orange and blue curves) outperform parametric Q-function estimators (light green and pink curves) and A3C (dark green curve) in the initial phase of learning. VAE curves start after one million frames to account for their training using a random policy. | 1606.04460#12 | 1606.04460#14 | 1606.04460 | [
"1512.08457"
] |
1606.04460#14 | Model-Free Episodic Control | # 4.2 Labyrinth The Labyrinth experiments involved three levels (screenshots are shown in Figure 2). The environment runs at 60 observations (frames) per simulated second of physical time. Observations are gray-scale images of 84 by 84 pixels. The agent interacts with the environment 15 times per second; actions are automatically repeated for 4 frames (to reduce computational requirements). The agent has eight different actions available to it (move-left, move-right, turn-left, turn-right, move-forward, move- backwards, move-forward and turn-left, move-forward and turn-right). In the episodic controller, the size of each buffer (one per action) of state-value pairs was limited to one hundred thousand entries. When the buffer was full and a new state-value pair had to be introduced, the least recently used | 1606.04460#13 | 1606.04460#15 | 1606.04460 | [
"1512.08457"
] |
1606.04460#15 | Model-Free Episodic Control | 6 (b) (c) (a) Figure 2: High-resolution screenshots of the Labyrinth environments. (a) Forage and Avoid showing the apples (positive rewards) and lemons (negative rewards). (b) Double T-maze showing cues at the turning points. (c) Top view of a Double T-maze conï¬ guration. The cues indicate the reward is located at the top left. state was discarded. The k-nearest-neighbour lookups used k = 50. The discount rate was set to 7 = 0.99. Exploration is achieved by using an e-greedy policy with « = 0.005. As a baseline, we used A3C [22]. Labyrinth levels have deterministic transitions and rewards, but the initial location and facing direction are randomised, and the environment is much richer, being 3-dimensional. For this reason, unlike Atari, experiments on Labyrinth encounter very few exact matches in the buffers of QF°-values; less than 0.1% in all three levels. Each level is progressively more difï¬ | 1606.04460#14 | 1606.04460#16 | 1606.04460 | [
"1512.08457"
] |
1606.04460#16 | Model-Free Episodic Control | cult. The ï¬ rst level, called Forage, requires the agent to collect apples as quickly as possible by walking through them. Each apple provides a reward of 1. A simple policy of turning until an apple is seen and then moving towards it sufï¬ ces here. Figure 1 shows that the episodic controller found an apple seeking policy very quickly. Eventually A3C caught up, and ï¬ nal outperforms the episodic controller with a more efï¬ cient strategy for picking up apples. The second level, called Forage and Avoid involves collecting apples, which provide a reward of 1, while avoiding lemons which incur a reward of â 1. The level is depicted in Figure 2(a). This level requires only a slightly more complicated policy then Forage (same policy plus avoid lemons) yet A3C took over 40 million steps to the same performance that episodic control attained in fewer than 3 million frames. The third level, called Double-T-Maze, requires the agent to walk in a maze with four ends (a map is shown in Figure 2(c)) one of the ends contains an apple, while the other three contain lemons. At each intersection the agent is presented with a colour cue that indicates the direction in which the apple is located (see Figure 2(b)): left, if red, or right, if green. If the agent walks through a lemon it incurs a reward of â | 1606.04460#15 | 1606.04460#17 | 1606.04460 | [
"1512.08457"
] |
1606.04460#17 | Model-Free Episodic Control | 1. However, if it walks through the apple, it receives a reward of 1, is teleported back to the starting position and the location of the apple is resampled. The duration of an episode is limited to 1 minute in which it can reach the apple multiple times if it solves the task fast enough. Double-T-Maze is a difï¬ cult RL problem: rewards are sparse. In fact, A3C never achieved an expected reward above zero. Due to the sparse reward nature of the Double T-Maze level, A3C did not update the policy strongly enough in the few instances in which a reward is encountered through random diffusion in the state space. In contrast, the episodic controller exhibited behaviour akin to one-shot learning on these instances, and was able to learn from the very few episodes that contain any rewards different from zero. This allowed the episodic controller to observe between 20 and 30 million frames to learn a policy with positive expected reward, while the parametric policies never learnt a policy with expected reward higher than zero. In this case, episodic control thrived in sparse reward environment as it rapidly latched onto an effective strategy. | 1606.04460#16 | 1606.04460#18 | 1606.04460 | [
"1512.08457"
] |
1606.04460#18 | Model-Free Episodic Control | # 4.3 Effect of number of nearest neighbours on ï¬ nal score Finally, we compared the effect of varying k (the number of nearest neighbours) on both Labyrinth and Atari tasks using VAE features. In our experiments above, we noticed that on Atari re-visiting the same state was common, and that random projections typically performed the same or better than VAE features. One further interesting feature is that the learnt VAEs on Atari games do not yield a higher score as the number of neighbours increases, except on one game, Q*bert, where VAEs perform reasonably well (see Figure 3a). On Labyrinth levels, we observed that the VAEs outperformed random projections and the agent rarely encountered the same state more than once. Interestingly for this case, Figure 3b shows that increasing the number of nearest neighbours has a | 1606.04460#17 | 1606.04460#19 | 1606.04460 | [
"1512.08457"
] |
1606.04460#19 | Model-Free Episodic Control | 7 (a) Atari games. (b) Labyrinth levels. Figure 3: Effect of number of neighbours, k, on on ï¬ nal score (y axis). signiï¬ cant effect on the ï¬ nal performance of the agent in Labyrinth levels. This strongly suggests that VAE features provide the episodic control agent with generalisation in Labyrinth. # 5 Discussion This work tackles a critical deï¬ ciency in current reinforcement learning systems, namely their inability to learn in a one-shot fashion. We have presented a fast-learning system based on non-parametric memorisation of experience. We showed that it can learn good policies faster than parametric function approximators. However, it may be overtaken by them at later stages of training. | 1606.04460#18 | 1606.04460#20 | 1606.04460 | [
"1512.08457"
] |
1606.04460#20 | Model-Free Episodic Control | It is our hope that these ideas will ï¬ nd application in practical systems, and result in data-efï¬ cient model-free methods. These results also provide support for the hypothesis that episodic control could be used by the brain, especially in the early stages of learning in a new environment. Note also that there are situations in which the episodic controller is always expected to outperform. For example, when hiding food for later consumption, some birds (e.g., scrub jays) are better off remembering their hiding spot exactly than searching according to a distribution of likely locations [4]. These considerations support models in which the brain uses multiple control systems and an arbitration mechanism to determine which to act according to at each point in time [5, 16]. We have referred to this approach as model-free episodic control to distinguish it from model-based episodic planning. We conjecture that both such strategies may be used by the brain in addition to the better-known habitual and goal-directed systems associated with dorsolateral striatum and prefrontal cortex respectively [5]. The tentative picture to emerge from this work is one in which the amount of time and working memory resources available for decision making is a key determiner of which control strategies are available. When decisions must be made quickly, planning-based approaches are simply not an option. In such cases, the only choice is between the habitual model-free system and the episodic model-free system. When decisions are not so rushed, the planning-based approaches become available and the brain must then arbitrate between planning using semantic (neocortical) information or episodic (hippocampal) information. In both timing regimes, the key determiner of whether to use episodic information or not is how much uncertainty remains in the estimates provided by the slower-to-learn system. This prediction agrees with those of [5, 16] with respect to the statistical trade-offs between systems. It builds on their work by highlighting the potential impact of rushed decisions and insufï¬ cient working memory resources in accord with [29]. These ideas could be tested experimentally by manipulations of decision timing or working memory, perhaps by orthogonal tasks, and fast measurements of coherence between medial temporal lobe and output structures under different statistical conditions. | 1606.04460#19 | 1606.04460#21 | 1606.04460 | [
"1512.08457"
] |
1606.04460#21 | Model-Free Episodic Control | # Acknowledgements We are grateful to Dharshan Kumaran and Koray Kavukcuoglu for their detailed feedback on this manuscript. We are indebted to Marcus Wainwright and Max Cant for generating the images in Figure 2. We would also like to thank Peter Dayan, Shane Legg, Ian Osband, Joel Veness, Tim Lillicrap, Theophane Weber, Remi Munos, Alvin Chua, Yori Zwols and many others at Google DeepMind for fruitful discussions. # References [1] Per Andersen, Richard Morris, David Amaral, Tim Bliss, and John O'Keefe. The hippocampus book. Oxford University Press, 2006. [2] M. G. Bellemare, Y. Naddaf, J. Veness, and M. | 1606.04460#20 | 1606.04460#22 | 1606.04460 | [
"1512.08457"
] |
1606.04460#22 | Model-Free Episodic Control | Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 06 2013. [3] Malcolm W Brown and John P Aggleton. Recognition memory: what are the roles of the perirhinal cortex and hippocampus? Nature Reviews Neuroscience, 2(1):51–61, 2001. [4] Nicola S Clayton and Anthony Dickinson. Episodic-like memory during cache recovery by scrub jays. Nature, 395(6699):272–274, 1998. [5] Nathaniel D Daw, Yael Niv, and Peter Dayan. | 1606.04460#21 | 1606.04460#23 | 1606.04460 | [
"1512.08457"
] |
1606.04460#23 | Model-Free Episodic Control | Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature neuroscience, 8(12):1704–1711, 2005. [6] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1538–1546, 2015. [7] David J Foster and Matthew A Wilson. | 1606.04460#22 | 1606.04460#24 | 1606.04460 | [
"1512.08457"
] |
1606.04460#24 | Model-Free Episodic Control | Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature, 440(7084):680–683, 2006. [8] Oliver Hardt, Karim Nader, and Lynn Nadel. Decay happens: the role of active forgetting in memory. Trends in cognitive sciences, 17(3):111–120, 2013. [9] John J Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 79(8):2554–2558, 1982. [10] William B Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary mathematics, 26(189-206):1, 1984. [11] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. | 1606.04460#23 | 1606.04460#25 | 1606.04460 | [
"1512.08457"
] |
1606.04460#25 | Model-Free Episodic Control | Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014. [12] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013. [13] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016. [14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [15] Joel Z. Leibo, Julien Cornebise, Sergio Gomez, and Demis Hassabis. Approximate Hubel-Wiesel modules and the data structures of neural computation. arXiv:1512.08457 [cs.NE], 2015. [16] M. Lengyel and P. Dayan. | 1606.04460#24 | 1606.04460#26 | 1606.04460 | [
"1512.08457"
] |
1606.04460#26 | Model-Free Episodic Control | Hippocampal contributions to control: The third way. In NIPS, volume 20, pages 889–896, 2007. [17] David JC MacKay. Information theory, inference and learning algorithms. Cambridge University Press, 2003. [18] D Marr. Simple memory: A theory for archicortex. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, pages 23–81, 1971. [19] James L McClelland and Nigel H Goddard. | 1606.04460#25 | 1606.04460#27 | 1606.04460 | [
"1512.08457"
] |
1606.04460#27 | Model-Free Episodic Control | Considerations arising from a complementary learning systems perspective on hippocampus and neocortex. Hippocampus, 6(6):654–665, 1996. [20] James L McClelland, Bruce L McNaughton, and Randall C O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review, 102(3):419, 1995. [21] Bruce L McNaughton and Richard GM Morris. Hippocampal synaptic enhancement and information storage within a distributed memory system. Trends in neurosciences, 10(10):408–415, 1987. [22] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. | 1606.04460#26 | 1606.04460#28 | 1606.04460 | [
"1512.08457"
] |
1606.04460#28 | Model-Free Episodic Control | Asynchronous methods for deep reinforcement learning. CoRR, abs/1602.01783, 2016. [23] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. [24] RGM Morris, P Garrud, and JNP Rawlins. | 1606.04460#27 | 1606.04460#29 | 1606.04460 | [
"1512.08457"
] |
1606.04460#29 | Model-Free Episodic Control | Place navigation impaired in rats with hippocampal lesions. Nature, 297:681, 1982. [25] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010. [26] Kazu Nakazawa, Michael C Quirk, Raymond A Chitwood, Masahiko Watanabe, Mark F Yeckel, Linus D Sun, Akira Kato, Candice A Carr, Daniel Johnston, Matthew A Wilson, et al. | 1606.04460#28 | 1606.04460#30 | 1606.04460 | [
"1512.08457"
] |
1606.04460#30 | Model-Free Episodic Control | Requirement for hippocampal CA3 NMDA receptors in associative memory recall. Science, 297(5579):211–218, 2002. [27] Kenneth A Norman and Randall C O'Reilly. Modeling hippocampal and neocortical contributions to recognition memory: a complementary-learning-systems approach. Psychological review, 110(4):611, 2003. [28] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. | 1606.04460#29 | 1606.04460#31 | 1606.04460 | [
"1512.08457"
] |
1606.04460#31 | Model-Free Episodic Control | Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pages 2845–2853, 2015. [29] A Ross Otto, Samuel J Gershman, Arthur B Markman, and Nathaniel D Daw. The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive. Psychological science, page 0956797612463080, 2013. [30] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pages 1278–1286, 2014. [31] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. | 1606.04460#30 | 1606.04460#32 | 1606.04460 | [
"1512.08457"
] |
1606.04460#32 | Model-Free Episodic Control | Prioritized experience replay. CoRR, abs/1511.05952, 2015. [32] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. [33] Larry R Squire. | 1606.04460#31 | 1606.04460#33 | 1606.04460 | [
"1512.08457"
] |
1606.04460#33 | Model-Free Episodic Control | Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans. Psychological review, 99(2):195, 1992. [34] Larry R Squire. Memory systems of the brain: a brief history and current perspective. Neurobiology of learning and memory, 82(3):171–177, 2004. [35] Robert J Sutherland and Jerry W Rudy. Configural association theory: The role of the hippocampal formation in learning, memory, and amnesia. Psychobiology, 17(2):129–144, 1989. [36] Robert J Sutherland, Ian Q Whishaw, and Bob Kolb. A behavioural analysis of spatial localization following electrolytic, kainate- or colchicine-induced damage to the hippocampal formation in the rat. Behavioural brain research, 7(2):133–153, 1983. [37] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, 1998. [38] Wendy L Suzuki and David G Amaral. | 1606.04460#32 | 1606.04460#34 | 1606.04460 | [
"1512.08457"
] |
1606.04460#34 | Model-Free Episodic Control | Perirhinal and parahippocampal cortices of the macaque monkey: cortical afferents. Journal of comparative neurology, 350(4):497–533, 1994. [39] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012. [40] Alessandro Treves and Edmund T Rolls. Computational analysis of the role of the hippocampus in memory. Hippocampus, 4(3):374– | 1606.04460#33 | 1606.04460#35 | 1606.04460 | [
"1512.08457"
] |
1606.04460#35 | Model-Free Episodic Control | 391, 1994. [41] Endel Tulving, CA Hayman, and Carol A Macdonald. Long-lasting perceptual priming and semantic learning in amnesia: a case experiment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(4):595, 1991. # A Variational autoencoders for representation learning Variational autoencoders (VAE; [12, 30]) are latent-variable probabilistic models inspired by compression theory. A VAE (shown in Figure 4) is composed of two artificial neural networks: the encoder, which takes observations and maps them into messages; and a decoder, that receives messages and approximately recovers the observations. VAEs are designed to minimise the cost of transmitting observations from the encoder to the decoder through the communication channel. In order to minimise the transmission cost, a VAE must learn to capture the statistics of the distribution of observations [e.g. 17]. For our representation learning purposes, we use the encoder network as our feature mapping, φ. For several data sets, representations learned by a VAE encoder have been shown to capture the independent factors of variation in the underlying generative process of the data [11]. In more detail, the encoder receives an observation, x, and outputs the parameter-values for a distribution of messages, q(z|x = x). The communication channel determines the cost of a message by a prior distribution over messages p(z). The decoder receives a message, z, drawn at random from q(z|x = x) and decodes it by outputting the parameters of a distribution over observations p(x|z = z). VAEs are trained to minimise the cost of exactly recovering the original observation, given by the sum of the expected communication cost KL(q(z|x) || p(z)) and the expected correction cost E[−log p(x = x|z)]. In all our experiments, x ∈ R^7056 (84 by 84 gray-scale pixels, with range [0, 1]), and z ∈ R^32. We chose distributions q(z|x), p(z), and p(x|z) to be Gaussians with diagonal covariance matrices. | 1606.04460#34 | 1606.04460#36 | 1606.04460 | [
"1512.08457"
] |
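For the diagonal-Gaussian choices above, this cost has a closed-form KL term. The snippet below is a minimal sketch, not the paper's implementation: it assumes p(z) = N(0, I), and `recon_logp` stands in for the decoder's log-density of the observation under a sampled message.

```python
import numpy as np

def vae_cost(mu_q, logstd_q, recon_logp):
    """Per-example VAE cost: KL(q(z|x) || p(z)) plus the expected
    correction (reconstruction) cost, with q diagonal Gaussian and
    p(z) a standard normal."""
    var_q = np.exp(2.0 * logstd_q)
    kl = 0.5 * np.sum(var_q + mu_q**2 - 1.0 - 2.0 * logstd_q)
    return kl - recon_logp
```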
1606.04460#36 | Model-Free Episodic Control | In all experiments the encoder network has four convolutional [14] layers using {32, 32, 64, 64} kernels respectively, kernel sizes {4, 5, 5, 4}, kernel strides {2, 2, 2, 2}, no padding, and ReLU [25] non-linearity. The convolutional layers are followed by a fully connected layer of 512 ReLU units, from which a linear layer outputs the means and log-standard-deviations of the approximate posterior q(z|x). The decoder is set up mirroring the encoder, with a fully connected layer of 512 ReLU units followed by four reverse convolutions [6] with {64, 64, 32, 32} kernels respectively, kernel sizes {4, 5, 5, 4}, kernel strides {2, 2, 2, 2}, no padding, followed by a reverse convolution with two output kernels: one for the mean and one for the log-standard-deviation of p(x|z). The standard deviation of each dimension in p(x|z) is set to 0.05 if the value output by the network is smaller. The VAEs were trained to model a million observations obtained by executing a random policy on each environment. The parameters of the VAEs were optimised by running 400,000 steps of stochastic-gradient descent using the RmsProp optimiser [39], step size of 1e−5, and minibatches of size 100. | 1606.04460#35 | 1606.04460#37 | 1606.04460 | [
"1512.08457"
] |
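The stated encoder maps directly to code. The PyTorch sketch below is an illustration under the assumptions above (a single gray-scale input channel; class and variable names are mine); the 64×3×3 flattened size follows from applying the four stride-2, no-padding convolutions to an 84×84 input.

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Encoder as described: four conv layers ({32,32,64,64} kernels, sizes
    {4,5,5,4}, strides 2, no padding, ReLU), a 512-unit ReLU layer, then a
    linear layer emitting the mean and log-std of q(z|x) with z of size 32."""
    def __init__(self, z_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=4, stride=2), nn.ReLU(),
        )
        # 84 -> 41 -> 19 -> 8 -> 3 spatial positions after the four convs.
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 3 * 3, 512), nn.ReLU())
        self.out = nn.Linear(512, 2 * z_dim)  # means and log-standard-deviations

    def forward(self, x):  # x: (B, 1, 84, 84)
        mu, logstd = self.out(self.fc(self.conv(x))).chunk(2, dim=-1)
        return mu, logstd
```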
1606.04460#37 | Model-Free Episodic Control | Figure 4: Diagram of a variational autoencoder. | 1606.04460#36 | 1606.04460 | [
"1512.08457"
] |
|
1606.04199#0 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | # Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation Jie Zhou Ying Cao Xuguang Wang Peng Li Wei Xu Baidu Research - Institute of Deep Learning Baidu Inc., Beijing, China {zhoujie01,caoying03,wangxuguang,lipeng17,wei.xu}@baidu.com # Abstract | 1606.04199#1 | 1606.04199 | [
"1508.03790"
] |
|
1606.04199#1 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more diffi | 1606.04199#0 | 1606.04199#2 | 1606.04199 | [
"1508.03790"
] |
1606.04199#2 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | cult WMT'14 English-to-German task. # Introduction Neural machine translation (NMT) has attracted a lot of interest in solving the machine translation (MT) problem in recent years (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). Unlike conventional statistical machine translation (SMT) systems (Koehn et al., 2003; Durrani et al., 2014) which consist of multiple separately tuned components, NMT models encode the source sequence into continuous representation space and generate the target sequence in an end-to-end fashion. Moreover, NMT models can also be easily adapted to other tasks such as dialog systems (Vinyals and Le, 2015), question answering systems (Yu et al., 2015) and image caption generation (Mao et al., 2015). In general, there are two types of NMT topologies: the encoder-decoder network (Sutskever et al., 2014) and the attention network (Bahdanau et al., 2015). The encoder-decoder network represents the source sequence with a fixed dimensional vector and the target sequence is generated from this vector word by word. The attention network uses the representations from all time steps of the input sequence to build a detailed relationship between the target words and the input words. Recent results show that the systems based on these models can achieve similar performance to conventional SMT systems (Luong et al., 2015; Jean et al., 2015). However, a single neural model of either of the above types has not been competitive with the best conventional system (Durrani et al., 2014) when evaluated on the WMT' | 1606.04199#1 | 1606.04199#3 | 1606.04199 | [
"1508.03790"
] |
1606.04199#3 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | 14 English-to-French task. The best BLEU score from a single model with six layers is only 31.5 (Luong et al., 2015) while the conventional method of (Durrani et al., 2014) achieves 37.0. We focus on improving the single model performance by increasing the model depth. Deep topology has been proven to outperform the shallow architecture in computer vision. In the past two years the top positions of the ImageNet contest have always been occupied by systems with tens or even hundreds of layers (Szegedy et al., 2015; He et al., 2016). But in NMT, the biggest depth used successfully is only six (Luong et al., 2015). We attribute this problem to the properties of the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) which is widely used in NMT. In the LSTM, there are more non-linear activations than in convolution layers. These activations significantly decrease the magnitude of the gradient in the deep topology, especially when the gradient propagates in recurrent form. There are also many efforts to increase the depth of the LSTM such as the work by Kalchbrenner et al. (2016), where the shortcuts do not avoid the nonlinear and recurrent computation. In this work, we introduce a new type of linear connections for multi-layer recurrent networks. These connections, which are called fast-forward connections, play an essential role in building a deep topology with depth of 16. In addition, we introduce an interleaved bi-directional architecture to stack LSTM layers in the encoder. This topology can be used for both the encoder-decoder network and the attention network. On the WMT'14 English-to-French task, this is the deepest NMT topology that has ever been investigated. With our deep attention model, the BLEU score can be improved to 37.7 outperforming the shallow model which has six layers (Luong et al., 2015) by 6.2 BLEU points. This is also the first time on this task that a single NMT model achieves state-of-the-art performance and outperforms the best conventional SMT system (Durrani et al., 2014) with an improvement of 0.7. Even without using the attention mechanism, we can still achieve 36.3 with a single model. After model ensembling and unknown word processing, the BLEU score can be further improved to 40.4. When evaluated on the subset of the test corpus without unknown words, our model achieves 41.4. As a reference, previous work showed that oracle re-scoring of the 1000-best sequences generated by the SMT model can achieve the BLEU score of about 45 (Sutskever et al., 2014). Our models are also validated on the more diffi | 1606.04199#2 | 1606.04199#4 | 1606.04199 | [
"1508.03790"
] |
1606.04199#4 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | cult WMT'14 English-to-German task. # 2 Neural Machine Translation Neural machine translation aims at generating the target word sequence y = {y_1, . . . , y_n} given the source word sequence x = {x_1, . . . , x_m} with neural models. In this task, the likelihood p(y | x, θ) of the target sequence will be maximized (Forcada and Ñeco, 1997) with parameter θ to learn: p(y | x; θ) = ∏_{j=1}^{n+1} p(y_j | y_{0:j−1}, x; θ) (1) where y_{0:j−1} is the sub sequence from y_0 to y_{j−1}. y_0 and y_{n+1} denote the start mark and end mark of the target sequence respectively. The process can be explicitly split into an encoding part, a decoding part and the interface between these two parts. In the encoding part, the source sequence is processed and transformed into a group of vectors e = {e_1, · · · , e_m} for each time step. Further operations will be used at the interface part to extract the final representation c of the source sequence from e. At the decoding step, the target sequence is generated from the representation c. | 1606.04199#3 | 1606.04199#5 | 1606.04199 | [
"1508.03790"
] |
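The factorization in Eq. 1 turns sequence scoring into a sum of per-step log-probabilities. A minimal illustration follows; `step_probs` is a stand-in for the decoder's probability of each reference word (values stubbed here, including the end mark).

```python
import numpy as np

def sequence_log_likelihood(step_probs):
    """log p(y | x; theta) = sum_j log p(y_j | y_{0:j-1}, x; theta)."""
    return float(np.sum(np.log(step_probs)))

probs = np.random.uniform(0.1, 1.0, size=12)  # one entry per target position
print(sequence_log_likelihood(probs))
```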
1606.04199#5 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Recently, there have been two types of NMT models which are different in the interface part. In the encoder-decoder model (Sutskever et al., 2014), a single vector extracted from e is used as the representation. In the attention model (Bahdanau et al., 2015), c is dynamically obtained according to the relationship between the target sequence and the source sequence. The recurrent neural network (RNN), or its specific form the LSTM, is generally used as the basic unit of the encoding and decoding part. However, the topology of most of the existing models is shallow. In the attention network, the encoding part and the decoding part have only one LSTM layer respectively. In the encoder-decoder network, researchers have used at most six LSTM layers (Luong et al., 2015). Because machine translation is a difficult problem, we believe a more complex encoding and decoding architecture is needed for modeling the relationship between the source sequence and the target sequence. In this work, we focus on enhancing the complexity of the encoding/decoding architecture by increasing the model depth. Deep neural models have been studied in a wide range of problems. In computer vision, models with more than ten convolution layers outperform shallow ones on a series of image tasks in recent years (Srivastava et al., 2015; He et al., 2016; Szegedy et al., 2015). Different kinds of shortcut connections are proposed to decrease the length of the gradient propagation path. Training networks based on LSTM layers, which are widely used in language problems, is a much more challenging task. Because of the existence of many more nonlinear activations and the recurrent computation, gradient values are not stable and are generally smaller. Following the same spirit for convolutional networks, a lot of effort has also been spent on training deep LSTM networks. Yao et al. (2015) introduced depth-gated shortcuts, connecting LSTM cells at adjacent layers, to provide a fast way to propagate the gradients. | 1606.04199#4 | 1606.04199#6 | 1606.04199 | [
"1508.03790"
] |
1606.04199#6 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | They validated the modification of these shortcuts on an MT task and a language modeling task. However, the best score was obtained using models with three layers. Similarly, Kalchbrenner et al. (2016) proposed a two dimensional structure for the LSTM. Their structure decreases the number of nonlinear activations and path length. However, the gradient propagation still relies on the recurrent computation. The investigations were also made on question-answering to encode the questions, where at most two LSTM layers were stacked (Hermann et al., 2015). Based on the above considerations, we propose new connections to facilitate gradient propagation in the following section. # 3 Deep Topology We build the deep LSTM network with the newly proposed linear connections. The shortest paths through the proposed connections do not include any non-linear transformations and do not rely on any recurrent computation. We call these connections fast-forward connections. Within the deep topology, we also introduce an interleaved bi-directional architecture to stack the LSTM layers. # 3.1 Network Our entire deep neural network is shown in Fig. 2. This topology can be divided into three parts: the encoder part (P-E) on the left, the decoder part (P-D) on the right and the interface between these two parts (P-I) which extracts the representation of the source sequence. We have two instantiations of this topology: Deep-ED and Deep-Att, which correspond to the extension of the encoder-decoder network and the attention network respectively. Our main innovation is the novel scheme for connecting adjacent recurrent layers. We will start with the basic RNN model for the sake of clarity. Recurrent layer: When an input sequence {x_1, . . . , x_m} is given to a recurrent layer, the output h_t at each time step t can be computed as (see Fig. 1 (a)) | 1606.04199#5 | 1606.04199#7 | 1606.04199 | [
"1508.03790"
] |
1606.04199#7 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | h_t = σ(W_f x_t + W_r h_{t−1}) = RNN(W_f x_t, h_{t−1}) = RNN(f_t, h_{t−1}), (2) where the bias parameter is not included for simplicity. We use a red circle and a blue empty square to denote an input and a hidden state. A blue square with a '−' denotes the previous hidden state. A dotted line means that the hidden state is used recurrently. This computation can be equivalently split into two consecutive steps: | 1606.04199#6 | 1606.04199#8 | 1606.04199 | [
"1508.03790"
] |
1606.04199#8 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | • Feed-Forward computation: f_t = W_f x_t. Left part in Fig. 1 (b). 'f' block. • Recurrent computation: RNN(f_t, h_{t−1}). Right part and the sum operation (+) followed by activation in Fig. 1 (b). 'r' block. For a deep topology with stacked recurrent layers, the input of each block 'f' at recurrent layer k (denoted by f^k) is usually the output of block 'r' at its previous recurrent layer k−1 (denoted by h^{k−1}). In our work, we add fast-forward connections (F-F connections) which connect two feed-forward computation blocks 'f' of adjacent recurrent layers. It means that each block 'f' at recurrent layer k takes both the outputs of block 'f' and block 'r' at its previous layer as input (Fig. 1 (c)). F-F connections are denoted by dashed red lines in Fig. 1 (c) and Fig. 2. The path of F-F connections contains neither non-linear activations nor recurrent computation. It provides a fast path for information to propagate, so we call this path fast-forward connections. Figure 1: RNN models. The recurrent use of a hidden state is denoted by dotted lines. A '−' mark denotes the hidden value of the previous time step. (a): Basic RNN. (b): Basic RNN with intermediate computational state and the sum operation (+) followed by activation. It consists of block 'f' and block 'r', and is equivalent to (a). (c): Two stacked RNN layers with F-F connections denoted by dashed red lines. | 1606.04199#7 | 1606.04199#9 | 1606.04199 | [
"1508.03790"
] |
1606.04199#9 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | in order to learn more temporal dependencies, the sequences can be processed in different directions at each pair of adjacent recurrent layers. This is quantitatively expressed in Eq. 3: f_t^k = W_f^k · [f_t^{k−1}, h_t^{k−1}], k > 1; f_t^k = W_f^k x_t, k = 1; h_t^k = RNN^k(f_t^k, h_{t+(−1)^k}^k) (3) The opposite directions are marked by the direction term (−1)^k. At the first recurrent layer, the block 'f' | 1606.04199#8 | 1606.04199#10 | 1606.04199 | [
"1508.03790"
] |
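Eq. 3 can be sketched directly in code. The toy NumPy implementation below is illustrative only, not the paper's code: tanh stands in for the activation, a basic RNN replaces the LSTM actually used, and all sizes are made up. It shows the 'f' blocks being purely linear (and connected across layers via F-F), while only the 'r' blocks recur, with the direction alternating per layer.

```python
import numpy as np

def ff_rnn_stack(x, Wf, Wr):
    """Stacked RNN with fast-forward connections and interleaved directions
    (Eq. 3). x: (T, d_in). Wf/Wr: per-layer weight matrices. Layer k runs
    forward when k is odd ((-1)^k = -1), backward when k is even."""
    T = x.shape[0]
    f_prev = h_prev = None
    for k in range(1, len(Wf) + 1):
        inp = x if k == 1 else np.concatenate([f_prev, h_prev], axis=1)
        f = inp @ Wf[k - 1].T                        # block 'f': no nonlinearity
        n = Wr[k - 1].shape[0]
        h = np.zeros((T, n))
        steps = range(T) if k % 2 == 1 else range(T - 1, -1, -1)
        prev = np.zeros(n)
        for t in steps:                              # block 'r': recurrent part
            prev = np.tanh(f[t] + Wr[k - 1] @ prev)
            h[t] = prev
        f_prev, h_prev = f, h
    return f_prev, h_prev

# Tiny usage example with illustrative sizes.
rng = np.random.RandomState(0)
d, n, T, K = 8, 16, 5, 4
Wf = [rng.randn(n, d) * 0.1] + [rng.randn(n, 2 * n) * 0.1 for _ in range(K - 1)]
Wr = [rng.randn(n, n) * 0.1 for _ in range(K)]
f_top, h_top = ff_rnn_stack(rng.randn(T, d), Wf, Wr)
```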
1606.04199#10 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | takes x_t as the input. [ , ] denotes the concatenation of vectors. This is shown in Fig. 1 (c). The two changes are summarized here: • We add a connection between f_t^k and f_t^{k−1}. Without f_t^{k−1}, our model will be reduced to the traditional stacked model. • We alternate the RNN direction at different layers k with the direction term (−1)^k. If we fix the direction term to −1, all layers work in the forward direction. LSTM layer: In our experiments, instead of an RNN, a specific type of recurrent layer called LSTM (Hochreiter and Schmidhuber, 1997; Graves et al., 2009) is used. The LSTM is structurally more | 1606.04199#9 | 1606.04199#11 | 1606.04199 | [
"1508.03790"
] |
1606.04199#11 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | complex than the basic RNN in Eq. 2. We define the computation of the LSTM as a function which maps the input f and its state-output pair (h, s) at the previous time step to the current state-output pair. The exact computations for (h_t, s_t) = LSTM(f_t, h_{t−1}, s_{t−1}) are the following: [z, z_ρ, z_φ, z_π] = f_t + W_r h_{t−1}; s_t = σ_i(z) ⊙ σ_g(z_ρ + s_{t−1} ⊙ θ_ρ) + σ_g(z_φ + s_{t−1} ⊙ θ_φ) ⊙ s_{t−1}; h_t = σ_o(s_t) ⊙ σ_g(z_π + s_t ⊙ θ_π) (4) where [z, z_ρ, z_φ, z_π] is the concatenation of four vectors of equal size, ⊙ means element-wise multiplication, σ_i is the input activation function, σ_o is the output activation function, σ_g is the activation function for gates, and W_r, θ_ρ, θ_φ, and θ_π are the parameters of the LSTM. It is slightly different from the standard notation in that we do not have a matrix to multiply with the input f in our notation. | 1606.04199#10 | 1606.04199#12 | 1606.04199 | [
"1508.03790"
] |
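One step of Eq. 4 transcribes directly to NumPy. This is a sketch under stated assumptions, not the paper's code: following the model settings reported later in Section 4.2, σ_g is taken to be the sigmoid and σ_i, σ_o the tanh; θ_ρ, θ_φ, θ_π act as element-wise peephole vectors.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(f_t, h_prev, s_prev, Wr, theta_rho, theta_phi, theta_pi):
    """One LSTM step per Eq. 4. f_t already contains W_f x (the feed-forward
    block), so no input matrix appears here; f_t has four times the width of
    h_prev, matching the four concatenated pre-activations."""
    z, z_rho, z_phi, z_pi = np.split(f_t + Wr @ h_prev, 4)
    s = np.tanh(z) * sigmoid(z_rho + s_prev * theta_rho) \
        + sigmoid(z_phi + s_prev * theta_phi) * s_prev
    h = np.tanh(s) * sigmoid(z_pi + s * theta_pi)
    return h, s
```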
1606.04199#12 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | With this notation, we can write down the computations for our deep bi-directional LSTM model with F-F connections: f_t^k = W_f^k · [f_t^{k−1}, h_t^{k−1}], k > 1; f_t^k = W_f^k x_t, k = 1; (h_t^k, s_t^k) = LSTM^k(f_t^k, h_{t+(−1)^k}^k, s_{t+(−1)^k}^k) (5) where x_t is the input to the deep bi-directional LSTM model. For the encoder, x_t is the embedding of the t-th word in the source sentence. For the decoder x_t is the concatenation of the embedding of the t-th word in the target sentence and the encoder representation for step t. | 1606.04199#11 | 1606.04199#13 | 1606.04199 | [
"1508.03790"
] |
1606.04199#13 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | In our final model two additional operations are used with Eq. 5, which is shown in Eq. 6. Half(f) denotes the first half of the elements of f, and Dr(h) is the dropout operation (Hinton et al., 2012) which randomly sets an element of h to zero with a certain probability. The use of Half(·) is to reduce the parameter size and does not affect the performance. We observed noticeable performance degradation when using only the first third of the elements of 'f'. f_t^k = W_f^k · [Half(f_t^{k−1}), Dr(h_t^{k−1})], k > 1 (6) | 1606.04199#12 | 1606.04199#14 | 1606.04199 | [
"1508.03790"
] |
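Eq. 6 in code, as a sketch: the inverted-dropout scaling below is an assumption of mine (the paper does not state how Dr(·) is scaled), and all names are illustrative.

```python
import numpy as np

def ff_input(f_prev, h_prev, Wf, p_drop, rng):
    """Build the layer-k feed-forward input of Eq. 6:
    f_t^k = W_f^k · [Half(f^{k-1}), Dr(h^{k-1})]."""
    half = f_prev[: f_prev.shape[0] // 2]         # Half(.): first half of f
    mask = rng.rand(*h_prev.shape) >= p_drop      # Dr(.): dropout on h
    dropped = h_prev * mask / (1.0 - p_drop)      # inverted-dropout scaling (assumed)
    return Wf @ np.concatenate([half, dropped])
```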
1606.04199#14 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | With the F-F connections, we build a fast channel to propagate the gradients in the deep topology. Figure 2: The network. It includes three parts from left to right: encoder part (P-E), interface (P-I) and decoder part (P-D). We only show the topology of Deep-Att as an example. 'f' and 'r' | 1606.04199#13 | 1606.04199#15 | 1606.04199 | [
"1508.03790"
] |
1606.04199#15 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | blocks correspond to the feed-forward part and the subsequent LSTM computation. The F-F connections are denoted by dashed red lines. F-F connections can accelerate the model convergence while improving the performance. A similar idea was also used in (He et al., 2016; Zhou and Xu, 2015). Encoder: The LSTM layers are stacked following Eq. 5. We call this type of encoder an interleaved bi-directional encoder. In addition, there are two similar columns (a1 and a2) in the encoder part. Each column consists of n_e stacked LSTM layers. There is no connection between the two columns. The first layers of the two columns process the word representations of the source sequence in different directions. At the last LSTM layers, there are two groups of vectors representing the source sequence. The group size is the same as the length of the input sequence. Interface: Prior encoder-decoder models and attention models are different in their method of extracting the representations of the source sequences. In our work, as a consequence of the introduced F-F connections, we have 4 output vectors (h_t^{ne} and f_t^{ne} of both columns). The representations are modified for both Deep-ED and Deep-Att. | 1606.04199#14 | 1606.04199#16 | 1606.04199 | [
"1508.03790"
] |
1606.04199#16 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | For Deep-ED, e_t is static and consists of four parts. 1: The last time step output h_m^{ne} of the first column. 2: Max-operation Max(·) of h_t^{ne} at all time steps of the second column, denoted by Max(h_t^{ne,a2}). Max(·) denotes obtaining the maximal value for each dimension over t. 3: Max(f_t^{ne,a1}). 4: Max(f_t^{ne,a2}). The max-operation and last time step state extraction provide compli | 1606.04199#15 | 1606.04199#17 | 1606.04199 | [
"1508.03790"
] |
1606.04199#17 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | mentary information but do not affect the performance much. e_t is used as the final representation c_t. For Deep-Att, we do not need the above two operations. We only concatenate the 4 output vectors at each time step to obtain e_t, and a soft attention mechanism (Bahdanau et al., 2015) is used to calculate the final representation c_t from e_t. e_t is summarized as: Deep-ED: e_t = [h_m^{ne,a1}, Max(h_t^{ne,a2}), Max(f_t^{ne,a1}), Max(f_t^{ne,a2})]; Deep-Att: e_t = [h_t^{ne,a1}, h_t^{ne,a2}, f_t^{ne,a1}, f_t^{ne,a2}] (7) Note that the vector dimensionality of f is four times larger than that of h (see Eq. 4). c_t is summarized as: Deep-ED: c_t = e_t (const); Deep-Att: c_t = Σ_{t'=1}^{m} α_{t,t'} W_p e_{t'} (8) α_{t,t'} is the normalized attention weight computed by: | 1606.04199#16 | 1606.04199#18 | 1606.04199 | [
"1508.03790"
] |
1606.04199#18 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | α_{t,t'} = exp(a(W_p e_{t'}, h_{t−1}^{1,dec})) / Σ_{t''} exp(a(W_p e_{t''}, h_{t−1}^{1,dec})) (9) h_{t−1}^{1,dec} is the first hidden layer output in the decoding part. a(·) is an alignment model described in (Bahdanau et al., 2015). For Deep-Att, in order to reduce the memory cost, we linearly project (with W_p) the concatenated vector e_t to a vector with 1/4 dimension size, denoted by the (fully connected) block ' | 1606.04199#17 | 1606.04199#19 | 1606.04199 | [
"1508.03790"
] |
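Eqs. 8–9 amount to a softmax over alignment scores of projected encoder vectors. The sketch below is illustrative only: the additive scorer `a` merely stands in for the Bahdanau-style alignment model, and all sizes are made up.

```python
import numpy as np

def attention_context(e, h_dec, Wp, a):
    """Soft attention per Eqs. 8-9: project each encoder vector e_t' with Wp,
    score it against the decoder's first-layer state with a(., .), and return
    the softmax-weighted sum as the context c_t."""
    proj = e @ Wp.T                                  # W_p e_t' for all t'
    scores = np.array([a(p, h_dec) for p in proj])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                             # normalized weights (Eq. 9)
    return alpha @ proj                              # context c_t (Eq. 8)

# Illustrative additive alignment model and usage, with hypothetical sizes.
rng = np.random.RandomState(0)
Wa, Ua, va = rng.randn(32, 64), rng.randn(32, 128), rng.randn(32)
a = lambda p, h: va @ np.tanh(Wa @ p + Ua @ h)
c = attention_context(rng.randn(10, 256), rng.randn(128), rng.randn(64, 256), a)
```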
1606.04199#19 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | fc' in Fig. 2. Decoder: The decoder follows Eq. 5 and Eq. 6 with fixed direction term −1. At the first layer, we use the following x_t: x_t = [c_t, y_{t−1}] (10) y_{t−1} is the target word embedding at the previous time step and y_0 is zero. There is a single column of n_d stacked LSTM layers. We also use the F-F connections like those in the encoder and all layers are in the forward direction. Note that at the last LSTM layer, we only use h_t to make the prediction with a softmax layer. Although the network is deep, the training technique is straightforward. We will describe this in the next part. | 1606.04199#18 | 1606.04199#20 | 1606.04199 | [
"1508.03790"
] |
1606.04199#20 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | # 3.2 Training technique We take the parallel data as the only input without using any monolingual data for either word representation pre-training or language modeling. Because of the deep bi-directional structure, we do not need to reverse the sequence order as Sutskever et al. (2014). The deep topology brings difficulties for the model training, especially when first order methods such as stochastic gradient descent (SGD) (LeCun et al., 1998) are used. The parameters should be properly initialized and the converging process can be slow. We tried several optimization techniques such as AdaDelta (Zeiler, 2012), RMSProp (Tieleman and Hinton, 2012) and Adam (Kingma and Ba, 2015). We found that all of them were able to speed up the process a lot compared to simple SGD while no significant performance difference was observed among them. In this work, we chose Adam for model training and do not present a detailed comparison with other optimization methods. Dropout (Hinton et al., 2012) is also used to avoid over-fitting. It is utilized on the LSTM nodes h_t^k (see Eq. 5) with a ratio of p_d for both the encoder and decoder. During the whole model training process, we keep all hyper parameters fixed without any intermediate interruption. The hyper parameters are selected according to the performance on the development set. | 1606.04199#19 | 1606.04199#21 | 1606.04199 | [
"1508.03790"
] |
1606.04199#21 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | For such a deep and large network, it is not easy to determine the tuning strategy and this will be considered in future work. # 3.3 Generation We use the common left-to-right beam-search method for sequence generation. At each time step t, the word y_t can be predicted by: ŷ_t = argmax_y P(y | ŷ_{0:t−1}, x; θ) (11) where ŷ_t is the predicted target word. ŷ_{0:t−1} is the generated sequence from time step 0 to t−1. We keep n_b best candidates according to Eq. 11 at each time step, until the end of sentence mark is generated. The hypotheses are ranked by the total likelihood of the generated sequence, although normalized likelihood is used in some works (Jean et al., 2015). | 1606.04199#20 | 1606.04199#22 | 1606.04199 | [
"1508.03790"
] |
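The generation procedure of Eq. 11 with n_b kept candidates is ordinary beam search over total log-likelihood. A minimal sketch follows; `step_fn` is a hypothetical interface standing in for the decoder's softmax, and the ranking by unnormalized total log-likelihood matches the description above.

```python
import math

def beam_search(step_fn, bos, eos, beam_size=3, max_len=50):
    """Left-to-right beam search ranked by total log-likelihood (Eq. 11).
    step_fn(prefix) must return a list of (word, prob) pairs for the next
    position; finished hypotheses are set aside when eos is produced."""
    beams = [([bos], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for word, prob in step_fn(prefix):
                cand = (prefix + [word], score + math.log(prob))
                (finished if word == eos else candidates).append(cand)
        if not candidates:
            break
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return max(finished + beams, key=lambda c: c[1])
```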
1606.04199#22 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | # 4 Experiments We evaluate our method mainly on the widely used WMT'14 English-to-French translation task. In order to validate our model on more difficult language pairs, we also provide results on the WMT'14 English-to-German translation task. Our models are implemented in the PADDLE (PArallel Distributed Deep LEarning) platform. # 4.1 Data sets For both tasks, we use the full WMT'14 parallel corpus as our training data. The detailed data sets are listed below: • English-to-French: Europarl v7, Common Crawl, UN, News Commentary, Gigaword • English-to-German: Europarl v7, Common Crawl, News Commentary In total, the English-to-French corpus includes 36 million sentence pairs, and the English-to-German corpus includes 4.5 million sentence pairs. The news-test-2012 and news-test-2013 are concatenated as our development set, and the news-test-2014 is the test set. | 1606.04199#21 | 1606.04199#23 | 1606.04199 | [
"1508.03790"
] |
1606.04199#23 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Our data partition is consistent with previous works on NMT (Luong et al., 2015; Jean et al., 2015) to ensure fair comparison. For the source language, we select the most frequent 200K words as the input vocabulary. For the target language we select the most frequent 80K French words and the most frequent 160K German words as the output vocabulary. The full vocabulary of the German corpus is larger (Jean et al., 2015), so we select more German words to build the target vocabulary. Out-of-vocabulary words are replaced with the unknown symbol (unk). For complete comparison to previous work on the English-to-French task, we also show the results with a smaller vocabulary of 30K input words and 30K output words on the sub train set with selected 12M parallel sequences (Schwenk, 2014; Sutskever et al., 2014; Cho et al., 2014). | 1606.04199#22 | 1606.04199#24 | 1606.04199 | [
"1508.03790"
] |
1606.04199#24 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | # 4.2 Model settings We have two models as described above, named Deep-ED and Deep-Att. Both models have exactly the same configuration and layer size except the interface part P-I. We use 256 dimensional word embeddings for both the source and target languages. All LSTM layers, including the 2×n_e layers in the encoder and the n_d layers in the decoder, have 512 memory cells. The output layer size is the same as the size of the target vocabulary. The dimension of c_t is 5120 and 1280 for Deep-ED and Deep-Att respectively. For each LSTM layer, the activation functions for gates, inputs and outputs are sigmoid, tanh, and tanh respectively. Our network is narrow on word embeddings and LSTM layers. Note that in previous work (Sutskever et al., 2014; Bahdanau et al., 2015), 1000 dimensional word embeddings and 1000 dimensional LSTM layers are used. We also tried larger scale models but did not obtain further improvements. | 1606.04199#23 | 1606.04199#25 | 1606.04199 | [
"1508.03790"
] |
1606.04199#25 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | # 4.3 Optimization Note that each LSTM layer includes two parts as described in Eq. 3, feed-forward computation and recurrent computation. Since there are non-linear activations in the recurrent computation, a larger learning rate l_r = 5 × 10^−4 is used, while for the feed-forward computation a smaller learning rate l_f = 4 × 10^−5 is used. Word embeddings and the softmax layer also use this learning rate l_f. We refer to all the parameters not used for recurrent computation as the non-recurrent part of the model. Because of the large model size, we use strong L2 regularization to constrain the parameter matrix v in the following way: | 1606.04199#24 | 1606.04199#26 | 1606.04199 | [
"1508.03790"
] |
1606.04199#26 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | v ← v − l · (g + r · v) (12) Here r is the regularization strength, l is the corresponding learning rate, and g stands for the gradients of v. The two embedding layers are not regularized. All the other layers have the same r = 2. The parameters of the recurrent computation part are initialized to zero. All non-recurrent parts are randomly initialized with zero mean and standard deviation of 0.07. A detailed guide for setting hyper-parameters can be found in (Bengio, 2012). The dropout ratio p_d is 0.1. In each batch, there are 500 ∼ 800 sequences in our work. The exact number depends on the sequence lengths and model size. | 1606.04199#25 | 1606.04199#27 | 1606.04199 | [
"1508.03790"
] |
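Eq. 12 is a plain gradient step with a decoupled L2 term. A one-line sketch (the choice of `lr` follows the per-part learning rates l_r and l_f above; embeddings would be updated with r = 0):

```python
def regularized_update(v, g, lr, r=2.0):
    """Eq. 12: v <- v - l * (g + r * v)."""
    return v - lr * (g + r * v)
```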
1606.04199#27 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | We also find that larger batch size results in better convergence although the improvement is not large. However, the largest batch size is constrained by the GPU memory. We use 4 ∼ 8 GPU machines (each has 4 K40 GPU cards) running for 10 days to train the full model with parallelization at the data batch level. It takes nearly 1.5 days for each pass. One thing we want to emphasize here is that our deep model is not sensitive to these settings. Small variation does not affect the fi | 1606.04199#26 | 1606.04199#28 | 1606.04199 | [
"1508.03790"
] |
1606.04199#28 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | nal performance. # 4.4 Results We evaluate the same way as previous NMT works (Sutskever et al., 2014; Luong et al., 2015; Jean et al., 2015). All reported BLEU scores are computed with the multi-bleu.perl script which is also used in the above works. The results are for tokenized and case sensitive evaluation. 4.4.1 Single models English-to-French: First we list our single model results on the English-to-French task in Tab. 1. In the first block we show the results with the full corpus. The previous best single NMT encoder-decoder model (Enc-Dec) with six layers achieves BLEU=31.5 (Luong et al., 2015). From Deep-ED, | 1606.04199#27 | 1606.04199#29 | 1606.04199 | [
"1508.03790"
] |
1606.04199#29 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | 1 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl we obtain the BLEU score of 36.3, which outperforms the Enc-Dec model by 4.8 BLEU points. This result is even better than the ensemble result of eight Enc-Dec models, which is 35.6 (Luong et al., 2015). This shows that, in addition to the convolutional layers for computer vision, deep topologies can also work for LSTM layers. For Deep-Att, the performance is further improved to 37.7. We also list the previous state-of-the-art performance from a conventional SMT system (Durrani et al., 2014) with the BLEU of 37.0. This is the first time that a single NMT model trained in an end-to-end form beats the best conventional system on this task. We also show the results on the smaller data set with 12M sentence pairs and 30K vocabulary in the second block. The two attention models, RNNsearch (Bahdanau et al., 2015) and RNNsearch-LV (Jean et al., 2015), achieve BLEU scores of 28.5 and 32.7 respectively. Note that RNNsearch-LV uses a large output vocabulary of 500K words based on the standard attention model RNNsearch. We obtain BLEU=35.9 which outperforms its corresponding shallow model RNNsearch by 7.4 BLEU points. The SMT result from (Schwenk, 2014) is also listed and falls behind our model by 2.6 BLEU points. | 1606.04199#28 | 1606.04199#30 | 1606.04199 | [
"1508.03790"
] |
1606.04199#30 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Methods, Data, Voc, BLEU: Enc-Dec (Luong, 2015), 36M, 80K, 31.5; SMT (Durrani, 2014), 36M, Full, 37.0; Deep-ED (Ours), 36M, 80K, 36.3; Deep-Att (Ours), 36M, 80K, 37.7; RNNsearch (Bahdanau, 2014), 12M, 30K, 28.5; RNNsearch-LV (Jean, 2015), 12M, 500K, 32.7; SMT (Schwenk, 2014), 12M, Full, 33.3; Deep-Att (Ours), 12M, 30K, 35.9. Table 1: English-to-French task: BLEU scores of single neural models. | 1606.04199#29 | 1606.04199#31 | 1606.04199 | [
"1508.03790"
] |
1606.04199#31 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | We also list the conventional SMT system for comparison. Moreover, during the generation process, we obtained the best BLEU score with beam size = 3 (when the beam size is 2, there is only a 0.1 difference in BLEU score). This is different from other works listed in Tab. 1, where the beam size is 12 (Jean et al., 2015; Sutskever et al., 2014). We attribute this difference to the improved model per | 1606.04199#30 | 1606.04199#32 | 1606.04199 | [
"1508.03790"
] |
1606.04199#32 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | formance, where the ground truth generally exists in the top hypothesis. Consequently, with the much smaller beam size, the generation efficiency is significantly improved. Next we list the effect of the novel F-F connections in our Deep-Att model of shallow topology in Tab. 2. When ne = 1 and nd = 1, the BLEU scores are 31.2 without F-F and 32.3 with F-F. Note that the model without F-F is exactly the standard attention model (Bahdanau et al., 2015). Since there is only a single layer, the use of F-F connections means that at the interface part we include f_t into the representation (see Eq. 7). We find F-F connections bring an improvement of 1.1 in BLEU. After we increase our model depth to ne = 2 and nd = 2, the improvement is enlarged to 1.4. When the model is trained with larger depth without F-F connections, we find that the parameter exploding problem (Bengio et al., 1994) happens so frequently that we could not finish training. This suggests that F-F connections provide a fast way for gradient propagation. | 1606.04199#31 | 1606.04199#33 | 1606.04199 | [
"1508.03790"
] |
1606.04199#33 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Models, F-F, ne, nd, BLEU: Deep-Att, No, 1, 1, 31.2; Deep-Att, Yes, 1, 1, 32.3; Deep-Att, No, 2, 2, 33.3; Deep-Att, Yes, 2, 2, 34.7. Table 2: The effect of F-F. We list the BLEU scores of Deep-Att with and without F-F. Because of the parameter exploding problem, we cannot list the model performance of larger depth without F-F. For ne = 1 and nd = 1, F-F connections only contribute to the representation at the interface part (see Eq. 7). | 1606.04199#32 | 1606.04199#34 | 1606.04199 | [
"1508.03790"
] |
1606.04199#34 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Removing F-F connections also reduces the corresponding model size. In order to figure out the effect of F-F comparing models with the same parameter size, we increase the LSTM layer width of Deep-Att without F-F. In Tab. 3 we show that, after using a two times larger LSTM layer width of 1024, we can only obtain a BLEU score of 33.8, which is still worse than the corresponding Deep-Att with F-F. We also notice that the interleaved bi-directional encoder starts to work when the encoder depth is larger than 1. The effect of the interleaved bi-directional encoder is shown in Tab. 4. For our largest model with ne = 9 and nd = 7, we compared the BLEU scores of the interleaved bi-directional encoder and the uni-directional encoder (where all LSTM layers work in the forward direction). We find | 1606.04199#33 | 1606.04199#35 | 1606.04199 | [
"1508.03790"
] |
1606.04199#35 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Models, F-F, ne, nd, width, BLEU: Deep-Att, No, 2, 2, 512, 33.3; Deep-Att, No, 2, 2, 1024, 33.8; Deep-Att, Yes, 2, 2, 512, 34.7. Table 3: BLEU scores with different LSTM layer width in Deep-Att. After using a two times larger LSTM layer width of 1024, we can only obtain a BLEU score of 33.8. It is still behind the corresponding Deep-Att with F-F. | 1606.04199#34 | 1606.04199#36 | 1606.04199 | [
"1508.03790"
] |
1606.04199#36 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | there is a gap of about 1.5 points between these two encoders for both Deep-Att and Deep-ED. Models, Encoder, ne, nd, BLEU: Deep-Att, Bi, 9, 7, 37.7; Deep-Att, Uni, 9, 7, 36.2; Deep-ED, Bi, 9, 7, 36.3; Deep-ED, Uni, 9, 7, 34.9. | 1606.04199#35 | 1606.04199#37 | 1606.04199 | [
"1508.03790"
] |