id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
1606.04199#37 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Table 4: The effect of the interleaved bi-directional encoder. We list the BLEU scores of our largest Deep-Att and Deep-ED models. The encoder term Bi denotes that the interleaved bi-directional encoder is used; Uni denotes a model where all LSTM layers work in the forward direction. Next we look into the effect of model depth. In Tab. 5, starting from ne = 1 and nd = 1 and gradually increasing the model depth, we significantly increase BLEU scores. With ne = 9 and nd = 7, the best score for Deep-Att is 37.7. We tried to increase the LSTM width based on this, but obtained little improvement. As we stated in Sec. 2, the complexity of the encoder and decoder, which is related to the model depth, is more important than the model size. We also tried a larger depth, but the results started to get worse. With our topology and training technique, ne = 9 and nd = 7 is the best depth we can achieve. Table 5: BLEU score of Deep-Att with different model depth (Models / F-F / ne / nd / Col / BLEU): Deep-Att / Yes / 1 / 1 / 2 / 32.3; Deep-Att / Yes / 2 / 2 / 2 / 34.7; Deep-Att / Yes / 5 / 3 / 2 / 36.0; Deep-Att / Yes / 9 / 7 / 2 / 37.7; Deep-Att / Yes / 9 / 7 / 1 / 36.6. With ne = 1 and nd = 1, F-F connections only contribute to the representation at the interface part where ft is included (see Eq. 7). The last line in Tab. 5 shows the BLEU score of 36.6 of our deepest model, where only one encoding column (Col = 1) is used. | 1606.04199#36 | 1606.04199#38 | 1606.04199 | [
"1508.03790"
] |
1606.04199#38 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | We find a 1.1 BLEU point degradation with a single encoding column. Note that the uni-directional models (Uni) in Tab. 4 still have two encoding columns. In order to find out whether this is caused by the decreased parameter size, we test a wider model with 1024 memory blocks for the LSTM layers. It is shown in Tab. 6 that there is a minor improvement of only 0.1. We attribute this to the complementary information provided by the double encoding column. Table 6: Comparison of encoders with different number of columns and LSTM layer width (Models / F-F / ne / nd / Col / width / BLEU): Deep-Att / Yes / 9 / 7 / 2 / 512 / 37.7; Deep-Att / Yes / 9 / 7 / 1 / 512 / 36.6; Deep-Att / Yes / 9 / 7 / 1 / 1024 / 36.7. | 1606.04199#37 | 1606.04199#39 | 1606.04199 | [
"1508.03790"
] |
1606.04199#39 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | English-to-German: We also validate our deep topology on the English-to-German task. The English-to-German task is considered a relatively more difficult task because of the lower similarity between these two languages. Since the German vocabulary is much larger than the French vocabulary, we select the 160K most frequent words as the target vocabulary. All the other hyper-parameters are exactly the same as those in the English-to-French task. We list our single model Deep-Att performance in Tab. 7. Our single model result with BLEU=20.6 is similar to the conventional SMT result of 20.7 (Buck et al., 2014). We also outperform the shallow attention models, as shown in the first two lines in Tab. 7. All the results are consistent with those in the English-to-French task. (Methods / Data / Voc / BLEU): RNNsearch (Jean, 2015) / 4.5M / 50K / 16.5; RNNsearch-LV (Jean, 2015) / 4.5M / 500K / 16.9; SMT (Buck, 2014) / 4.5M / Full / 20.7; Deep-Att (Ours) / 4.5M / 160K / 20.6. Table 7: | 1606.04199#38 | 1606.04199#40 | 1606.04199 | [
"1508.03790"
] |
1606.04199#40 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | English-to-German task: BLEU scores of single neural models. We also list the conventional SMT system for comparison. # 4.4.2 Post processing Two post-processing techniques are used to improve the performance further on the English-to-French task. First, three Deep-Att models are built for ensemble results. They are initialized with different random parameters; in addition, the training corpus for these models is shuffled with different random seeds. We sum over the predicted probabilities of the target words and normalize the final distribution to generate the next word. It is shown in Tab. 8 that the model ensemble can improve the performance further to 38.9. In Luong et al. (2015) and Jean et al. (2015) there are eight models for the best scores, but we only use three models and we do not obtain further gain from more models. Table 8: BLEU scores of different models (Methods / Model / Data / Voc / BLEU): Deep-ED / Single / 36M / 80K / 36.3; Deep-Att / Single / 36M / 80K / 37.7; Deep-Att / Single+PosUnk / 36M / 80K / 39.2; Deep-Att / Ensemble / 36M / 80K / 38.9; Deep-Att / Ensemble+PosUnk / 36M / 80K / 40.4; SMT (Durrani, 2014) / - / 36M / Full / 37.0; Enc-Dec / Ensemble+PosUnk / 36M / 80K / 37.5. The first two blocks are our results of two single models and models with post processing. In the last block we list two baselines of the best conventional SMT system and NMT system. Second, we recover the unknown words in the generated sequences with the Positional Unknown (PosUnk) model introduced in (Luong et al., 2015). The full parallel corpus is used to obtain the word mappings (Liang et al., 2006). | 1606.04199#39 | 1606.04199#41 | 1606.04199 | [
"1508.03790"
] |
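The ensembling step described in the post-processing chunk above (summing the member models' predicted probabilities over the target vocabulary and normalizing the final distribution before emitting the next word) can be sketched as follows. This is a minimal illustration, not the authors' decoder: the toy probability vectors and the greedy selection at the end are assumptions.

```python
import numpy as np

def ensemble_next_token(step_probs):
    """Combine per-model next-word distributions into one ensemble distribution.

    step_probs: list of 1-D arrays (one per ensemble member), each summing to 1
    over the target vocabulary. Returns the renormalized sum and its argmax.
    """
    stacked = np.stack(step_probs, axis=0)   # (n_models, vocab_size)
    summed = stacked.sum(axis=0)             # sum the predicted probabilities
    ensemble = summed / summed.sum()         # normalize the final distribution
    return ensemble, int(np.argmax(ensemble))

# Toy usage with a 5-word vocabulary and three models (hypothetical numbers).
p1 = np.array([0.10, 0.60, 0.10, 0.10, 0.10])
p2 = np.array([0.20, 0.50, 0.10, 0.10, 0.10])
p3 = np.array([0.10, 0.40, 0.30, 0.10, 0.10])
dist, best_word_id = ensemble_next_token([p1, p2, p3])
```

In the paper the combined distribution feeds a beam search (beam size 3); the greedy argmax here only keeps the example short.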
1606.04199#41 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | We find this method provides an additional 1.5 BLEU points, which is consistent with the conclusion in Luong et al. (2015). We obtain the new BLEU score of 39.2 with a single Deep-Att model. For the ensemble models of Deep-Att, the BLEU score rises to 40.4. In the last two lines, we list the conventional SMT model (Durrani et al., 2014) and the previous best neural-model-based system Enc-Dec (Luong et al., 2015) for comparison. | 1606.04199#40 | 1606.04199#42 | 1606.04199 | [
"1508.03790"
] |
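A rough sketch of the PosUnk-style post-processing discussed above: every <unk> in the generated sequence is mapped back to a source position and replaced either by a dictionary translation learned from the aligned parallel corpus or by a copy of the source word. The relative-position interface and the dictionary object are assumptions for illustration, not the authors' implementation.

```python
def replace_unknowns(target_tokens, source_tokens, rel_positions, dictionary):
    """Replace <unk> tokens using a positional model plus a word mapping.

    rel_positions[i] is the predicted offset of the aligned source word for
    target position i (in the spirit of the PosUnk model of Luong et al., 2015);
    `dictionary` maps a source word to its most likely target-side translation.
    """
    output = []
    for i, token in enumerate(target_tokens):
        if token != "<unk>":
            output.append(token)
            continue
        # Clamp the aligned index so it stays inside the source sentence.
        src_idx = min(max(i + rel_positions[i], 0), len(source_tokens) - 1)
        src_word = source_tokens[src_idx]
        # Translate via the learned mapping, or copy the source word
        # (useful for names and numbers that have no dictionary entry).
        output.append(dictionary.get(src_word, src_word))
    return output
```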
1606.04199#42 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | We find our best score outperforms the previous best score by nearly 3 points. # 4.5 Analysis # 4.5.1 Length On the English-to-French task, we analyze the effect of the source sentence length on our models, as shown in Fig. 3. Here we show five curves: our Deep-Att single model, our Deep-Att ensemble model, our Deep-ED model, a previously proposed Enc-Dec model with four layers (Sutskever et al., 2014) and an SMT model (Durrani et al., 2014). Figure 3: BLEU scores vs. source sequence length. The five lines are our Deep-Att single model, Deep-Att ensemble model, our Deep-ED model, the previous Enc-Dec model with four layers, and the SMT model. We find our Deep-Att model works better than the previous two models (Enc-Dec and SMT) on nearly all sentence lengths. It is also shown that for very long sequences with length over 70 words, the performance of our Deep-Att does not degrade, when compared to another NMT model, Enc-Dec. Our Deep-ED also has much better performance than the shallow Enc-Dec model on nearly all lengths, although for long sequences it degrades and starts to fall behind Deep-Att. | 1606.04199#41 | 1606.04199#43 | 1606.04199 | [
"1508.03790"
] |
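The length analysis behind Figure 3 can be reproduced in outline by grouping test pairs into source-length buckets and scoring each bucket separately. The `corpus_bleu` callable and the bucket width are placeholders, not the authors' evaluation script.

```python
from collections import defaultdict

def bleu_by_length(sources, hypotheses, references, corpus_bleu, bucket_size=7):
    """Group sentences by source length and compute BLEU per bucket.

    corpus_bleu(hyps, refs) is assumed to return a corpus-level BLEU score;
    any toolkit implementation can be plugged in.
    """
    buckets = defaultdict(lambda: ([], []))
    for src, hyp, ref in zip(sources, hypotheses, references):
        key = len(src.split()) // bucket_size      # e.g. 0-6, 7-13, ... words
        buckets[key][0].append(hyp)
        buckets[key][1].append(ref)
    return {
        (k * bucket_size, (k + 1) * bucket_size - 1): corpus_bleu(hyps, refs)
        for k, (hyps, refs) in sorted(buckets.items())
    }
```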
1606.04199#43 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | # 4.5.2 Unknown words Next we look into the detail of the effect of unknown words on the English-to-French task. We select the subset without unknown words on target sentences from the original test set. There are 1705 such sentences (56.8%). We compute the BLEU scores on this subset and the results are shown in Tab. 9. We also list the results from the SMT model (Durrani et al., 2014) as a comparison. | 1606.04199#42 | 1606.04199#44 | 1606.04199 | [
"1508.03790"
] |
1606.04199#44 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Table 9: BLEU scores on the subset of the test set without considering unknown words (Model / Test set / Ratio(%) / BLEU): Deep-Att / Full / 100.0 / 37.7; Ensemble / Full / 100.0 / 38.9; SMT (Durrani) / Full / 100.0 / 37.0; Deep-Att / Subset / 56.8 / 40.3; Ensemble / Subset / 56.8 / 41.4; SMT (Durrani) / Subset / 56.8 / 37.5. Figure 4: Token error rate on train set vs. test set. Square: Deep-Att (ne = 9, nd = 7). Circle: Deep-Att (ne = 5, nd = 3). Triangle: Deep-Att (ne = 1, nd = 1). We find that the BLEU score of Deep-Att on this subset rises to 40.3, which has a gap of 2.6 with | 1606.04199#43 | 1606.04199#45 | 1606.04199 | [
"1508.03790"
] |
1606.04199#45 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | the score 37.7 on the full test set. On this subset, the SMT model achieves 37.5, which is similar to its score of 37.0 on the full test set. This suggests that the difficulty on this subset is not much different from that on the full set. We therefore attribute the larger gap for Deep-Att to the existence of unknown words. We also compute the BLEU score of the ensemble model on this subset and obtain 41.4. As a reference related to human performance, in Sutskever et al. (2014), it has been tested that the BLEU score of oracle re-scoring of the LIUM 1000-best results (Schwenk, 2014) is 45. # 4.5.3 Over-fitting Deep models have more parameters, and thus have a stronger ability to fit the large data set. However, our experimental results suggest that deep models are less prone to the problem of over-fitting. In Fig. 4, we show three results from models with a different depth on the English-to-French task. These three models are evaluated by token error rate, which is defined as the ratio of incorrectly predicted words in the whole target sequence given the correct historical input. The curve with square marks corresponds to Deep-Att with ne = 9 and nd = 7. The curve with circle marks corresponds to ne = 5 and nd = 3. The curve with triangle marks corresponds to ne = 1 and nd = 1. | 1606.04199#44 | 1606.04199#46 | 1606.04199 | [
"1508.03790"
] |
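Token error rate as defined above (the fraction of target positions whose next word is predicted incorrectly when the model is fed the correct history, i.e. under teacher forcing) can be computed along these lines. The `model_predict` interface is an assumed stand-in for a forward pass of a trained model, not part of the paper.

```python
def token_error_rate(model_predict, target_sequences):
    """Fraction of positions whose next word is predicted incorrectly given the
    correct historical input (teacher forcing).

    model_predict(history) is assumed to return the argmax next token.
    """
    errors, total = 0, 0
    for sequence in target_sequences:
        for t in range(1, len(sequence)):
            prediction = model_predict(sequence[:t])   # correct history up to t
            errors += int(prediction != sequence[t])
            total += 1
    return errors / max(total, 1)
```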
1606.04199#46 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | We find that the deep model has better performance on the test set when the token error rate is the same as that of the shallow models on the training set. This shows that, with decreased token error rate, the deep model is more advantageous in avoiding the over-fitting phenomenon. We only plot the early training stage curves because, during the late training stage, the curves are not smooth. # 5 Conclusion With the introduction of fast-forward connections to the deep LSTM network, we build a fast path with neither non-linear transformations nor recurrent computation to propagate the gradients from the top to the deep bottom. On this path, gradients decay much more slowly compared to the standard deep network. This enables us to build the deep topology of NMT models. We trained NMT models with a depth of 16 including 25 LSTM layers and evaluated them mainly on the WMT' | 1606.04199#45 | 1606.04199#47 | 1606.04199 | [
"1508.03790"
] |
1606.04199#47 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | 14 English-to-French translation task. This is the deepest topology that has been investigated in the NMT area on this task. We showed that our Deep-Att exhibits a 6.2 BLEU point improvement over the previous best single model, achieving a 37.7 BLEU score. This single end-to-end NMT model outperforms the best conventional SMT system (Durrani et al., 2014) and achieves state-of-the-art performance. After utilizing unknown word processing and a model ensemble of three models, we obtained a BLEU score of 40.4, an improvement of 2.9 BLEU points over the previous best result. When evaluated on the subset of the test corpus without unknown words, our model achieves 41.4. Our model is also validated on the more difficult English-to-German task. Our model is also effi | 1606.04199#46 | 1606.04199#48 | 1606.04199 | [
"1508.03790"
] |
1606.04199#48 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | cient in sequence generation. The best results from both a single model and a model ensemble are obtained with a beam size of 3, much smaller than in previous NMT systems where the beam size is about 12 (Jean et al., 2015; Sutskever et al., 2014). From our analysis, we find that deep models are more advantageous for learning long sequences and that the deep topology is resistant to the over-fitting problem. We tried deeper models and did not obtain further improvements with our current topology and training techniques. However, the depth of 16 is not very deep compared to the models in computer vision (He et al., 2016). | 1606.04199#47 | 1606.04199#49 | 1606.04199 | [
"1508.03790"
] |
1606.04199#49 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | We believe we can beneï¬ t from deeper models, with new designs of topologies and training techniques, which remain as our future work. # References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of In- ternational Conference on Learning Representations. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradi- ent descent is difï¬ | 1606.04199#48 | 1606.04199#50 | 1606.04199 | [
"1508.03790"
] |
1606.04199#50 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | cult. IEEE Transactions on Neural Networks, 5(2):157â 166. Yoshua Bengio, 2012. Practical Recommendations for Gradient-Based Training of Deep Architectures, pages 437â 478. Springer Berlin Heidelberg, Berlin, Heidel- berg. Christian Buck, Kenneth Heaï¬ eld, and Bas van Ooyen. 2014. N-gram counts and language models from the common crawl. In Proceedings of the Language Re- sources and Evaluation Conference. | 1606.04199#49 | 1606.04199#51 | 1606.04199 | [
"1508.03790"
] |
1606.04199#51 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Ben- gio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine transla- tion. In Proceedings of the Empiricial Methods in Nat- ural Language Processing. Nadir Durrani, Barry Haddow, Philipp Koehn, and Ken- neth Heaï¬ eld. 2014. Edinburghâ s phrase-based ma- In Proceed- chine translation systems for WMT-14. ings of the Ninth Workshop on Statistical Machine Translation. | 1606.04199#50 | 1606.04199#52 | 1606.04199 | [
"1508.03790"
] |
1606.04199#52 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Mikel L. Forcada and Ram´on P. Ë Neco. 1997. Recur- In sive hetero-associative memories for translation. Biological and Artiï¬ cial Computation: From Neuro- science to Technology, Berlin, Heidelberg. Springer Berlin Heidelberg. Alex Graves, Marcus Liwicki, Santiago Fernandez, Ro- man Bertolami, Horst Bunke, and J¨urgen Schmid- huber. 2009. | 1606.04199#51 | 1606.04199#53 | 1606.04199 | [
"1508.03790"
] |
1606.04199#53 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | A novel connectionist system for un- IEEE Transac- constrained handwriting recognition. tions on Pattern Analysis and Machine Intelligence, 31(5):855â 868. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In IEEE Conference on Computer Vision and Pattern Recognition. Karl Moritz Hermann, Tom´aË s KoË cisk´y, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. | 1606.04199#52 | 1606.04199#54 | 1606.04199 | [
"1508.03790"
] |
1606.04199#54 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Im- proving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735â | 1606.04199#53 | 1606.04199#55 | 1606.04199 | [
"1508.03790"
] |
1606.04199#55 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | 1780. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vo- cabulary for neural machine translation. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the Empirical Methods in Natural Language Processing. Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2016. | 1606.04199#54 | 1606.04199#56 | 1606.04199 | [
"1508.03790"
] |
1606.04199#56 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Grid long short-term memory. In Proceedings of International Conference on Learning Representa- tions. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representa- tions. P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to Proceedings of the IEEE, document recognition. 86(11):2278â | 1606.04199#55 | 1606.04199#57 | 1606.04199 | [
"1508.03790"
] |
1606.04199#57 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | 2324. Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- In Proceedings of the North ment by agreement. American Chapter of the Association of Computa- tional Linguistics on Human Language Technology. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Pro- ceedings of the 53rd Annual Meeting of the Associa- tion for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Language Pro- cessing. Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan L. Yuille. 2015. | 1606.04199#56 | 1606.04199#58 | 1606.04199 | [
"1508.03790"
] |
1606.04199#58 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Deep captioning with multimodal recurrent neural networks (m-RNN). In Proceedings of International Conference on Learn- ing Representations. Holger Schwenk. 2014. http://www-lium.univ- ac- lemans.fr/⠼schwenk/cslm joint paper [online; cessed 03-september-2014]. University Le Mans. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. In Proceed- ings of the 32nd International Conference on Machine Learning, Deep Learning Workshop. Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Se- quence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Ser- manet, Scott Reed, Dragomir Anguelov, Dumitru Er- han, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In IEEE Con- ference on Computer Vision and Pattern Recognition. Tijmen Tieleman and Geoffrey Hinton. 2012. | 1606.04199#57 | 1606.04199#59 | 1606.04199 | [
"1508.03790"
] |
1606.04199#59 | Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation | Lecture 6.5-rmsprop: Divide the gradient by a running aver- age of its recent magnitude. COURSERA: Neural Net- works for Machine Learning, 4. Oriol Vinyals and Quoc Le. 2015. A neural conver- In Proceedings of the 32nd Interna- sational model. tional Conference on Machine Learning, Deep Learn- ing Workshop. Kaisheng Yao, Trevor Cohn, Katerina Vylomova, Kevin Duh, and Chris Dyer. 2015. Depth-gated LSTM. arXiv:1508.03790. Yang Yu, Wei Zhang, Chung-Wei Hang, Bing Xiang, and Bowen Zhou. 2015. Empirical study on deep learning models for QA. arXiv:1510.07526. Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv:1212.5701. Jie Zhou and Wei Xu. 2015. End-to-end learning of se- mantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the As- sociation for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. | 1606.04199#58 | 1606.04199 | [
"1508.03790"
] |
|
1606.03152#0 | Policy Networks with Two-Stage Training for Dialogue Systems | arXiv:1606.03152v4 [cs.CL] 12 Sep 2016 # Policy Networks with Two-Stage Training for Dialogue Systems # Mehdi Fatemi Layla El Asri Hannes Schulz Jing He Kaheer Suleman # Maluuba Research Le 2000 Peel, Montréal, QC H3A 2W5 [email protected] # Abstract | 1606.03152#1 | 1606.03152 | [
"1511.08099"
] |
|
1606.03152#1 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep pol- icy networks which are trained with an advantage actor-critic method for statisti- cally optimised dialogue systems. First, we show that, on summary state and ac- tion spaces, deep Reinforcement Learn- ing (RL) outperforms Gaussian Processes methods. Summary state and action spaces lead to good performance but re- quire pre-engineering effort, RL knowl- edge, and domain expertise. In order to remove the need to deï¬ ne such summary spaces, we show that deep RL can also be trained efï¬ ciently on the original state and action spaces. Dialogue systems based on partially observable Markov decision processes are known to require many di- alogues to train, which makes them un- appealing for practical deployment. We show that a deep RL method based on an actor-critic architecture can exploit a small amount of data very efï¬ | 1606.03152#0 | 1606.03152#2 | 1606.03152 | [
"1511.08099"
] |
1606.03152#2 | Policy Networks with Two-Stage Training for Dialogue Systems | ciently. Indeed, with only a few hundred dialogues col- lected with a handcrafted policy, the actor- critic deep learner is considerably boot- strapped from a combination of supervised and batch RL. In addition, convergence to an optimal policy is signiï¬ cantly sped up compared to other deep RL methods ini- tialized on the data with batch RL. All ex- periments are performed on a restaurant domain derived from the Dialogue State Tracking Challenge 2 (DSTC2) dataset. # Introduction The statistical optimization of dialogue manage- ment in dialogue systems through Reinforcement Learning (RL) has been an active thread of re- search for more than two decades (Levin et al., 1997; Lemon and Pietquin, 2007; Laroche et al., 2010; GaË si´c et al., 2012; Daubigney et al., 2012). Dialogue management has been successfully mod- elled as a Partially Observable Markov Decision Process (POMDP) (Williams and Young, 2007; GaË si´c et al., 2012), which leads to systems that can learn from data and which are robust to noise. In this context, a dialogue between a user and a di- alogue system is framed as a sequential process where, at each turn, the system has to act based on what it has understood so far of the userâ | 1606.03152#1 | 1606.03152#3 | 1606.03152 | [
"1511.08099"
] |
1606.03152#3 | Policy Networks with Two-Stage Training for Dialogue Systems | s utter- ances. Unfortunately, POMDP-based dialogue man- agers have been unï¬ t for online deployment be- cause they typically require several thousands of dialogues for training (GaË si´c et al., 2010, 2012). Nevertheless, recent work has shown that it is pos- sible to train a POMDP-based dialogue system on just a few hundred dialogues corresponding to on- line interactions with users (GaË si´c et al., 2013). However, in order to do so, pre-engineering ef- forts, prior RL knowledge, and domain expertise must be applied. Indeed, summary state and ac- tion spaces must be used and the set of actions must be restricted depending on the current state so that notoriously bad actions are prohibited. In order to alleviate the need for a summary state space, deep RL (Mnih et al., 2013) has recently been applied to dialogue management (Cuay´ahuitl et al., 2015) in the context of negoti- ations. It was shown that deep RL performed sig- niï¬ cantly better than other heuristic or supervised approaches. The authors performed learning over a large action space of 70 actions and they also had to use restricted action sets in order to learn efï¬ ciently over this space. Besides, deep RL was not compared to other RL methods, which we do in this paper. In (Cuay´ahuitl, 2016), a simplistic implementation of deep Q Networks is presented, again with no comparison to other RL methods. In this paper, we propose to efï¬ ciently alleviate the need for summary spaces and restricted actions using deep RL. We analyse four deep RL mod- els: Deep Q Networks (DQN) (Mnih et al., 2013), Double DQN (DDQN) (van Hasselt et al., 2015), Deep Advantage Actor-Critic (DA2C) (Sutton et al., 2000) and a version of DA2C initialized with supervised learning (TDA2C)1 (similar idea to Silver et al. (2016)). All models are trained on a restaurant-seeking domain. | 1606.03152#2 | 1606.03152#4 | 1606.03152 | [
"1511.08099"
] |
1606.03152#4 | Policy Networks with Two-Stage Training for Dialogue Systems | We use the Dialogue State Tracking Challenge 2 (DSTC2) dataset to train an agenda-based user simulator (Schatzmann and Young, 2009) for online learning and to per- form batch RL and supervised learning. We ï¬ rst show that, on summary state and ac- tion spaces, deep RL converges faster than Gaus- sian Processes SARSA (GPSARSA) (GaË si´c et al., 2010). Then we show that deep RL enables us to work on the original state and action spaces. Al- though GPSARSA has also been tried on origi- nal state space (GaË si´c et al., 2012), it is extremely slow in terms of wall-clock time due to its grow- ing kernel evaluations. Indeed, contrary to meth- ods such as GPSARSA, deep RL performs efï¬ - cient generalization over the state space and mem- ory requirements do not increase with the num- ber of experiments. On the simple domain speci- ï¬ ed by DSTC2, we do not need to restrict the ac- tions in order to learn efï¬ ciently. In order to re- move the need for restricted actions in more com- plex domains, we advocate for the use of TDA2C and supervised learning as a pre-training step. We show that supervised learning on a small set of dialogues (only 706 dialogues) signiï¬ cantly boot- straps TDA2C and enables us to start learning with a policy that already selects only valid ac- tions, which makes for a safe user experience in deployment. Therefore, we conclude that TDA2C is very appealing for the practical deployment of POMDP-based dialogue systems. In Section 2 we brieï¬ y review POMDP, RL and GPSARSA. The value-based deep RL models in- vestigated in this paper (DQN and DDQN) are de- scribed in Section 3. Policy networks and DA2C are discussed in Section 4. We then introduce the two-stage training of DA2C in Section 5. Experi- mental results are presented in Section 6. Finally, Section 7 concludes the paper and makes sugges- tions for future research. | 1606.03152#3 | 1606.03152#5 | 1606.03152 | [
"1511.08099"
] |
1606.03152#5 | Policy Networks with Two-Stage Training for Dialogue Systems | 1Teacher DA2C # 2 Preliminaries The reinforcement learning problem consists of an environment (the user) and an agent (the system) (Sutton and Barto} {1998). The environment is de- scribed as a set of continuous or discrete states S and at each state s â ¬ S, the system can perform an action from an action space A(s). The actions can be continuous, but in our case they are assumed to be discrete and finite. At time t, as a consequence of an action A; = a â ¬ A(s), the state transitions from S; = s to Si4, = sâ â ¬ S. In addition, a reward signal Ri+1 = R(S;, At, Si41) â ¬ R pro- vides feedback on the quality of the transitior?| The agentâ s task is to maximize at each state the expected discounted sum of rewards received after visiting this state. For this purpose, value func- tions are computed. The action-state value func- tion Q is defined as: Q" (St, At) = [Rist + Rive +P Ru3t..- | Si = s, At =al, (1) where γ is a discount factor in [0, 1]. In this equa- tion, the policy Ï speciï¬ es the systemâ s behaviour, i.e., it describes the agentâ s action selection pro- cess at each state. A policy can be a deterministic mapping Ï (s) = a, which speciï¬ es the action a to be selected when state s is met. On the other hand, a stochastic policy provides a probability distribu- tion over the action space at each state: Ï (a|s) = P[At = a|St = s]. The agentâ s goal is to ï¬ nd a policy that maximizes the Q-function at each state. It is important to note that here the system does not have direct access to the state s. Instead, it sees this state through a perception process which typically includes an Automatic Speech Recogni- tion (ASR) step, a Natural Language Understand- ing (NLU) step, and a State Tracking (ST) step. | 1606.03152#4 | 1606.03152#6 | 1606.03152 | [
"1511.08099"
] |
1606.03152#6 | Policy Networks with Two-Stage Training for Dialogue Systems | This perception process injects noise in the state of the system and it has been shown that mod- elling dialogue management as a POMDP helps to overcome this noise (Williams and Young, 2007; Young et al., 2013). Within the POMDP framework, the state at time t, St, is not directly observable. Instead, the sys- tem has access to a noisy observation Ot.3 A 2In this paper, upper-case letters are used for random vari- ables, lower-case letters for non-random values (known or unknown), and calligraphy letters for sets. 3Here, the representation of the userâ s goal and the userâ s utterances. POMDP is a tuple (S,.A, P, R,O, Z,7, bo) where S is the state space, A is the action space, P is the function encoding the transition probability: P,(s, 8â ) = P(Si41 = 8â | Sp = 5, Ay = a), Ris the reward function, O is the observation space, Z encodes the observation probabilities Z,(s,0) = P(Q, = 0 | S; = s, Ay = a), 7 is a discount fac- tor, and bo is an initial belief state. The belief state is a distribution over states. Starting from bo, the state tracker maintains and updates the belief state according to the observations perceived during the dialogue. The dialogue manager then operates on this belief state. Consequently, the value functions as well as the policy of the agent are computed on the belief states B;: Q" (Bi, At) = x |S Regs | Br, At U>t [At = a|Bi = bj]. (3) m(alb) = | 1606.03152#5 | 1606.03152#7 | 1606.03152 | [
"1511.08099"
] |
1606.03152#7 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we use GPSARSA as a baseline as it has been proved to be a successful algorithm for training POMDP-based dialogue managers (Engel et al., 2005; GaË si´c et al., 2010). Formally, the Q- function is modelled as a Gaussian process, en- tirely deï¬ ned by a mean and a kernel: Q(B, A) â ¼ GP(m, (k(B, A), k(B, A))). | 1606.03152#6 | 1606.03152#8 | 1606.03152 | [
"1511.08099"
] |
1606.03152#8 | Policy Networks with Two-Stage Training for Dialogue Systems | The mean is usually initialized at 0 and it is then jointly updated with the covariance based on the systemâ s observations (i.e., the visited belief states and actions, and the In order to avoid intractability in the rewards). number of experiments, we use kernel span spar- siï¬ cation (Engel et al., 2005). This technique con- sists of approximating the kernel on a dictionary of linearly independent belief states. This dictio- nary is incrementally built during learning. Kernel span sparsiï¬ cation requires setting a threshold on the precision to which the kernel is computed. As discussed in Section 6, this threshold needs to be ï¬ ne-tuned for a good tradeoff between precision and performance. | 1606.03152#7 | 1606.03152#9 | 1606.03152 | [
"1511.08099"
] |
1606.03152#9 | Policy Networks with Two-Stage Training for Dialogue Systems | # 3 Value-Based Deep Reinforcement Learning Broadly speaking, there are two main streams of methodologies in the RL literature: value approxi- mation and policy gradients. As suggested by their names, the former tries to approximate the value function whereas the latter tries to directly approx- imate the policy. Approximations are necessary for large or continuous belief and action spaces. Indeed, if the belief space is large or continuous it would not be possible to store a value for each state in a table, so generalization over the state space is necessary. | 1606.03152#8 | 1606.03152#10 | 1606.03152 | [
"1511.08099"
] |
1606.03152#10 | Policy Networks with Two-Stage Training for Dialogue Systems | In this context, some of the beneï¬ ts of deep RL techniques are the following: â ¢ Generalisation over the belief space is efï¬ - cient and the need for summary spaces is eliminated, normally with considerably less wall-clock training time comparing to GP- SARSA, for example. â ¢ Memory requirements are limited and can be determined in advance unlike with methods such as GPSARSA. â ¢ Deep architectures with several hidden layers can be efï¬ ciently used for complex tasks and environments. # 3.1 Deep Q Networks A Deep Q-Network (DQN) is a multi-layer neu- ral network which maps a belief state Bt to the values of the possible actions At â A(Bt = b) at that state, QÏ (Bt, At; wt), where wt is the weight vector of the neural network. Neural net- works for the approximation of value functions have long been investigated (Bertsekas and Tsit- siklis, 1996). However, these methods were previ- ously quite unstable (Mnih et al., 2013). In DQN, Mnih et al. (2013, 2015) proposed two techniques to overcome this instability-namely experience re- play and the use of a target network. In experi- ence replay, all the transitions are put in a ï¬ nite pool D (Lin, 1993). Once the pool has reached its predeï¬ ned maximum size, adding a new tran- sition results in deleting the oldest transition in the pool. During training, a mini-batch of tran- sitions is uniformly sampled from the pool, i.e. (Bt, At, Rt+1, Bt+1) â | 1606.03152#9 | 1606.03152#11 | 1606.03152 | [
"1511.08099"
] |
1606.03152#11 | Policy Networks with Two-Stage Training for Dialogue Systems | ¼ U (D). This method re- moves the instability arising from strong corre- lation between the subsequent transitions of an episode (a dialogue). Additionally, a target net- work with weight vector wâ is used. This target network is similar to the Q-network except that its weights are only copied every Ï steps from the Q-network, and remain ï¬ xed during all the other steps. The loss function for the Q-network at iter- ation t takes the following form: Li(we) = EC, Ae, Rey1,Bey1)~U(D) [ (Re + ymax Q"(Br41,4'; wy, ) a 2 â Q" (Bi, Aes w:)) | : (4) # 3.2 Double DQN: | 1606.03152#10 | 1606.03152#12 | 1606.03152 | [
"1511.08099"
] |
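As a concrete illustration of the deep Q-learning recipe used in this paper (Section 3.1: a uniformly sampled experience-replay pool, a target network refreshed every τ steps, and the squared TD loss of Eq. 4), here is a minimal PyTorch-style sketch. The layer sizes, optimizer, and hyper-parameters are placeholders rather than the authors' settings.

```python
import random
from collections import deque
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps a belief-state vector to one Q-value per action."""
    def __init__(self, belief_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(belief_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_actions))
    def forward(self, belief):
        return self.net(belief)

def dqn_update(q_net, target_net, optimizer, replay, batch_size=32, gamma=0.99):
    """One gradient step on a uniformly sampled mini-batch (cf. Eq. 4)."""
    if len(replay) < batch_size:
        return
    batch = random.sample(list(replay), batch_size)
    b, a, r, b_next, done = map(torch.stack, zip(*batch))
    q_sa = q_net(b).gather(1, a.long().view(-1, 1)).squeeze(1)
    with torch.no_grad():
        # The frozen target network both selects and evaluates the next action.
        target = r + gamma * (1.0 - done) * target_net(b_next).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# replay = deque(maxlen=10_000); every tau steps the target network is synced:
# target_net.load_state_dict(q_net.state_dict())
```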
1606.03152#12 | Policy Networks with Two-Stage Training for Dialogue Systems | Overcoming # Overestimation and Instability of DQN The max operator in Equation 4 uses the same value network (i.e., the target network) to se- lect actions and evaluate them. This increases the probability of overestimating the value of the state-action pairs (van Hasselt, 2010; van Hasselt et al., 2015). To see this more clearly, the target part of the loss in Equation 4 can be rewritten as follows: Rt+1 + γQÏ (Bt+1, argmax a QÏ (Bt+1, a; wâ t ); wâ t ). In this equation, the target network is used twice. Decoupling is possible by using the Q-network for action selection as follows (van Hasselt et al., 2015): Rt+1 + γQÏ (Bt+1, argmax a QÏ (Bt+1, a; wt); wâ t ). Then, similarly to DQN, the Q-network is trained using experience replay and the target network is updated every Ï steps. This new version of DQN, called Double DQN (DDQN), uses the two value networks in a decoupled manner, and alleviates the overestimation issue of DQN. This generally re- sults in a more stable learning process (van Hasselt et al., 2015). In the following section, we present deep RL models which perform policy search and output a stochastic policy rather than value approximation with a deterministic policy. # 4 Policy Networks and Deep Advantage Actor-Critic (DA2C) A policy network is a parametrized probabilistic mapping between belief and action spaces: | 1606.03152#11 | 1606.03152#13 | 1606.03152 | [
"1511.08099"
] |
1606.03152#13 | Policy Networks with Two-Stage Training for Dialogue Systems | Ï Î¸(a|b) = Ï (a|b; θ) = P(At = a|Bt = b, θt = θ), where θ is the parameter vector (the weight vec- tor of a neural network).4 In order to train policy 4For parametrization, we use w for value networks and θ for policy networks. networks, policy gradient algorithms have been developed (Williams, 1992; Sutton et al., 2000). Policy gradient algorithms are model-free meth- ods which directly approximate the policy by parametrizing it. The parameters are learnt using a gradient-based optimization method. | 1606.03152#12 | 1606.03152#14 | 1606.03152 | [
"1511.08099"
] |
1606.03152#14 | Policy Networks with Two-Stage Training for Dialogue Systems | We ï¬ rst need to deï¬ ne an objective function J that will lead the search for the parameters θ. This objective function deï¬ nes policy quality. One way of deï¬ ning it is to take the average over the re- wards received by the agent. Another way is to compute the discounted sum of rewards for each trajectory, given that there is a designated start state. The policy gradient is then computed ac- cording to the Policy Gradient Theorem (Sutton et al., 2000). Theorem 1 (Policy Gradient) For any differen- tiable policy Ï Î¸(b, a) and for the average reward or the start-state objective function, the policy gradient can be computed as | 1606.03152#13 | 1606.03152#15 | 1606.03152 | [
"1511.08099"
] |
1606.03152#15 | Policy Networks with Two-Stage Training for Dialogue Systems | â θJ(θ) = EÏ Î¸ [â θ log Ï Î¸(a|b)QÏ Î¸ (b, a)]. Policy gradient methods have been used success- fully in different domains. Two recent examples are AlphaGo by DeepMind (Silver et al., 2016) and MazeBase by Facebook AI (Sukhbaatar et al., 2016). One way to exploit Theorem 1 is to parametrize QÏ Î¸ (b, a) separately (with a parameter vector w) and learn the parameter vector during training in a similar way as in DQN. The trained Q-network can then be used for policy evaluation in Equa- tion 5. Such algorithms are known in general as actor-critic algorithms, where the Q approximator is the critic and Ï Î¸ is the actor (Sutton, 1984; Barto et al., 1990; Bhatnagar et al., 2009). This can be achieved with two separate deep neural networks: a Q-Network and a policy network. However, a direct use of Equation 5 with Q as critic is known to cause high variance (Williams, 1992). An important property of Equation 5 can be used in order to overcome this issue: subtract- ing any differentiable function Ba expressed over the belief space from QÏ Î¸ will not change the gra- dient. A good selection of Ba, which is called the baseline, can reduce the variance dramatically (Sutton and Barto, 1998). As a result, Equation 5 may be rewritten as follows: | 1606.03152#14 | 1606.03152#16 | 1606.03152 | [
"1511.08099"
] |
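A minimal sketch of the advantage actor-critic (DA2C) update described in this paper: the one-step TD error of the value network serves as the advantage that weights the policy-gradient term, and an L2 penalty on the policy keeps the update stable. The paper additionally trains the value network DQN-style with experience replay and a target network, which is omitted here for brevity; sizes and coefficients are illustrative assumptions.

```python
import torch
import torch.nn as nn

def da2c_update(policy_net, value_net, policy_opt, value_opt,
                b, a, r, b_next, done, gamma=0.99, l2=1e-3):
    """One actor-critic step for a single transition (b, a, r, b_next)."""
    v = value_net(b)
    with torch.no_grad():
        td_target = r + gamma * (1.0 - done) * value_net(b_next)
    td_error = (td_target - v).detach()          # advantage estimate (cf. Eq. 7)

    # Critic: regress V(b) toward the one-step TD target.
    value_loss = nn.functional.mse_loss(v, td_target)
    value_opt.zero_grad()
    value_loss.backward()
    value_opt.step()

    # Actor: policy-gradient term weighted by the TD error (cf. Eq. 6),
    # plus L2 regularization of the policy parameters for stability.
    log_probs = torch.log_softmax(policy_net(b), dim=-1)
    policy_loss = -td_error * log_probs[a]
    policy_loss = policy_loss + l2 * sum((p ** 2).sum() for p in policy_net.parameters())
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()
```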
1606.03152#16 | Policy Networks with Two-Stage Training for Dialogue Systems | â θJ(θ) = EÏ Î¸ [â θ log Ï Î¸(a|b)Ad(b, a)], (6) where Ad(b, a) = Q7¢(b, a) â Ba(b) is called the advantage function. A good baseline is the value function Vâ ¢â , for which the advantage function becomes Ad(b,a) = Q7°(b,a) â V(b). How- ever, in this setting, we need to train two sepa- rate networks to parametrize Qâ ° and Vâ °. A bet- ter approach is to use the TD error 6 = Ri41 + V7 (Bi+1) â V7( By) as advantage function. It can be proved that the expected value of the TD error is Q⠢¢(b,a) â V79(b). If the TD error is used, only one network is needed, to parametrize V7(B,) = Vâ ¢ (Bi; wz). We call this network the value network. We can use a DQN-like method to train the value network using both experience re- play and a target network. | 1606.03152#15 | 1606.03152#17 | 1606.03152 | [
"1511.08099"
] |
1606.03152#17 | Policy Networks with Two-Stage Training for Dialogue Systems | For a transition B, = b, A, = a, Riz, = r and By+1 = 0bâ , the advantage function is calculated as in: bp =7 + V(b; w,) â V7(d; we). (7) Because the gradient in Equation 6 is weighted by the advantage function, it may become quite large. In fact, the advantage function may act as a large learning rate. This can cause the learning process to become unstable. To avoid this issue, we add L2 regularization to the policy objective function. We call this method Deep Advantage Actor-Critic (DA2C). | 1606.03152#16 | 1606.03152#18 | 1606.03152 | [
"1511.08099"
] |
1606.03152#18 | Policy Networks with Two-Stage Training for Dialogue Systems | In the next section, we show how this architec- ture can be used to efï¬ ciently exploit a small set of handcrafted data. # 5 Two-stage Training of the Policy Network By deï¬ nition, the policy network provides a prob- ability distribution over the action space. As a re- sult and in contrast to value-based methods such as DQN, a policy network can also be trained with direct supervised learning (Silver et al., 2016). Supervised training of RL agents has been well- studied in the context of Imitation Learning (IL). In IL, an agent learns to reproduce the behaviour of an expert. Supervised learning of the policy was one of the ï¬ rst techniques used to solve this prob- lem (Pomerleau, 1989; Amit and Mataric, 2002). This direct type of imitation learning requires that the learning agent and the expert share the same characteristics. If this condition is not met, IL can be done at the level of the value functions rather than the policy directly (Piot et al., 2015). In this paper, the data that we use (DSTC2) was collected with a dialogue system similar to the one we train so in our case, the demonstrator and the learner share the same characteristics. Similarly to Silver et al. (2016), here, we ini- tialize both the policy network and the value net- work on the data. The policy network is trained by minimising the categorical cross-entropy between the predicted action distribution and the demon- strated actions. The value network is trained di- rectly through RL rather than IL to give more ï¬ | 1606.03152#17 | 1606.03152#19 | 1606.03152 | [
"1511.08099"
] |
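The supervised stage of the two-stage scheme described in this paper (fitting the policy network to the actions observed in logged dialogues by minimizing categorical cross-entropy, before any reinforcement learning) can be sketched as below. Adadelta is the optimizer named in the paper; the data-loading interface and epoch count are assumptions, not the DSTC2 pipeline itself.

```python
import torch
import torch.nn as nn

def pretrain_policy(policy_net, corpus, epochs=10):
    """Supervised initialization of the policy network on logged dialogues.

    `corpus` is assumed to yield mini-batches of (belief_tensor, action_index)
    pairs, i.e. tracked belief states and the system acts taken in the corpus.
    """
    optimizer = torch.optim.Adadelta(policy_net.parameters())
    cross_entropy = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for beliefs, actions in corpus:
            logits = policy_net(beliefs)           # (batch, n_actions)
            loss = cross_entropy(logits, actions)  # match the demonstrated acts
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return policy_net
```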
1606.03152#19 | Policy Networks with Two-Stage Training for Dialogue Systems | ex- ibility in the kind of data we can use. Indeed, our goal is to collect a small number of dialogues and learn from them. IL usually assumes that the data corresponds to expert policies. However, di- alogues collected with a handcrafted policy or in a Wizard-of-Oz (WoZ) setting often contain both optimal and sub-optimal dialogues and RL can be used to learn from all of these dialogues. Super- vised training can also be done on these dialogues as we show in Section 6. Supervised actor-critic architectures following this idea have been proposed in the past (Ben- brahim and Franklin, 1997; Si et al., 2004); the actor works together with a human supervisor to gain competence on its task even if the criticâ | 1606.03152#18 | 1606.03152#20 | 1606.03152 | [
"1511.08099"
] |
1606.03152#20 | Policy Networks with Two-Stage Training for Dialogue Systems | s es- timations are poor. For instance, a human can help a robot move by providing the robot with valid ac- tions. We advocate for the same kind of methods for dialogue systems. It is easy to collect a small number of high-quality dialogues and then use su- pervised learning on this data to teach the system valid actions. This also eliminates the need to de- ï¬ ne restricted action sets. In all the methods above, Adadelta will be used as the gradient-decent optimiser, which in our experiments works noticeably better than other methods such as Adagrad, Adam, and RMSProp. # 6 Experiments # 6.1 Comparison of DQN and GPSARSA 6.1.1 Experimental Protocol In this section, as a ï¬ rst argument in favour of deep RL, we perform a comparison between GPSARSA and DQN on simulated dialogues. We trained an agenda-based user simulator which at each dia- logue turn, provides one or several dialogue act(s) in response to the latest machine act (Schatzmann et al., 2007; Schatzmann and Young, 2009). The dataset used for training this user-simulator is the Dialogue State Tracking Challenge 2 (DSTC2) (Henderson et al., 2014) dataset. State tracking is also trained on this dataset. DSTC2 includes | 1606.03152#19 | 1606.03152#21 | 1606.03152 | [
"1511.08099"
] |
1606.03152#21 | Policy Networks with Two-Stage Training for Dialogue Systems | â pan â GPSARSA â â DAQN-no-summary Average dialogue length 0 5 10 15 1 = _ 2) eS = OLF J ~ 2 o D> g-1 g <= -2 0 5 10 15 x1000 training dialogues & 20 f= â Dan 5 â â ppan ® 15 â â pa2zc = ist 10 os o D 5 gs g zo oO 5 10 15 1 a <j Yo 2 o D g-1 g <x -2 oO 5 10 15 x1000 training dialogues (a) Comparison of GPSARSA on summary spaces and DQN on summary (DQN) and original spaces (DQN-no- summary). (b) Comparison of DA2C, DQN and DDQN on original spaces. | 1606.03152#20 | 1606.03152#22 | 1606.03152 | [
"1511.08099"
] |
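The summary-state construction used in this paper for the GPSARSA comparison (snapping each constraint slot's top-two value probabilities to the nearest point of a fixed grid and one-hot encoding it, and doing the same for each request-slot probability) might look like the sketch below; the helper names are hypothetical, but the two grids are the ones given in the paper.

```python
import numpy as np

CONSTRAINT_GRID = [(1.0, 0.0), (0.8, 0.2), (0.6, 0.2), (0.6, 0.4), (0.4, 0.4)]
REQUEST_GRID = [1.0, 0.8, 0.6, 0.4, 0.0]

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

def summarize_constraint(top2_probs):
    """Map a slot's top-two value probabilities to the closest grid point."""
    distances = [np.linalg.norm(np.asarray(top2_probs) - np.asarray(g))
                 for g in CONSTRAINT_GRID]
    return one_hot(int(np.argmin(distances)), len(CONSTRAINT_GRID))

def summarize_request(prob):
    distances = [abs(prob - g) for g in REQUEST_GRID]
    return one_hot(int(np.argmin(distances)), len(REQUEST_GRID))

def summary_state(constraint_top2_list, request_probs):
    """Concatenate the one-hot codes of all slots into the binary summary state."""
    parts = [summarize_constraint(c) for c in constraint_top2_list]
    parts += [summarize_request(p) for p in request_probs]
    return np.concatenate(parts)   # 12 one-hot vectors of length 5 -> 60 dims
```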
1606.03152#22 | Policy Networks with Two-Stage Training for Dialogue Systems | Figure 1: Comparison of different algorithms on simulated dialogues, without any pre-training. dialogues with users who are searching for restau- rants in Cambridge, UK. In each dialogue, the user has a goal containing constraint slots and request slots. The constraint and request slots available in DSTC2 are listed in Appendix A. The constraints are the slots that the user has to provide to the system (for instance the user is looking for a speciï¬ c type of food in a given area) and the requests are the slots that the user must receive from the system (for instance the user wants to know the address and phone number of the restaurant found by the system). Similarly, the belief state is composed of two parts: constraints and requests. The constraint part includes the probabilities of the top two values for each constraint slot as returned by the state tracker (the value might be empty with a probability zero if the slot has not been mentioned). The request part, on the other hand, includes the probability of each request slot. For instance the constraint part might be [food: (Italian, 0.85) (Indian, 0.1) (Not mentioned, 0.05)] and the request part might be [area: 0.95] meaning that the user is probably looking for an Italian restaurant and that he wants to know the area of the restaurant found by the sys- tem. To compare DQN to GPSARSA, we work on a summary state space (GaË si´c et al., 2012, 2013). Each constraint is mapped to a one-hot vector, with 1 corresponding to the tuple in the grid vec- tor gc = [(1, 0), (.8, .2), (.6, .2), (.6, .4), (.4, .4)] that minimizes the Euclidean distance to the top two probabilities. Similarly, each request slot is mapped to a one-hot vector according to the grid gr = [1, .8, .6, .4, 0.]. | 1606.03152#21 | 1606.03152#23 | 1606.03152 | [
"1511.08099"
] |
1606.03152#23 | Policy Networks with Two-Stage Training for Dialogue Systems | The ï¬ nal belief vector, known as the summary state, is deï¬ ned as the con- catenation of the constraint and request one-hot vectors. Each summary state is a binary vector of length 60 (12 one-hot vectors of length 5) and the total number of states is 512. We also work on a summary action space and we use the act types listed in Table 1 in Appendix A. We add the necessary slot information as a post processing step. For example, the request act means that the system wants to request a slot from the user, e.g. request(food). In this case, the se- lection of the slot is based on min-max probabil- ity, i.e., the most ambiguous slot (which is the slot we want to request) is assumed to be the one for which the value with maximum probability has the minimum probability compared to the most cer- tain values of the other slots. Note that this heuris- tic approach to compute the summary state and ac- tion spaces is a requirement to make GPSARSA tractable; it is a serious limitation in general and should be avoided. As reward, we use a normalized scheme with a reward of +1 if the dialogue ï¬ | 1606.03152#22 | 1606.03152#24 | 1606.03152 | [
"1511.08099"
] |
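The two exploration schemes compared in this paper's experiments (ε-softmax for GPSARSA, which with probability 1−ε samples from the logistic distribution over Q-values, and plain ε-greedy for the deep networks) can be written as small helpers; this is an assumed standalone sketch, not the authors' code.

```python
import numpy as np

def epsilon_softmax_action(q_values, epsilon):
    """With probability epsilon act uniformly at random; otherwise sample from
    the softmax (logistic) distribution over the Q-values."""
    if np.random.rand() < epsilon:
        return int(np.random.randint(len(q_values)))
    z = np.exp(q_values - np.max(q_values))        # numerically stable softmax
    return int(np.random.choice(len(q_values), p=z / z.sum()))

def epsilon_greedy_action(q_values, epsilon):
    """With probability epsilon act uniformly at random; otherwise take argmax Q."""
    if np.random.rand() < epsilon:
        return int(np.random.randint(len(q_values)))
    return int(np.argmax(q_values))
```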
1606.03152#24 | Policy Networks with Two-Stage Training for Dialogue Systems | nishes successfully before 30 turns,5 a reward of -1 if the dialogue is not successful after 30 turns, and a reward of -0.03 for each turn. A reward of -1 is also distributed to the system if the user hangs up. In our settings, the user simulator hangs up every time the system pro- poses a restaurant which does not match at least one of his constraints. For the deep @-network, a Multi-Layer Percep- tron (MLP) is used with two fully connected hid- den layers, each having a tanh activation. The output layer has no activation and it provides the value for each of the summary machine acts. The summary machine acts are mapped to orig- inal acts using the heuristics explained previ- ously. | 1606.03152#23 | 1606.03152#25 | 1606.03152 | [
"1511.08099"
] |
1606.03152#25 | Policy Networks with Two-Stage Training for Dialogue Systems | Both algorithms are trained with 15000 dialogues. GPSARSA is trained with eâ ¬-softmax exploration, which, with probability 1 â â ¬, se- lects an action based on the logistic distribution Q(b,a) Plalb] = F eahay Oba) lects an action in a uniformly random way. From our experiments, this exploration scheme works best in terms of both convergence rate and vari- ance. For DQN, we use a simple e-greedy explo- ration which, with probability â | 1606.03152#24 | 1606.03152#26 | 1606.03152 | [
"1511.08099"
] |
1606.03152#26 | Policy Networks with Two-Stage Training for Dialogue Systems | ¬ (same â ¬ as above), uniformly selects an action and, with probability 1â e, selects an action maximizing the Q-function. For both algorithms, ¢ is annealed to less than 0.1 over the course of training. and, with probability â ¬, se- In a second experiment, we remove both summary state and action spaces for DQN, i.e., we do not perform the Euclidean-distance map- ping as before but instead work directly on the probabilities themselves. Additionally, the state is augmented with the probability (returned by the state tracker) of each user act (see Table 2 in Appendix A), the dialogue turn, and the number of results returned by the database (0 if there was no query). Consequently, the state consists of 31 continuous values and two discrete values. The original action space is composed of 11 actions: offer6, select-food, request-area, select-pricerange, request-pricerange, request-food, expl-conf-area, expl-conf-food, expl-conf-pricerange, repeat. | 1606.03152#25 | 1606.03152#27 | 1606.03152 | [
"1511.08099"
] |
1606.03152#27 | Policy Networks with Two-Stage Training for Dialogue Systems | There 5A dialogue is successful if the user retrieves all the re- quest slots for a restaurant matching all the constraints of his goal. 6This act consists of proposing a restaurant to the user. In order to be consistent with the DSTC2 dataset, an offer al- ways contains the values for all the constraints understood by the system, e.g. offer(name = Super Ramen, food = Japanese, price range = cheap). is no post-processing via min-max selection anymore since the slot is part of the action, e.g., select-area. The policies are evaluated after each 1000 train- ing dialogues on 500 test dialogues without explo- ration. 6.1.2 Results Figure 1 illustrates the performance of DQN com- pared to GPSARSA. In our experiments with GP- SARSA we found that it was difï¬ | 1606.03152#26 | 1606.03152#28 | 1606.03152 | [
"1511.08099"
] |
1606.03152#28 | Policy Networks with Two-Stage Training for Dialogue Systems | cult to ï¬ nd a good tradeoff between precision and efï¬ ciency. Indeed, for low precision, the algorithm learned rapidly but did not reach optimal behaviour, whereas higher precision made learning extremely slow but resulted in better end-performance. On summary spaces, DQN outperforms GPSARSA in terms of convergence. Indeed, GPSARSA re- quires twice as many dialogues to converge. It is also worth mentioning here that the wall-clock training time of GPSARSA is considerably longer than the one of DQN due to kernel evaluation. The second experiment validates the fact that Deep RL can be efï¬ ciently trained directly on the belief state returned by the state tracker. Indeed, DQN on the original spaces performs as well as GPSARSA on the summary spaces. In the next section, we train and compare the deep RL networks previously described on the original state and action spaces. # 6.2 Comparison of the Deep RL Methods 6.2.1 Experimental Protocol Similarly to the previous example, we work on a restaurant domain and use the DSTC2 speci- fications. | 1606.03152#27 | 1606.03152#29 | 1606.03152 | [
"1511.08099"
] |
1606.03152#29 | Policy Networks with Two-Stage Training for Dialogue Systems | We use eâ greedy exploration for all four algorithms with eâ ¬ starting at 0.5 and be- ing linearly annealed at a rate of A = 0.99995. To speed up the learning process, the actions select-pricerange, select-area, and select-food are excluded from exploration. Note that this set does not depend on the state and is meant for exploration only. All the actions can be performed by the system at any moment. We derived two datasets from DSTC2. | 1606.03152#28 | 1606.03152#30 | 1606.03152 | [
"1511.08099"
] |
1606.03152#30 | Policy Networks with Two-Stage Training for Dialogue Systems | The ï¬ rst dataset contains the 2118 dialogues of DSTC2. We had these dialogues rated by a human expert, based on the quality of dialogue management and on a scale of 0 to 3. The second dataset only con- tains the dialogues with a rating of 3 (706 dia- logues). The underlying assumption is that these dialogues correspond to optimal policies. â DDAN + Batch â DON + Batch 15 â â DA2C + Batch Average dialogue length ° Average rewards -2 oO 5 10 15 x1000 training dialogues â SupExptBatchDA2c â â SupFullBatchDA2c â BatchDA2c â â DA2c Average dialogue length Average rewards 0 5 10 15 x1000 training dialogues (a) Comparison of DA2C, DQN and DDQN after batch ini- tialization. (b) Comparison of DA2C and DA2C after batch initializa- tion (batchDA2C), and TDA2C after supervised training on expert (SupExptBatchDA2C) and non-expert data (SupFull- BatchDA2C). Figure 2: Comparison of different algorithms on simulated dialogues, with pre-training. We compare the convergence rates of the deep RL models in different settings. First, we com- pare DQN, DDQN and DA2C without any pre- training (Figure 1b). Then, we compare DQN, DDQN and TDA2C with an RL initialization on the DSTC2 dataset (Figure 2a). Finally, we focus on the advantage actor-critic models and compare DA2C, TDA2C, TDA2C with batch initialization on DSTC2, and TDA2C with batch initialization on the expert dialogues (Figure 2b). of the dialogue acts chosen by the system were still appropriate, which explains that the system learns acceptable behavior from the entire dataset. This shows that supervised training, even when performed not only on optimal dialogues, makes learning much faster and relieves the need for re- stricted action sets. Valid actions are learnt from the dialogues and then RL exploits the good and bad dialogues to pursue training towards a high performing policy. | 1606.03152#29 | 1606.03152#31 | 1606.03152 | [
"1511.08099"
] |
1606.03152#31 | Policy Networks with Two-Stage Training for Dialogue Systems | # 6.2.2 Results # 7 Concluding Remarks As expected, DDQN converges faster than DQN on all experiments. Figure 1b shows that, with- out any pre-training, DA2C is the one which con- verges the fastest (6000 dialogues vs. 10000 dia- logues for the other models). Figure 2a gives con- sistent results and shows that, with initial train- ing on the 2118 dialogues of DSTC2, TDA2C converges signiï¬ cantly faster than the other mod- els. Figure 2b focuses on DA2C and TDA2C. Compared to batch training, supervised training on DSTC2 speeds up convergence by 2000 dia- logues (3000 dialogues vs. 5000 dialogues). In- terestingly, there does not seem to be much dif- ference between supervised training on the expert data and on DSTC2. The expert data only con- sists of 706 dialogues out of 2118 dialogues. Our observation is that, in the non-expert data, many In this paper, we used policy networks for dia- logue systems and trained them in a two-stage fashion: supervised training and batch reinforce- ment learning followed by online reinforcement learning. An important feature of policy networks is that they directly provide a probability distribu- tion over the action space, which enables super- vised training. We compared the results with other deep reinforcement learning algorithms, namely Deep Q Networks and Double Deep Q Networks. The combination of supervised and reinforcement learning is the main beneï¬ t of our method, which paves the way for developing trainable end-to-end dialogue systems. Supervised training on a small dataset considerably bootstraps the learning pro- cess and can be used to signiï¬ cantly improve the convergence rate of reinforcement learning in sta- tistically optimised dialogue systems. | 1606.03152#30 | 1606.03152#32 | 1606.03152 | [
"1511.08099"
] |
1606.03152#32 | Policy Networks with Two-Stage Training for Dialogue Systems | # References R. Amit and M. Mataric. 2002. Learning movement sequences from demonstration. In Proc. Int. Conf. on Development and Learning. pages 203–208. A. G. Barto, R. S. Sutton, and C. W. Anderson. 1990. In Artificial Neural Networks, chapter Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems, pages 81–93. H. Benbrahim and J. A. Franklin. 1997. | 1606.03152#31 | 1606.03152#33 | 1606.03152 | [
"1511.08099"
] |
1606.03152#33 | Policy Networks with Two-Stage Training for Dialogue Systems | Biped dynamic walking using reinforcement learning. Robotics and Autonomous Systems 22:283–302. D. P. Bertsekas and J. Tsitsiklis. 1996. Neuro-Dynamic Programming. Athena Scientific. S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. 2009. Natural Actor-Critic Algorithms. Automatica 45(11). H. Cuayáhuitl. 2016. SimpleDS: A simple deep reinforcement learning dialogue system. arXiv:1601.04574v1 [cs.AI]. H. Cuayáhuitl, S. Keizer, and O. | 1606.03152#32 | 1606.03152#34 | 1606.03152 | [
"1511.08099"
] |
1606.03152#34 | Policy Networks with Two-Stage Training for Dialogue Systems | Lemon. 2015. Strategic dialogue management via deep reinforcement learning. arXiv:1511.08099 [cs.AI]. L. Daubigney, M. Geist, S. Chandramohan, and O. Pietquin. 2012. A Comprehensive Reinforcement Learning Framework for Dialogue Management Optimisation. IEEE Journal of Selected Topics in Signal Processing 6(8):891–902. Y. Engel, S. Mannor, and R. Meir. 2005. Reinforcement learning with Gaussian processes. In Proc. of ICML. | 1606.03152#33 | 1606.03152#35 | 1606.03152 | [
"1511.08099"
] |
1606.03152#35 | Policy Networks with Two-Stage Training for Dialogue Systems | M. Gašić, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S.J. Young. 2013. On-line policy optimisation of Bayesian spoken dialogue systems via human interaction. In Proc. of ICASSP. pages 8367–8371. M. Gašić, M. Henderson, B. Thomson, P. Tsiakoulis, and S. | 1606.03152#34 | 1606.03152#36 | 1606.03152 | [
"1511.08099"
] |
1606.03152#36 | Policy Networks with Two-Stage Training for Dialogue Systems | Young. 2012. Policy optimisation of POMDP-based dialogue systems without state space compression. In Proc. of SLT. M. Gašić, F. Jurčíček, S. Keizer, F. Mairesse, B. Thomson, K. Yu, and S. Young. 2010. Gaussian processes for fast policy optimisation of POMDP-based dialogue managers. In Proc. of SIGDIAL. M. Henderson, B. Thomson, and J. Williams. 2014. | 1606.03152#35 | 1606.03152#37 | 1606.03152 | [
"1511.08099"
] |
1606.03152#37 | Policy Networks with Two-Stage Training for Dialogue Systems | The Second Dialog State Tracking Challenge. In Proc. of SIGDIAL. R. Laroche, G. Putois, and P. Bretier. 2010. Optimising a handcrafted dialogue system design. In Proc. of Interspeech. O. Lemon and O. Pietquin. 2007. Machine learning for spoken dialogue systems. In Proc. of Interspeech. pages 2685–2688. E. Levin, R. Pieraccini, and W. Eckert. 1997. | 1606.03152#36 | 1606.03152#38 | 1606.03152 | [
"1511.08099"
] |
1606.03152#38 | Policy Networks with Two-Stage Training for Dialogue Systems | Learning dialogue strategies within the Markov decision process framework. In Proc. of ASRU. L.-J. Lin. 1993. Reinforcement learning for robots using neural networks. Ph.D. thesis, Carnegie Mellon University. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. 2013. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop. | 1606.03152#37 | 1606.03152#39 | 1606.03152 | [
"1511.08099"
] |
1606.03152#39 | Policy Networks with Two-Stage Training for Dialogue Systems | V. Mnih, K. Kavukcuoglu, D. Silver, A.A. Rusu, J. Veness, M.G. Bellemare, A. Graves, M. Riedmiller, A.K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529–533. B. Piot, M. Geist, and O. Pietquin. 2015. | 1606.03152#38 | 1606.03152#40 | 1606.03152 | [
"1511.08099"
] |
1606.03152#40 | Policy Networks with Two-Stage Training for Dialogue Systems | Imitation Learning Applied to Embodied Conversational Agents. In Proc. of MLIS. D. A. Pomerleau. 1989. Alvinn: An autonomous land vehicle in a neural network. In Proc. of NIPS. pages 305–313. J. Schatzmann, B. Thomson, K. Weilhammer, H. Ye, and S. Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Proc. of NAACL HLT. pages 149–152. J. Schatzmann and S. Young. 2009. The hidden agenda user simulation model. Proc. of TASLP 17(4):733–747. J. Si, A. G. Barto, W. B. Powell, and D. Wunsch. 2004. Supervised Actor-Critic Reinforcement Learning, pages 359–380. D. Silver, A. Huang, C.J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. 2016. | 1606.03152#39 | 1606.03152#41 | 1606.03152 | [
"1511.08099"
] |
1606.03152#41 | Policy Networks with Two-Stage Training for Dialogue Systems | Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489. S. Sukhbaatar, A. Szlam, G. Synnaeve, S. Chintala, and R. Fergus. 2016. MazeBase: A sandbox for learning from games. arxiv.org/pdf/1511.07401 [cs.LG]. R. S. Sutton. 1984. Temporal credit assignment in reinforcement learning. | 1606.03152#40 | 1606.03152#42 | 1606.03152 | [
"1511.08099"
] |
1606.03152#42 | Policy Networks with Two-Stage Training for Dialogue Systems | Ph.D. thesis, University of Massachusetts at Amherst, Amherst, MA, USA. R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Proc. of NIPS. volume 12, pages 1057–1063. R.S. Sutton and A.G. Barto. 1998. | 1606.03152#41 | 1606.03152#43 | 1606.03152 | [
"1511.08099"
] |
1606.03152#43 | Policy Networks with Two-Stage Training for Dialogue Systems | Reinforcement Learning. MIT Press. H. van Hasselt. 2010. Double Q-learning. In Proc. of NIPS. pages 2613–2621. H. van Hasselt, A. Guez, and D. Silver. 2015. Deep reinforcement learning with double Q-learning. arXiv:1509.06461v3 [cs.LG]. J.D. Williams and S. Young. 2007. Partially observable Markov decision processes for spoken dialog systems. Proc. of CSL 21:231–422. R.J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8:229–256. S. Young, M. Gasic, B. Thomson, and J. Williams. 2013. | 1606.03152#42 | 1606.03152#44 | 1606.03152 | [
"1511.08099"
] |
1606.03152#44 | Policy Networks with Two-Stage Training for Dialogue Systems | POMDP-based statistical spoken dialog systems: A review. Proc. IEEE 101(5):1160–1179. # A Specifications of restaurant search in DSTC2 Constraint slots: area, type of food, price range. Request slots: area, type of food, address, name, price range, postcode, signature dish, phone number. Table 1: Summary actions (action: description). Cannot help: No restaurant in the database matches the user's constraints. Confirm Domain: Confirm that the user is looking for a restaurant. Explicit Confirm: Ask the user to confirm a piece of information. Offer: Propose a restaurant to the user. Repeat: Ask the user to repeat. Request: Request a slot from the user. Select: Ask the user to select a value between two propositions (e.g. select between Italian and Indian). Table 2: User actions (action: description). Deny: Deny a piece of information. Null: Say nothing. Request More: Request more options. Confirm: Ask the system to confirm a piece of information. Acknowledge: Acknowledge. Affirm: Say yes. | 1606.03152#43 | 1606.03152#45 | 1606.03152 | [
"1511.08099"
] |
1606.03152#45 | Policy Networks with Two-Stage Training for Dialogue Systems | Request: Request a slot value. Inform: Inform the system of a slot value. Thank you: Thank the system. Repeat: Ask the system to repeat. Request Alternatives: Request alternative restaurant options. Negate: Say no. Bye: Say goodbye to the system. Hello: Say hello to the system. Restart: Ask the system to restart. | 1606.03152#44 | 1606.03152 | [
"1511.08099"
] |
|
1606.02960#0 | Sequence-to-Sequence Learning as Beam-Search Optimization | arXiv:1606.02960v2 [cs.CL] 10 Nov 2016 # Sequence-to-Sequence Learning as Beam-Search Optimization Sam Wiseman and Alexander M. Rush School of Engineering and Applied Sciences Harvard University Cambridge, MA, USA {swiseman,srush}@seas.harvard.edu # Abstract Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its remarkable accuracy in estimating local, next-word distributions. In this work, we introduce a model and beam-search training scheme, based on the work of Daumé III and Marcu (2005), that extends seq2seq to learn global sequence scores. This structured approach avoids classical biases associated with local training and unifies the training loss with the test-time usage, while preserving the proven model architecture of seq2seq and its efficient training approach. We show that our system outperforms a highly-optimized attention-based seq2seq system and other baselines on three different sequence to sequence tasks: word ordering, parsing, and machine translation. text generation applications, such as image or video captioning (Venugopalan et al., 2015; Xu et al., 2015). The dominant approach to training a seq2seq system is as a conditional language model, with training maximizing the likelihood of each successive target word conditioned on the input sequence and the gold history of target words. Thus, training uses a strictly word-level loss, usually cross-entropy over the target vocabulary. | 1606.02960#1 | 1606.02960 | [
"1604.08633"
] |
|
1606.02960#1 | Sequence-to-Sequence Learning as Beam-Search Optimization | This approach has proven to be very effective and efficient for training neural language models, and seq2seq models similarly obtain impressive perplexities for word-generation tasks. Notably, however, seq2seq models are not used as conditional language models at test-time; they must instead generate fully-formed word sequences. In practice, generation is accomplished by searching over output sequences greedily or with beam search. In this context, Ranzato et al. (2016) note that the combination of the training and generation scheme just described leads to at least two major issues: | 1606.02960#0 | 1606.02960#2 | 1606.02960 | [
"1604.08633"
] |
1606.02960#2 | Sequence-to-Sequence Learning as Beam-Search Optimization | # 1 Introduction Sequence-to-Sequence learning with deep neural networks (herein, seq2seq) (Sutskever et al., 2011; Sutskever et al., 2014) has rapidly become a very useful and surprisingly general-purpose tool for natural language processing. In addition to demonstrating impressive results for machine translation (Bahdanau et al., 2015), roughly the same model and training have also proven to be useful for sentence compression (Filippova et al., 2015), parsing (Vinyals et al., 2015), and dialogue systems (Serban et al., 2016), and they additionally underlie other 1. | 1606.02960#1 | 1606.02960#3 | 1606.02960 | [
"1604.08633"
] |
1606.02960#3 | Sequence-to-Sequence Learning as Beam-Search Optimization | Exposure Bias: the model is never exposed to its own errors during training, and so the inferred histories at test-time do not resemble the gold training histories. 2. Loss-Evaluation Mismatch: training uses a word-level loss, while at test-time we target improving sequence-level evaluation metrics, such as BLEU (Papineni et al., 2002). We might additionally add the concern of label bias (Lafferty et al., 2001) to the list, since word-probabilities at each time-step are locally normalized, guaranteeing that successors of incorrect his- | 1606.02960#2 | 1606.02960#4 | 1606.02960 | [
"1604.08633"
] |
1606.02960#4 | Sequence-to-Sequence Learning as Beam-Search Optimization | tories receive the same mass as do the successors of the true history. In this work we develop a non-probabilistic variant of the seq2seq model that can assign a score to any possible target sequence, and we propose a training procedure, inspired by the learning as search optimization (LaSO) framework of Daumé III and Marcu (2005), that defines a loss function in terms of errors made during beam search. Furthermore, we provide an efficient algorithm to back-propagate through the beam-search procedure during seq2seq training. This approach offers a possible solution to each of the three aforementioned issues, while largely maintaining the model architecture and training efficiency of standard seq2seq learning. Moreover, by scoring sequences rather than words, our approach also allows for enforcing hard-constraints on sequence generation at training time. To test out the effectiveness of the proposed approach, we develop a general-purpose seq2seq system with beam search optimization. We run experiments on three very different problems: word ordering, syntactic parsing, and machine translation, and compare to a highly-tuned seq2seq system with attention (Luong et al., 2015). The version with beam search optimization shows significant improvements on all three tasks, and particular improvements on tasks that require difficult search. # 2 Related Work | 1606.02960#3 | 1606.02960#5 | 1606.02960 | [
"1604.08633"
] |
1606.02960#5 | Sequence-to-Sequence Learning as Beam-Search Optimization | The issues of exposure bias and label bias have received much attention from authors in the structured prediction community, and we briefly review some of this work here. One prominent approach to combating exposure bias is that of SEARN (Daumé III et al., 2009), a meta-training algorithm that learns a search policy in the form of a cost-sensitive classifier trained on examples generated from an interpolation of an oracle policy and the model's current (learned) policy. Thus, SEARN explicitly targets the mismatch between oracular training and non-oracular (often greedy) test-time inference by training on the output of the model's own policy. DAgger (Ross et al., 2011) is a similar approach, which differs in terms of how training examples are generated and aggregated, and there have additionally been impor- | 1606.02960#4 | 1606.02960#6 | 1606.02960 | [
"1604.08633"
] |
1606.02960#6 | Sequence-to-Sequence Learning as Beam-Search Optimization | tant refinements to this style of training over the past several years (Chang et al., 2015). When it comes to training RNNs, SEARN/DAgger has been applied under the name "scheduled sampling" (Bengio et al., 2015), which involves training an RNN to generate the t+1'st token in a target sequence after consuming either the true t'th token, or, with probability that increases throughout training, the predicted t'th token. It is uncommon to use beam search when training with SEARN/DAgger. The early-update (Collins and Roark, 2004) and LaSO (Daumé III and Marcu, 2005) training strategies, however, explicitly account for beam search, and describe strategies for updating parameters when the gold structure becomes unreachable during search. Early update and LaSO differ primarily in that the former discards a training example after the first search error, whereas LaSO resumes searching after an error from a state that includes the gold partial structure. In the context of feed-forward neural network training, early update training has been recently explored in a feed-forward setting by Zhou et al. (2015) and Andor et al. (2016). Our work differs in that we adopt a LaSO-like paradigm (with some minor modifications), and apply it to the training of seq2seq RNNs (rather than feed-forward networks). We also note that Watanabe and Sumita (2015) apply maximum-violation training (Huang et al., 2012), which is similar to early-update, to a parsing model with recurrent components, and that Yazdani and Henderson (2015) use beam-search in training a discriminative, locally normalized dependency parser with recurrent components. Recently authors have also proposed alleviating exposure bias using techniques from reinforcement learning. Ranzato et al. (2016) follow this approach to train RNN decoders in a seq2seq model, and they obtain consistent improvements in performance, even over models trained with scheduled sampling. As Daumé III and Marcu (2005) note, LaSO is similar to reinforcement learning, except it does not require "exploration" in the same way. Such exploration may be unnecessary in supervised text-generation, since we typically know the gold partial sequences at each time-step. | 1606.02960#5 | 1606.02960#7 | 1606.02960 | [
"1604.08633"
] |
1606.02960#7 | Sequence-to-Sequence Learning as Beam-Search Optimization | Shen et al. (2016) use minimum risk training (approximated by sampling) to address the issues of exposure bias and loss-evaluation mismatch for seq2seq MT, and show impressive performance gains. Whereas exposure bias results from training in a certain way, label bias results from properties of the model itself. In particular, label bias is likely to affect structured models that make sub-structure predictions using locally-normalized scores. Because the neural and non-neural literature on this point has recently been reviewed by Andor et al. (2016), we simply note here that RNN models are typically locally normalized, and we are unaware of any specifically seq2seq work with RNNs that does not use locally-normalized scores. The model we introduce here, however, is not locally normalized, and so should not suffer from label bias. We also note that there are some (non-seq2seq) exceptions to the trend of locally normalized RNNs, such as the work of Sak et al. (2014) and Voigtlaender et al. (2015), who train LSTMs in the context of HMMs for speech recognition using sequence-level objectives; their work does not consider search, however. | 1606.02960#6 | 1606.02960#8 | 1606.02960 | [
"1604.08633"
] |
1606.02960#8 | Sequence-to-Sequence Learning as Beam-Search Optimization | # 3 Background and Notation In the simplest seq2seq scenario, we are given a collection of source-target sequence pairs and tasked with learning to generate target sequences from source sequences. For instance, we might view machine translation in this way, where in particular we attempt to generate English sentences from (corresponding) French sentences. Seq2seq models are part of the broader class of "encoder-decoder" models (Cho et al., 2014), which first use an encoding model to transform a source object into an encoded representation x. Many different sequential (and non-sequential) encoders have proven to be effective for different source domains. In this work we are agnostic to the form of the encoding model, and simply assume an abstract source representation x. Once the input sequence is encoded, seq2seq models generate a target sequence using a decoder. The decoder is tasked with generating a target sequence of words from a target vocabulary V. In particular, words are generated sequentially by conditioning on the input representation x and on the previously generated words or history. We use the notation w1:T to refer to an arbitrary word sequence of length T, and the notation y1:T to refer to the gold (i.e., correct) target word sequence for an input x. Most seq2seq systems utilize a recurrent neural network (RNN) for the decoder model. Formally, a recurrent neural network is a parameterized non-linear function RNN that recursively maps a sequence of vectors to a sequence of hidden states. Let m1, . . . , mT be a sequence of T vectors, and let h0 be some initial state vector. Applying an RNN to any such sequence yields hidden states ht at each time-step t, as follows: ht ← RNN(mt, ht−1; θ), where θ is the set of model parameters, which are shared over time. In this work, the vectors mt will always correspond to the embeddings of a target word sequence w1:T, and so we will also write ht ← RNN(wt, ht−1; θ), with wt standing in for its embedding. RNN decoders are typically trained to act as conditional language models. That is, one attempts to model the probability of the t' | 1606.02960#7 | 1606.02960#9 | 1606.02960 | [
"1604.08633"
] |
1606.02960#9 | Sequence-to-Sequence Learning as Beam-Search Optimization | th target word conditioned on x and the target history by stipulating that p(wt|w1:t−1, x) = g(wt, ht−1, x), for some parameterized function g typically computed with an affine layer followed by a softmax. In computing these probabilities, the state ht−1 represents the target history, and h0 is typically set to be some function of x. The complete model (including encoder) is trained, analogously to a neural language model, to minimize the cross-entropy loss at each time-step while conditioning on the gold history in the training data. That is, the model is trained to minimize −log ∏_{t=1}^{T} p(yt|y1:t−1, x). Discrete sequence generation can be performed by approximately maximizing the probability of the target sequence under the conditional distribution, ŷ1:T = argbeam_{w1:T} ∏_{t=1}^{T} p(wt|w1:t−1, x), where we use the notation argbeam to emphasize that the decoding process requires heuristic search, since the RNN model is non-Markovian. In practice, a simple beam search procedure that explores K prospective histories at each time-step has proven to be an effective decoding approach. However, as noted above, decoding in this manner after conditional language-model style training potentially suffers from the is- | 1606.02960#8 | 1606.02960#10 | 1606.02960 | [
"1604.08633"
] |
1606.02960#10 | Sequence-to-Sequence Learning as Beam-Search Optimization | sues of exposure bias and label bias, which motivates the work of this paper. # 4 Beam Search Optimization We begin by making one small change to the seq2seq modeling framework. Instead of predicting the probability of the next word, we instead learn to produce (non-probabilistic) scores for ranking sequences. Define the score of a sequence consisting of history w1:t−1 followed by a single word wt as f(wt, ht−1, x), where f is a parameterized function examining the current hidden-state of the relevant RNN at time t−1 as well as the input representation x. In experiments, our f will have an identical form to g but without the fi | 1606.02960#9 | 1606.02960#11 | 1606.02960 | [
"1604.08633"
] |
1606.02960#11 | Sequence-to-Sequence Learning as Beam-Search Optimization | nal softmax transformation (which transforms unnormalized scores into probabilities), thereby allowing the model to avoid issues associated with the label bias problem. More importantly, we also modify how this model is trained. Ideally we would train by comparing the gold sequence to the highest-scoring complete sequence. However, because finding the argmax sequence according to this model is intractable, we propose to adopt a LaSO-like (Daumé III and Marcu, 2005) scheme to train, which we will refer to as beam search optimization (BSO). In particular, we define a loss that penalizes the gold sequence falling off the beam during training.1 The proposed training approach is a simple way to expose the model to incorrect histories and to match the training procedure to test generation. Furthermore we show that it can be implemented efficiently without changing the asymptotic run-time of training, beyond a factor of the beam size K. # 4.1 Search-Based Loss We now formalize this notion of a search-based loss for RNN training. Assume we have a set St of K candidate sequences of length t. We can calculate a score for each sequence in St using a scoring function f parameterized with an RNN, as above, and we define the sequence ŷ(K)1:t ∈ St to be the K' | 1606.02960#10 | 1606.02960#12 | 1606.02960 | [
"1604.08633"
] |
1606.02960#12 | Sequence-to-Sequence Learning as Beam-Search Optimization | th ranked 1Using a non-probabilistic model further allows us to incur no loss (and thus require no update to parameters) when the gold sequence is on the beam; this contrasts with models based on a CRF loss, such as those of Andor et al. (2016) and Zhou et al. (2015), though in training those models are simply not updated when the gold sequence remains on the beam. sequence in St according to f. That is, assuming distinct scores, |{ŷ(k)1:t ∈ St | f(ŷ(k)t, ĥ(k)t−1) > f(ŷ(K)t, ĥ(K)t−1)}| = K − 1, | 1606.02960#11 | 1606.02960#13 | 1606.02960 | [
"1604.08633"
] |
1606.02960#13 | Sequence-to-Sequence Learning as Beam-Search Optimization | where ĥ(k)t−1 is the RNN state corresponding to its t−1'st step, and where we have omitted the x argument to f for brevity. We now define a loss function that gives loss each time the score of the gold prefix y1:t does not exceed that of ŷ(K)1:t: L(f) = Σ_{t=1}^{T} Δ(ŷ(K)1:t) [1 − f(yt, ht−1) + f(ŷ(K)t, ĥ(K)t−1)]. Above, the Δ(ŷ(K)1:t) term denotes a mistake-specific cost-function, which allows us to scale the loss depending on the severity of erroneously predicting ŷ(K)1:t; it is assumed to return 0 when the margin requirement is satisfied, and a positive number otherwise. | 1606.02960#12 | 1606.02960#14 | 1606.02960 | [
"1604.08633"
] |
1606.02960#14 | Sequence-to-Sequence Learning as Beam-Search Optimization | It is this term that allows us to use sequence- rather than word-level costs in training (addressing the 2nd issue in the introduction). For instance, when training a seq2seq model for machine translation, it may be desirable to have Δ(ŷ(K)1:t) be inversely related to the partial sentence-level BLEU score of ŷ(K)1:t with y1:t; we experiment along these lines in Section 5.3. Finally, because we want the full gold sequence to be at the top of the beam at the end of search, when t = T we modify the loss to require the score of y1:T to exceed the score of the highest ranked incorrect prediction by a margin. We can optimize the loss L using a two-step process: (1) in a forward pass, we compute candidate sets St and record margin violations (sequences with non-zero loss); (2) in a backward pass, we back-propagate the errors through the seq2seq RNNs. Unlike standard seq2seq training, the first step requires running search (in our case beam search) to find margin violations. The second step can be done by adapting back-propagation through time (BPTT). We next discuss the details of this process. # 4.2 Forward: Find Violations | 1606.02960#13 | 1606.02960#15 | 1606.02960 | [
"1604.08633"
] |
1606.02960#15 | Sequence-to-Sequence Learning as Beam-Search Optimization | In order to minimize this loss, we need to specify a procedure for constructing candidate sequences ŷ(k)1:t at each time step t so that we find margin violations. We follow LaSO (rather than early-update 2; see Section 2) and build candidates in a recursive manner. If there was no margin violation at t−1, then St is constructed using a standard beam search update. If there was a margin violation, St is constructed as the K best sequences assuming the gold history y1:t−1 through time-step t−1. Formally, assume the function succ maps a sequence w1:t−1 ∈ V^{t−1} to the set of all valid sequences of length t that can be formed by appending to it a valid word w ∈ V. In the simplest, unconstrained case, we will have succ(w1:t−1) = {w1:t−1, w | w ∈ V}. As an important aside, note that for some problems it may be preferable to define a succ function which imposes hard constraints on successor sequences. For instance, if we would like to use seq2seq models for parsing (by emitting a constituency or dependency structure encoded into a sequence in some way), we will have hard constraints on the sequences the model can output, namely, that they represent valid parses. While hard constraints such as these would be difficult to add to standard seq2seq at training time, in our framework they can naturally be added to the succ function, allowing us to train with hard constraints; we experiment along these lines in Section 5.3, where we refer to a model trained with constrained beam search as ConBSO. Having defined an appropriate succ function, we specify the candidate set as: St = topK(succ(y1:t−1)) if there was a violation at t−1, and St = topK(∪_{k=1}^{K} succ(ŷ(k)1:t−1)) otherwise, where we have a margin violation at t−1 iff f(yt−1, ht−2) < f(ŷ(K)t− | 1606.02960#14 | 1606.02960#16 | 1606.02960 | [
"1604.08633"
] |
1606.02960#16 | Sequence-to-Sequence Learning as Beam-Search Optimization | 1, ĥ(K)t−2) + 1, and where topK considers the scores given by f. This search procedure is illustrated in the top portion of Figure 1. In the forward pass of our training algorithm, shown as the first part of Algorithm 1, we run this version of beam search and collect all sequences and their hidden states that lead to losses. 2We found that training with early-update rather than (delayed) LaSO did not work well, even after pre-training. Given the success of early-update in many NLP tasks this was somewhat surprising. We leave this question to future work. | 1606.02960#15 | 1606.02960#17 | 1606.02960 | [
"1604.08633"
] |
1606.02960#17 | Sequence-to-Sequence Learning as Beam-Search Optimization | [Figure 1 (top): a diagram of beam-search prefixes over candidate words, not reproduced here.] Figure 1: Top: possible ŷ(k)1:t formed in training with a beam of size K = 3 and with gold sequence y1:6 = " | 1606.02960#16 | 1606.02960#18 | 1606.02960 | [
"1604.08633"
] |
1606.02960#18 | Sequence-to-Sequence Learning as Beam-Search Optimization | a red dog runs quickly today". The gold sequence is highlighted in yellow, and the predicted prefixes involved in margin violations (at t = 4 and t = 6) are in gray. Note that time-step T = 6 uses a different loss criterion. Bottom: prefixes that actually participate in the loss, arranged to illustrate the back-propagation process. # 4.3 Backward: Merge Sequences Once we have collected margin violations we can run backpropagation to compute parameter updates. Assume a margin violation occurs at time-step t between the predicted history ŷ | 1606.02960#17 | 1606.02960#19 | 1606.02960 | [
"1604.08633"
] |
1606.02960#19 | Sequence-to-Sequence Learning as Beam-Search Optimization | (K)1:t and the gold history y1:t. As in standard seq2seq training we must back-propagate this error through the gold history; however, unlike seq2seq we also have a gradient for the wrongly predicted history. Recall that to back-propagate errors through an RNN we run a recursive backward procedure, denoted below by BRNN, at each time-step t, which accumulates the gradients of next-step and future losses with respect to ht. | 1606.02960#18 | 1606.02960#20 | 1606.02960 | [
"1604.08633"
] |
1606.02960#20 | Sequence-to-Sequence Learning as Beam-Search Optimization | We have: ∇ht L ← BRNN(∇ht Lt+1, ∇ht+1 L), where Lt+1 is the loss at step t + 1, deriving, for instance, from the score f(yt+1, ht). Running this BRNN procedure from t = T − 1 to t = 0 is known as back-propagation through time (BPTT). In determining the total computational cost of back-propagation here, first note that in the worst case there is one violation at each time-step, which leads to T independent, incorrect sequences. Since we need to call BRNN O(T) times for each sequence, a naive strategy of running BPTT for each incorrect sequence would lead to an O(T^2) backward pass, rather than the O(T) time required for the standard seq2seq approach. Fortunately, our combination of search-strategy and loss make it possible to efficiently share BRNN operations. This shared structure comes naturally from the LaSO update, which resets the beam in a convenient way. We informally illustrate the process in Figure 1. The top of the diagram shows a possible sequence of ŷ(k)1:t formed during search with a beam of size 3 for the target sequence y = " | 1606.02960#19 | 1606.02960#21 | 1606.02960 | [
"1604.08633"
] |
1606.02960#21 | Sequence-to-Sequence Learning as Beam-Search Optimization | a red dog runs quickly today." When the gold sequence falls off the beam at t = 4, search resumes with S5 = succ(y1:4), and so all subsequent predicted sequences have y1:4 as a prefix and are thus functions of h4. Moreover, because our loss function only involves the scores of the gold prefix and the violating prefix, we end up with the relatively simple computation tree shown at the bottom of Figure 1. It is evident that we can backpropagate in a single pass, accumulating gradients from sequences that diverge from the gold at the time-step that precedes their divergence. The second half of Algorithm 1 shows this explicitly for a single sequence, though it is straightforward to extend the algorithm to operate in batch.3 # 5 Data and Methods We run experiments on three different tasks, comparing our approach to the seq2seq baseline, and to other relevant baselines. # 5.1 Model While the method we describe applies to seq2seq RNNs in general, for all experiments we use the global attention model of Luong et al. (2015), which consists of an LSTM (Hochreiter and Schmidhuber, 1997) encoder and an LSTM decoder with a global attention model, as both the baseline seq2seq model (i.e., as the model that computes the g in Section 3) and as the model that computes our sequence-scores f(wt, ht− | 1606.02960#20 | 1606.02960#22 | 1606.02960 | [
"1604.08633"
] |
1606.02960#22 | Sequence-to-Sequence Learning as Beam-Search Optimization | 1, x). As in Luong et al. (2015), we also use "input feeding," which involves feeding the attention distribution from the previous time-step into the decoder at the current step. This model architecture has been found to be highly performant for neural machine translation and other seq2seq tasks. 3We also note that because we do not update the parameters until after the T'th search step, our training procedure differs slightly from LaSO (which is online), and in this aspect is essentially equivalent to the "delayed LaSO update" | 1606.02960#21 | 1606.02960#23 | 1606.02960 | [
"1604.08633"
] |
1606.02960#23 | Sequence-to-Sequence Learning as Beam-Search Optimization | of Björkelund and Kuhn (2014). Algorithm 1 Seq2seq Beam-Search Optimization 1: procedure BSO(x, Ktr, succ) 2: /*FORWARD*/ 3: Init empty storage ŷ1:T and ĥ1:T; init S1 4: r ← 0; violations ← {0} 5: for t = 1, . . . , T do 6: K ← Ktr if t ≠ T else argmax_k f(ŷ(k)t, ĥ(k)t−1) 7: if f(yt, ht−1) < f(ŷ(K)t, ĥ(K)t−1) + 1 then 8: ĥr:t−1 ← ĥ(K)r:t−1 9: ŷr+1:t ← ŷ(K)r+1:t 10: Add t to violations 11: r ← t 12: St+1 ← topK(succ(y1:t)) 13: else 14: St+1 ← topK(∪_k succ(ŷ(k)1:t)) 15: /*BACKWARD*/ 16: grad_hT ← 0; grad_ĥT ← 0 17: for t = T − 1, . . . , 1 do 18: grad_ht ← BRNN(∇ht Lt+1, grad_ht+1) 19: grad_ĥt ← BRNN(∇ĥt Lt+1, grad_ĥt+1) 20: if t − 1 ∈ violations then 21: grad_ht−1 ← grad_ht−1 + grad_ĥt−1 22: grad_ĥt−1 ← 0 | 1606.02960#22 | 1606.02960#24 | 1606.02960 | [
"1604.08633"
] |
1606.02960#24 | Sequence-to-Sequence Learning as Beam-Search Optimization | To distinguish the models we refer to our system as BSO (beam search optimization) and to the baseline as seq2seq. When we apply constrained training (as discussed in Section 4.2), we refer to the model as ConBSO. In providing results we also distinguish between the beam size Ktr with which the model is trained, and the beam size Kte which is used at test-time. In general, if we plan on evaluating with a beam of size Kte it makes sense to train with a beam of size Ktr = Kte + 1, since our objective requires the gold sequence to be scored higher than the last sequence on the beam. | 1606.02960#23 | 1606.02960#25 | 1606.02960 | [
"1604.08633"
] |
1606.02960#25 | Sequence-to-Sequence Learning as Beam-Search Optimization | # 5.2 Methodology Here we detail additional techniques we found necessary to ensure the model learned effectively. First, we found that the model failed to learn when trained from a random initialization.4 We therefore found it necessary to pre-train the model using a standard, word-level cross-entropy loss as described in Sec- 4This may be because there is relatively little signal in the sparse, sequence-level gradient, but this point requires further investigation. | 1606.02960#24 | 1606.02960#26 | 1606.02960 | [
"1604.08633"
] |
1606.02960#26 | Sequence-to-Sequence Learning as Beam-Search Optimization | tion 3. The necessity of pre-training in this instance is consistent with the findings of other authors who train non-local neural models (Kingsbury, 2009; Sak et al., 2014; Andor et al., 2016; Ranzato et al., 2016).5 Similarly, it is clear that the smaller the beam used in training is, the less room the model has to make erroneous predictions without running afoul of the margin loss. Accordingly, we also found it useful to use a "curriculum beam" strategy in training, whereby the size of the beam is increased gradually during training. In particular, given a desired training beam size Ktr, we began training with a beam of size 2, and increased it by 1 every 2 epochs until reaching Ktr. Finally, it has been established that dropout (Srivastava et al., 2014) regularization improves the performance of LSTMs (Pham et al., 2014; Zaremba et al., 2014), and in our experiments we run beam search under dropout.6 For all experiments, we trained both seq2seq and BSO models with mini-batch Adagrad (Duchi et al., 2011) (using batches of size 64), and we renormalized all gradients so they did not exceed 5 before updating parameters. We did not extensively tune learning-rates, but we found initial rates of 0.02 for the encoder and decoder LSTMs, and a rate of 0.1 or 0.2 for the final linear layer (i.e., the layer tasked with making word-predictions at each time-step) to work well across all the tasks we considered. Code implementing the experiments described below can be found at https://github.com/harvardnlp/BSO.7 | 1606.02960#25 | 1606.02960#27 | 1606.02960 | [
"1604.08633"
] |
1606.02960#27 | Sequence-to-Sequence Learning as Beam-Search Optimization | # 5.3 Tasks and Results Our experiments are primarily intended to evaluate the effectiveness of beam search optimization over standard seq2seq training. As such, we run experiments with the same model across three very dif- 5Andor et al. (2016) found, however, that pre-training only increased convergence-speed, but was not necessary for obtaining good results. 6However, it is important to ensure that the same mask applied at each time-step of the forward search is also applied at the corresponding step of the backward pass. We accomplish this by pre-computing masks for each time-step, and sharing them between the partial sequence LSTMs. | 1606.02960#26 | 1606.02960#28 | 1606.02960 | [
"1604.08633"
] |
1606.02960#28 | Sequence-to-Sequence Learning as Beam-Search Optimization | 7Our code is based on Yoon Kim's seq2seq code, https://github.com/harvardnlp/seq2seq-attn. ferent problems: word ordering, dependency parsing, and machine translation. While we do not include all the features and extensions necessary to reach state-of-the-art performance, even the baseline seq2seq model is generally quite performant. Word Ordering The task of correctly ordering the words in a shuffled sentence has recently gained some attention as a way to test the (syntactic) capabilities of text-generation systems (Zhang and Clark, 2011; Zhang and Clark, 2015; Liu et al., 2015; Schmaltz et al., 2016). We cast this task as a seq2seq problem by viewing a shuffled sentence as a source sentence, and the correctly ordered sentence as the target. While word ordering is a somewhat synthetic task, it has two interesting properties for our purposes. First, it is a task which plausibly requires search (due to the exponentially many possible orderings), and, second, there is a clear hard constraint on output sequences, namely, that they be a permutation of the source sequence. For both the baseline and BSO models we enforce this constraint at test-time. However, we also experiment with constraining the BSO model during training, as described in Section 4.2, by defi | 1606.02960#27 | 1606.02960#29 | 1606.02960 | [
"1604.08633"
] |
1606.02960#29 | Sequence-to-Sequence Learning as Beam-Search Optimization | ning the succ function to only allow successor sequences containing un-used words in the source sentence. For experiments, we use the same PTB dataset (with the standard training, development, and test splits) and evaluation procedure as in Zhang and Clark (2015) and later work, with performance reported in terms of BLEU score with the correctly ordered sentences. For all word-ordering experiments we use 2-layer encoder and decoder LSTMs, each with 256 hidden units, and dropout with a rate of 0.2 between LSTM layers. We use simple 0/1 costs in defi | 1606.02960#28 | 1606.02960#30 | 1606.02960 | [
"1604.08633"
]
1606.02960#30 | Sequence-to-Sequence Learning as Beam-Search Optimization | ning the Δ function. We show our test-set results in Table 1. We see that on this task there is a large improvement at each beam size from switching to BSO, and a further improvement from using the constrained model. Inspired by a similar analysis in Daumé III and Marcu (2005), we further examine the relationship between Ktr and Kte when training with ConBSO in Table 2. We see that larger Ktr hurt greedy inference, but that results continue to improve, at least initially, when using a Kte that is (somewhat) bigger than Ktr − 1. Word Ordering (BLEU), columns Kte = 1, Kte = 5, Kte = 10: seq2seq 25.2, 29.8, 31.0; BSO 28.0, 33.2, 34.3; ConBSO 28.6, 34.3, 34.5; LSTM-LM 15.4, -, 26.8. Table 1: Word ordering. BLEU scores of seq2seq, BSO, constrained BSO, and a vanilla LSTM language model (from Schmaltz et al., 2016). | 1606.02960#29 | 1606.02960#31 | 1606.02960 | [
"1604.08633"
]